The rise of mobile technology has flipped the gaming industry on its head. What used to be the accepted method of producing and selling games was called into question by a generation of people who agonize over purchasing a $0.99 app, but have no problem spending $4 on a cup of designer coffee. Quite quickly, the model of “Free to Play” (abbreviated “F2P”, or sometimes “FTP”, which is itself confusing, since FTP already has other meanings) became the accepted model for games, and has become (in my opinion) a sinister foreshadowing of a bland gaming future in which the experiences people actually look for in their games are left out in favor of clever monetization schemes and endless, repetitive gameplay.
To be fair, I don’t think that paying for items in games through “microtransactions” is a bad idea, as long as it’s done right. Prior to the advent of F2P gaming, the entirety of a game’s reach could be contained within its “walls”, so to speak. That is, a game existed in its own space, with its own rules – its own little universe. Players would inhabit that space, entering it and learning its rules, conventions, and so on. The entirety of the game existed inside itself. Even World of Warcraft, with its subscription model, could fit this definition, since players paid their subscription fees for an expansive, constantly evolving game world. Once the dues were paid, the door was open, and the only limit to what a player could achieve was, effectively, time. Given enough time, a player could ostensibly find whatever items he or she desired within the game world.
Then Nexon, an online gaming company, came along with a title called “Maple Story”, which was an adorable little side-scrolling RPG that could be digested in bite-sized chunks or marathon gaming sessions. Maple Story was different from other MMORPGs since it was free, and offered players the ability to pay for cosmetic changes to their characters. Players would “rent” costumes for their in-game characters, which would then get layered over their equipment, so that a player could make his or her in-game character look however he or she wanted. People did this all the time, and it was a way for players to feel more connected to their characters, since they could customize every aspect of their character’s look. Additionally, since the costumes were rentals, Nexon would introduce new looks all the time, often coinciding with events in popular culture.
As these Massively-Multiplayer Online Games (MMOGs) gained in popularity, particularly successful players would end up selling their items or entire accounts on eBay or other community-based sale sites. People were paying real money for virtual goods, and the game companies started paying attention. They wanted a cut of these sweet, sweet dollars that were getting sloshed around, and, thus, the F2P model was born.
See, someone realized that if players were willing to pay for virtual goods with real currency, they, like Apple, should simply make that process easier. Why go to eBay and engage in a potentially shady and illegal transaction when you can just as easily drop $5 on a pack of gems and get some epic loot? Instant gratification, right? Except that it’s not that easy. I recently read a piece that neither condemned nor endorsed the idea of microtransactions in F2P games; the author suggested that microtransactions are a way for a person to pay what he or she feels the game is worth. There are several problems with that notion:
- First and foremost, the game is no longer the product. Players are the product. The game is not designed to be “good”, or to have a cohesive universe, or to tell a story. The game is designed to get players to spend money through carefully-crafted game mechanics that exploit psychological states.
- The price of a game is potentially infinite. Purchasing power-ups or items that perform a task or help a player’s character doesn’t add value to the game; it simply allows the player to play the game. The in-app purchases don’t make the game better; they simply make the game whole. Constantly having to pay to play a game makes the cost of the game theoretically infinite.
- Ultimately, these games are simply not very good. One could argue that a $20/month World of Warcraft subscription costs far more than most players will ever spend on most F2P games, but the difference is that World of Warcraft was a well-crafted, intentional game with a great deal of care put into its design and game mechanics. Most F2P games have little to no recognizable story, little character development, and shallow mechanics.¹ This renders the argument that “a person will spend what they feel the game is worth” moot, since many of these games use artificial difficulty multipliers or other mechanics that break from established gameplay conventions to make the game artificially difficult or impossible to complete. Thus, it doesn’t matter how much the player feels the game is worth, since he or she cannot assess the value of the entire game; the entire game is not exposed to the player unless he or she pays an unknown amount of money.
- Time is not money. While some developers maintain the illusion that a player could ostensibly spend a large amount of time accruing whatever in-game currency allows him or her to progress to the next phase of gameplay, 40 hours of gameplay in a highly-regarded, well-crafted game world is not the same as 40 hours of gameplay in a F2P title. The former is characterized by new experiences, successes as well as failures, and (where applicable) the revelation of more of a (hopefully) well-crafted story. The latter is typically characterized by repetitive gameplay and monotonous “grinding” in order to accrue the necessary capital to advance. A player should not be subjected to a below-average gameplay experience simply because he or she does not want to pay real currency to advance.
If we project out along this trajectory, we can see that the game worlds that developers are creating are becoming increasingly devoid of meaning. Why should a player care about any one specific game world? All they have to do is drop $100 on a pack of some form of in-game currency to acquire an item, which they can use to defeat a difficult enemy or progress past a difficult puzzle. The actual “game” becomes meaningless because the “win” state becomes increasingly defined by how much money a person has in his or her bank account. Furthermore, the game worlds that developers and designers create become less about art and vision, and more about simply driving players to microtransactions. Character archetypes become shallower, player statistics are tracked so that the game can adjust difficulty dynamically in order to create the aforementioned artificial difficulty spikes, and game challenges no longer represent tests of mental acuity, reasoning, or reflexes, but rather a combination of advertising and outright paying to win.
The future looks bleak for games.
¹ While comparisons have been drawn to the quarter-fueled video arcades of the ’80s and ’90s, this idea ultimately falls flat because those machines could still be purchased and played in their entirety if a person chose to do so. Additionally, the arcade was often the only place a person could play many of these games; the arcade owner was effectively “renting” his or her machine to the player. When a player purchases a game at any price (including $0), he or she should be able to play the entire game.
Back when the iPad mini debuted, many people criticized the device’s $329 price tag (for the base model), saying that it was too expensive compared to other tablets of similar size that were on the market. I thought the same thing, until I pulled back a bit and looked at the mini from a longer-term perspective.
See, most companies can get along fairly decently creating products for the right now, adjusting their products, pricing, and features according to market whims. It’s a way of interfacing with the market that has always seemed reactionary to me: look at what the public is salivating over and give it to them, riding the wave until the notoriously fickle populace decides it wants something else. Alternatively, you can just create a smattering of different products with arbitrary and marginal variations, designed to cater to fractionally different subsets of popular culture, and hope that people gravitate toward one product or another, or at least spend enough money to offset the cost of developing, manufacturing, and marketing who knows how many different iterations of a given product.
For Apple, however, the view is longer. The timeline extends 5–10 years out, and is driven internally by the desire to deliver really incredible products into people’s hands. That places the locus of control squarely inside the company, instead of vesting that power in the whims of a population that worships reality TV and Hollywood drama. As such, Apple looks at supply chains and forecasts its production costs far further ahead than most companies do, and is thus able to deliver better products over time than its competitors because it has had the wherewithal to cultivate and maintain a stable, consistent base (referring simultaneously to supply, production, and consumption). Thus, while the $329 price point may not have made sense for the iPad mini given the original permutation of components, an iPad mini with a Retina screen, which will undoubtedly cost more to manufacture, can still provide Apple with healthy margins, since Apple has already accounted for the decrease in price of Retina displays over time and has been able to invest in battery research to drive the new displays. That, in concert with the other inevitable improvements Apple has made to the hardware and software of its new iOS devices, will allow Apple to manufacture a better product while still maintaining margins and trying to keep investors happy (a notoriously difficult thing to do). From one angle, it’s very difficult to see the justification for that price point. But, given time, it becomes clear that Apple never priced the iPad mini for the market it debuted in; Apple was looking many years down the line.
If that’s what they can do for pricing when looking forward, imagine what they’ll be doing for products.
There has been a long-running narrative in tech writing about the downfall of, or the necessity to bring about the downfall of, cable companies. Recently, I was discussing the merits and drawbacks of cable with a friend of mine, and ended up at an interesting predicament. We agreed that cable was expensive, yes; we also agreed that people who subscribed to the services that companies like Comcast provide walk a thin line between freedom (otherwise known as net neutrality) and tight, restrictive control; but, we ended up agreeing that, based on what we’re getting from the “evil empire” of Comcast, it’s not a bad deal.
The discussion started with HBO Go, a service HBO provides its customers that allows them to stream HBO programming on demand to a large number of connected devices. It’s a great way for members of the same household to watch their favorite shows and movies without having to fight over the TV. The downside is that HBO Go is not sold as a standalone service; it’s part of HBO. While that seems ridiculous, someone recently asked me how much I’d be willing to pay for HBO service if it meant I could watch the shows anywhere. Considering what I was paying for Netflix and Hulu, I said I’d probably be OK with paying $8–10 a month. When I got home, I checked the price of HBO, and, to my surprise, it was only $10 a month (I had assumed it would be much higher, since it’s a premium cable channel, and cable is controlled by “evil” media companies). It was then that I considered what I was getting for my monthly offering at the Altar of Comcast.
For my monthly fee, I get a whole heaping pantload of data, combined with a paltry offering of standard-definition TV channels. The most common narrative I hear is that people feel like they’re bullied into paying for channels they don’t need in order to get the ones they want. “Just let me pay for the channels I want!” everyone seems to say. And I agree with them! Why should someone be charged for something they don’t use and didn’t really ask for? After all, when you walk into a store and pick out the items you want, the store clerk doesn’t start shoveling unwanted merchandise into your basket. The same should go for programming and media, right?
Well, we’re actually a lot closer than we think, in my opinion. In addition to the base fee for my cable, I pay $8 per month each for Netflix and Hulu, which allows me to access a great deal of TV programming and movies for just under $20/month. I can add HBO to my cable package and get HBO Go for “free” on my iPad and Apple TV, which accesses the content through…the same cable connection that I’m already paying for. So it’s…not really free, is it? In fact, it’s almost like I’m being charged double…and yes, that’s frustrating. But the same parallel can be drawn with merchandise. Sometimes you don’t want the “free” stuff that gets bundled with your popcorn, or the tchotchke that is shrink-wrapped to your deodorant. In this case, I think of the stuff I don’t use as “bonus” features.
The structure I painted above makes sense when you look at it vis-à-vis the type of relationship most people have with their cell phone carriers. Really, it’s very much the same – a person pays a monthly fee to use that carrier’s network and pipe data to his or her device, with the monthly data allowance varying by user. Any other dues and subscriptions that the person pays are completely separate from the carrier. To carry the analogy even further, look at what’s happened to the precious airtime minutes and text messaging packages – they’ve all been ditched so that carriers can charge users for what they’re really hungry for: data. The airtime minutes and texting are almost throw-ins now.
The example of the carriers, however, should also serve as a cautionary tale. While most people are used to the fee structure now, there was a time when tiered data plans were met with hatred and anger because people realized they were being taken advantage of. Then, when the mighty marketing machines that drive these carriers went to work convincing people that they would save some money every month if they forfeited their right to an unlimited amount of data, people caved left and right…and they did this just as streaming media was entering the spotlight. It was the perfect one-two blow to the American consumer.
Now, the fear is that terrestrial data providers will repeat the same behavior their wireless brethren got away with: that they’ll introduce tiered data plans, bandwidth caps, and heavy throttling to ensure that they can squeeze every last nickel out of the American populace. This all assumes, of course, that net neutrality laws remain in place. While net neutrality laws are supposed to protect the Internet and the free flow of information around the globe, I have a feeling that terrestrial data providers will use net neutrality against consumers, arguing that they can provide open pipes so long as consumers pay for the privilege.
At any rate, the current situation of à la carte channels and pricing is almost kinda not-really-but-sorta here…if you know where to look.
Mobile is the future. No one doubts that, and those that do are clearly riding their tiny rafts toward the inevitable plummet off the edge of the waterfall.
What is interesting is how these different mobile OS choices are defined (e.g. available apps, number of users, types of users, user engagement, developers, just to name a few), and what those definitions mean for the larger mobile landscape.
Many people argue for the benefits of iOS over Android, and vice versa, but I think the choice most people make to go with one operating system or another isn’t driven by some core ethos or belief about how a mobile operating system should behave; it’s driven by far simpler forces – popular culture, how much money is in their wallet, and what feels right.
When it comes to the tablet space, iOS is the clear winner, having scooped up both the lion’s share of the market as well as customer satisfaction. I find it somewhat painful to watch owners of most other tablet devices struggle with basic functionality; I’m left with the feeling that someone, somewhere has done them a disservice by recommending something that did not fit their needs, having pushed some other device into their hands instead.
Where things start to blur, however, is when people start looking to devices outside of the mobile device market, things like connected TVs, appliances, and other gadgets. A person who isn’t fond of Apple can, ostensibly, purchase a Roku box for streaming content to their TV, but how well does that really integrate with a person’s home theater setup if they have iOS devices? How about Android? What about Linux? The trick here is that there are some devices that work well together, and some that don’t. Did you happen to buy one of those early Google TVs? How’s that working out for you? Sorry there aren’t more of them out there, turns out people didn’t like them very much. Sad.
The Apple TV is an iOS device, however, and I think it fills a key role in Apple’s connected living room idea. I’ve talked about this in past posts, as well, but something that many people don’t take into account is the fact that the Apple TV runs iOS, but in a form that isn’t immediately recognizable to most people.
Apple has created a chain of interconnected devices which, on their own, may seem unremarkable. Start linking them together, however, and they become far stronger and more capable than they were on their own.
I’ll end with a little story. I spent half an hour on the phone with a man recently, trying to help him compose and reply to an email on his new Android phone. I felt sorry for him. He had never owned a smartphone before, and was having a very difficult time using the device. For whatever reason, data was not enabled on his phone, and he had to find the setting to turn it on before he could actually send the email. He was very frustrated, and it was clear that he wasn’t feeling confident. He had been told that this device was very “user friendly” and that it “just worked”, but his experience demonstrated otherwise. That same night, I had some friends over, many of whom are involved in some sort of music production or performance, or who simply have great taste in music. They were sharing their favorite tracks and videos on my TV, all from their phones, all without having to fiddle with a remote or web browser. They were laughing and talking, all able to discuss and converse without needing to configure anything. They just tapped the AirPlay button and sent the media to the Apple TV. Zero configuration, zero setup.
From where I was sitting, it looked like magic.
Came across this today. Stuff like this continues to amaze me. While I would love to purchase a DSLR, I’d never be able to do it justice. Then something like this comes along, and the aluminum and glass in my pocket is suddenly way more incredible than I thought. What I’m really stoked for? One of the new APIs opens up 60fps recording to 3rd-party apps.
So, Apple has unveiled iOS 7 to much discussion, hand-wringing, and cheers. There are lots of things that I feel that Apple is promising to do right with this release, and a number of things that we will, of course, need to see to believe.
One of the most prevalent activities that iOS users engage in is photo sharing. Apple’s recently-released “every day” video showcases the power of the iPhone as a camera. iPhone users know that their iPhone is probably one of the best cameras they’ve owned, and the millions of pictures snapped daily obviously underscore that.
I was intrigued by the keynote’s handling of photos, namely the application of filters and the introduction of “Shared Photo Streams” into iCloud. To be honest, I felt that this was a feature sorely missing from iOS and iCloud for a long time. The idea of a photo stream curated by a single person is fine if one of your friends happens to be a professional photographer or something like that, but most situations in which people are snapping photos tend to be social, with multiple people desiring to both view and (most likely) contribute to an album of the event. The trick lies in determining the canonical center of the stream. Who “owns” the photo stream? When people contribute to the photo stream, are they adding to a single user’s photos, or are they, in effect, “copying” that stream to their own photo collection and then adding to it, which can then be seen by other parties? Or, are the photos stored on Apple’s servers, where multiple parties “own” photos, can add to the stream, and then define who else “owns” those photos? “Own”, here, is an operative word, since ownership of the photos is tough to nail down, in this case.
I’ve always wondered how something like this would work, but it’s a problem that Apple absolutely has to tackle in order to stay relevant. As people add more and more photos of their lives to their devices, the storage of said photos becomes of paramount importance, followed closely by how people identify and integrate those photos into their identity. What has become increasingly obvious is that people don’t just craft an identity that is tied to a mobile device; they create a digital identity that the mobile device allows them to access. In order for these technologies to be relevant, they have to allow people to share photos and feel comfortable about storing them in a way that is non-destructive and still allows them to reference past events with ease. It’s clear that Apple is now moving towards more meaningful photo sharing, but it has yet to be seen if they can take this idea and use it to deliver the type of interconnectivity that people implicitly ask for.
One of the things that Apple did not address, and something I’ve heard from people who have recently switched away from iOS as their primary mobile platform, is that iOS hamstrings users by not allowing them to easily pass data between apps. While I agree with some parts of this argument, I can see Apple’s stance on inter-app data sharing. The scenario I often hear from heavy Android users is that things like taking notes, or saving PDFs from one app to another, are easier on Android. I don’t agree, because I do those very same things every single day on iOS and, ever since Apple started allowing custom URL schemes to pass data from one app to another, have never had an issue with it. As such, I think I understand their stance – Android allows a freer exchange of data between apps using a more-or-less centralized file system.
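The URL-scheme mechanism is simple: the sending app builds a URL in the receiving app’s custom scheme, with the payload percent-encoded into query parameters, and the OS hands the URL to whichever app registered that scheme. Here’s a minimal sketch of the encoding and decoding in Python; the `notesapp` scheme and its parameters are invented for illustration, not any real app’s API:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Sending side: pack the data into the target app's custom-scheme URL.
# "notesapp" and its parameters are hypothetical, for illustration only.
payload = {"title": "Meeting notes", "body": "Ship the draft by Friday"}
url = "notesapp://create?" + urlencode(payload)

# Receiving side: the target app parses the URL it was handed.
parsed = urlparse(url)
params = {k: v[0] for k, v in parse_qs(parsed.query).items()}

print(parsed.scheme)    # notesapp
print(params["title"])  # Meeting notes
```

The limitation the Android crowd points at is visible even in this sketch: the data travels as a one-shot, string-encoded message between two apps, not as a file both apps can see.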
One thing that we saw in the WWDC keynote, however, is the introduction of a new tagging feature in Mac OS 10.9, which, I believe, is going to be Apple’s eventual answer to the file system. Instead of files being stored on the device, in a folder, they’ll be stored in iCloud, accessible as clusters of files related to a specific idea. This is finally the intelligent organization that Palm’s WebOS got right. Ultimately, people don’t really organize their data by app, they organize it by idea or topic, which is a far cry from having data “live” in an app.
I think the ultimate goal is to enable a user to cluster files together around a central theme or project that they may be working on, and make that cluster available as an item in an app that keeps track of and syncs tags across platforms. Ostensibly, the user could open the app, see all of their tag groups, and (possibly using a Photos-app-like pinch-to-spread gesture) see all of the files in that tag group. Tapping on a file would open a list of corresponding apps that are capable of handling that type of file. Interestingly enough, this may also allow Apple to put a little more control in the user’s hands by letting the user pick which app would be the default handler of that file type. In this manner, people don’t necessarily have to know where to look for their files; they need only open the “Tags” app, find the group they want, and tap the file they want to work on in that group. The OS then passes that file to whatever helper application the user has selected as default, and they’re off to the races. A system like this wouldn’t satisfy every Android lover’s desire for a true file system, but Apple wouldn’t need it to – the average user would see this as a new feature, and customers on the fence may see this as a tipping point.
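To make the idea concrete, here’s a minimal sketch of the data model such a “Tags” app might sit on top of – a many-to-many index where a tag resolves to its cluster of files and a file knows its tags. Everything here (the class, the file names, the tag names) is my own invention, not anything Apple has announced:

```python
from collections import defaultdict

class TagIndex:
    """Hypothetical many-to-many index of files and tags."""

    def __init__(self):
        self._files_by_tag = defaultdict(set)
        self._tags_by_file = defaultdict(set)

    def tag(self, filename, *tags):
        # A file can carry any number of tags.
        for t in tags:
            self._files_by_tag[t].add(filename)
            self._tags_by_file[filename].add(t)

    def cluster(self, tag):
        # The "pinch to spread" view: every file in one tag group.
        return sorted(self._files_by_tag[tag])

    def tags_for(self, filename):
        return sorted(self._tags_by_file[filename])

index = TagIndex()
index.tag("budget.numbers", "Q3 Project")
index.tag("mockup.png", "Q3 Project", "Design")
index.tag("notes.txt", "Design")

print(index.cluster("Q3 Project"))   # ['budget.numbers', 'mockup.png']
print(index.tags_for("mockup.png"))  # ['Design', 'Q3 Project']
```

The point of the sketch is that no file “lives” anywhere in particular: the same file appears in every cluster it’s tagged into, which is exactly the organize-by-idea model described above.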
This one is weird to me, but I like the way Apple has addressed it in the update, with the WebOS-style “cards” interface heavily informing this component of the OS. The ability to see live updates of each app, or at least the current status of each open application as the user left it, is another way Apple brings parity with Android, but does it better. I’ve seen Android’s task-switching waterfall, and it has always felt too sterile to be enjoyable to use, although I believe that’s more a fault of the OS design language as a whole than of that specific part of the interface.
There have been a not insignificant number of words spoken about the changes Apple has made to the look of the stock app icons in iOS 7. To be honest, I feel like this whole discussion is completely moot. App icons are incredibly important, to be sure – they are the way a user identifies your application in the sea of other apps on their phone – but they are somewhat arbitrary. They need to be well-designed, but there is a certain “minimum effective dose” that allows most people to identify the app they’re looking for and associate it with the task they’re looking to accomplish.
When Apple made the choice to redesign the stock app icons, the folks behind Apple’s design choices exposed their design process as well as the grid-based layout system that informed the icon designs. There were comments made by graphic designers about how Apple’s layout choices were half-baked or wrong, and other comments that discussed how the color choices were catering to a younger generation, or to the aesthetic biases of the cultures in new and emerging markets. Regardless of the reason behind the choices, I can’t help but relegate all of this commentary to the trash heap for the same simple reason: all of these comments are about a subjective experience. Of course Jony Ive wants to create an experience that is beautiful, familiar, approachable, friendly, and functional…but there are so many ways to accomplish this, and all of the commentary comes from a single data point in the universe. Even assuming that all of these designers and amateur critics were able to ascertain some objective truth about these designs that was universally applicable, they all have differing opinions – some of them conflicting – and it must thus follow that they’re either all right, or all wrong. I’m clearly in the latter camp. People are going to take a look at the icons and freak out because they’re different, and then everything will go back to normal and everything will be fine because, in truth, app icons only matter as pointers to something a user wants to accomplish. Once users draw new associations in their minds, they’ll be fine.
The Little Things
There are, of course, things that Apple hasn’t mentioned or brought up, most likely because they simply didn’t have enough time to do so, but I feel like I should mention them here for the sake of completeness.
While I know that not all of these things will be addressed (or even should be addressed) because of the focus that Apple is trying to maintain with iOS, there are some things that venture into the grey area that exists between the worlds of Mac OS and iOS. The first of these is the way the OS (and many apps) handle external keyboards. Safari, for instance, is able to handle a “Tab” keystroke, but does not recognize Command+L to put the cursor in the address bar, or Command+W to close an open tab. These aren’t necessarily “shortcomings” of the OS, but neither do they enhance the user experience. I’ve never thought to myself, “Boy, am I sure glad they left out those keystrokes! My life is so much easier!” With this type of behavior, I’m not sure if the omission is intentional or not. Apple is a very intentional company, but something like this feels like an oversight as opposed to a deliberate design decision.
Naturally, when people see new OS announcements from Apple, they assume that new hardware is going to follow closely behind. Something that I heard recently was that Apple’s new design, while beautiful on all current iOS devices, absolutely sings and looks right at home on the new devices that Apple has lined up for the fall. What these devices are is anyone’s guess, but I don’t think anyone would lose betting on a new iPhone. New iPad minis, iPads, and possibly iPod Touch units may also be in the works, but it isn’t completely clear yet exactly how these things will take shape, and what sort of changes we can expect. I love looking forward, but I don’t “do” rumors, so I’m not going to waste any time on speculating about what Apple is working on.
Ultimately, the new iOS version that Apple has introduced to the world looks great and, based on what I’ve heard, feels amazing. I have no desire to start ripping on an OS that’s in beta, nor do I have the desire to laud it. While it’s exciting to see a refresh to the world’s most important mobile OS, the proof will be in the pudding once it’s been finalized and released.
This is the kind of garbage that makes no sense to me anymore. There’s no point in packing more pixels past the point that your eye can resolve. Maybe a few more than 300 pixels per inch will make a small difference, but why is this a selling point on a mobile display? It makes no sense. It’s like encoding music at ridiculously high bitrates: most people can’t tell the difference. In fact, the overwhelming majority of people can’t tell the difference, and for those who can, there is specialized equipment to let them indulge their senses. Don’t try to sell this war of pixels as a win for anyone.
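The 300 ppi figure isn’t arbitrary; it falls out of a little trigonometry. A rough rule of thumb holds that the eye resolves about one arcminute of visual angle, and a phone is held around 12 inches away – both round, back-of-the-envelope assumptions, not measurements. A quick sketch of the arithmetic:

```python
import math

# Rough assumptions: the eye resolves ~1 arcminute of visual angle,
# and a phone is held about 12 inches from the eye.
ARCMINUTE = 1 / 60 * math.pi / 180   # one arcminute, in radians
viewing_distance_in = 12.0

# Smallest pixel pitch a 1-arcminute eye can distinguish at that distance.
resolvable_pitch_in = 2 * viewing_distance_in * math.tan(ARCMINUTE / 2)
resolvable_ppi = 1 / resolvable_pitch_in

print(round(resolvable_ppi))  # 286
```

Under those assumptions, anything much past ~286 ppi is below the threshold of what the eye can separate at arm’s length, which is exactly why piling on more pixels makes a poor selling point.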