So let’s talk about the Watch’s potential to become the center of a personal universe. Apple recently introduced “Continuity,” a technology that lets its products talk to one another. In reality, Continuity is a bundle of different communication technologies working together under one umbrella to provide a seamless experience across devices.
Let’s imagine for a moment that a person is using their phone and then moves to their computer. Continuity allows this person to pick up writing an email, for instance, in exactly the same place they left off on the phone. The same goes for other Continuity-enabled apps. With the recent introduction of the Apple Watch, however, I believe Continuity has a much larger role to play in Apple’s future plans.
The Apple Watch (technically, WATCH), as recently introduced, requires an iPhone in order to function. As Ben Thompson and James Allworth discussed recently on their Exponent podcast, however, it doesn’t take much imagination to envision a product that does not require an iPhone to function: a stand-alone, wrist-worn computer that becomes the center of a person’s digital life. With intelligence provided by Siri, back-end functionality provided by iCloud, and Continuity-enabled interconnectivity, it becomes very easy to see how Apple can leverage its current and emerging technologies to great effect.
The scenario goes something like this: a person is walking through town and receives a message. In the current state of things, the message is “received” on their iPhone but displayed on both iPhone and Watch, thanks to the persistent connection between the two. Currently, this connection is an explicit part of the Watch’s functionality. Again, with very little imagination, it would not be hard to envision a version of this product that displays the same message not because of an explicit connection between the two devices, but because the Watch itself has its own suite of network connections. Once this version of the future becomes reality, we could also envision many of these interactions moving from the pocket device to the wrist device, with input and interface facilitated by Siri. Then, if a person would like to delve more deeply into a specific task, or if the task requires a different type of input or interface (writing a document comes to mind), they can seamlessly transition to another device with a different interaction model using the Continuity technologies that Apple has developed.
The “wrist-worn” computer then becomes a reality in a way that other companies simply haven’t been able to grasp yet. Up to this point, the wearable category has largely been occupied by “companion” devices that serve as auxiliary displays for notifications and offer limited functionality beyond being a window to a person’s primary computing device (the computer in their pocket). What Apple has developed, almost invisibly, is a device that it sees as the future center of a person’s life.
Beyond being an incredible vision of the future, this new device category is going to enable people to be more human. I believe that Tim Cook envisions a future in which people are able to live their lives more fully through the assistive capabilities of wearable technology. Currently, communication and machine interaction are achieved despite interfaces that are abstract and opaque.
The future that Tim Cook’s Apple is fostering is one that allows us to be better humans because of it.
Benedict Evans once talked about a sort of “openness Tourette’s Syndrome” that occurs whenever people discuss Apple’s platforms vs. competitors. Basically, it goes like this: someone mentions how good an Apple platform is, and then someone else says, “Yeah, but Android is open.”
There’s a pleasant sort of fiction promised by “open” that simply isn’t a viable reality for most people. I’ve heard salespeople use it in retail stores, and I’ve heard IT professionals use it when recommending Android to their clients. This demonstrates a fundamental misunderstanding of the various things “open” can mean for Android. The “open” that people typically have in mind is actually conflated with “extensible,” or perhaps “has relaxed security,” which are very different things from the “open” that Android was conceived with.
Android’s initial form as a project was open-source, and the Android of today is still technically “open-source” but, due to its reliance on Google’s services and cloud features, the current version of Android that comes loaded on many phones is not nearly as “open” as many would have you believe. Would you like to use another mapping service? How about something other than Google Now? Can you use the features of the home screen without being tied to Google services? Sadly, no.
That doesn’t mean that one couldn’t install a build of the Android Open Source Project instead, but it means that the marquee features of the operating system, the things that Google and Android fans like to wave in the air, are inherently tied to Google, which makes it very difficult to use non-Google versions of the operating system.
Instead of this word “open”, then, let’s use the word “extensible”, since that more accurately reflects the Android OS’s ability to facilitate communication between apps, and to allow developers to build software that adds functionality to the OS or preexisting apps.
The problem with Android up to this point has been that security has not been (or at least hasn’t appeared to be) a priority for developers or users. I could speculate about the reasons for this behavior (laziness, “Accept” fatigue), and I may be wrong, but from what I’ve seen, Android users are more than willing to download apps and grant them nearly unrestricted access to their mobile device without really thinking through the ramifications.
Apple has avoided this for many years by sandboxing its apps and keeping inter-app communication on the back burner until it developed a way for apps to communicate effectively without sacrificing a user’s privacy or requiring users to grant unnecessary privileges to an app that shouldn’t need them. Naturally, this came at a cost. For years, iOS users have not been able to install third-party keyboards or send information between apps in a way that was “easy” (to be fair, the iOS “Open In…” functionality has allowed users to send documents and files between applications for some time, but it required a degree of savviness that was sometimes lacking).
Now that Apple has introduced the ability for developers to create “Extensions,” however, that gap has very quickly been bridged, and iOS 8 will allow developers to create new ways for their apps to interact. Some may argue that Apple’s approach differs from Google’s, but the end-user result is basically the same: a person will be able to install and use third-party keyboards, send information between apps, and interact more directly with the data in other apps.
What I’m interested in seeing, however, is where the conversation will center from here. For many years, Android users have told me that Android is superior because of its customizability. When I would press these users to explain what “customizability” means, they would often name two things: support for third-party keyboards and home-screen widgets.
These two “features” of the operating system, in my opinion, are not very important, and would often open a user’s device up to instability and/or unnecessary resource usage. I have used Android devices, and I have seen the home screen widgets for the apps that I use the most, and there is no version of reality in which the widgets provide a superior experience to using the app. Again, this is my experience, and maybe there are some people who really enjoy looking at two lines of their mail on their home screen underneath a line and a half of their upcoming calendar events, and not really being able to meaningfully interact with either until they open the app anyway.
Third party keyboard support has also perplexed me, but I can understand the utility for people living outside the United States, for whom third party keyboards can offer substantially improved text entry. That being said, none of the Android users that I discussed this with lived outside the United States, so it seems that their argument is a moot point, or at least purely subjective.
Thus, it seems to me that the discussion of Android as an “open” system (again, in the way that most people understand the term “open”) has lost much of its value. Android as an “extensible” operating system has lost much of its value as well (at least as a marketing ploy) in light of the new functionality in iOS 8. How, then, should we be defining “open”?
When we look at the post-PC landscape and see two operating systems that allow their users to interact with their data similarly, enter information into their devices similarly, and let applications built upon their platforms communicate similarly, how should a person decide which device to use? Perhaps the discussion shouldn’t be centered around questions like “Which device lets me tinker with system files?” or “Which device will allow me to inadvertently break things if I wanted to?”, but should really be “Which device is better for humans?”
For quite some time, people have been clamoring for unified iTunes Accounts/Apple IDs. It seems like Apple is taking a step in the right direction with its “Family Sharing” feature announced today at WWDC. Let’s hope they keep walking.
I’m at the Greek Fest at Lake Cook Rd
In line for loukoumades
Well this place is packed with people young and old
Good food dancing
And the famous Mavrodafni wine tasting.
Which I had to taste so now
I think I start singing
That’s all for now
Kind of light headed
I love you and wish you were here
Something that I’m very aware of at all times these days is the idea that, in many ways, we live in the future. Not necessarily the future envisioned by all Sci-Fi writers of the 20th century, but one that combines bits and pieces from many visions and predictions.
As such, I’d like to dedicate one post per week to something I consider absolutely amazing – an example of “the future” that many of us have been given over the years.
Here, then, is the first of such posts.
Clearly, CNN and MSN know how it’s done:
- Include the words “Miley”, “Cyrus”, and “twerk” in a headline.
- Lean back.
- Count your money.
I know it’s the oldest story ever told, but stuff like this always makes me sad.
The rise of mobile technology has flipped the gaming industry on its head. What used to be the accepted method of producing and selling games was called into question by a generation of people who agonize over purchasing a $0.99 app but have no problem spending $4 on a cup of designer coffee. Quite quickly, the model of “Free to Play” (abbreviated “F2P,” or sometimes “FTP,” which is itself confusing, since FTP has other meanings) became the accepted model for games, and has become (in my opinion) a sinister foreshadowing of a bland gaming future in which the experiences that people are actually looking for in their games are left out in favor of clever monetization schemes and endless, repetitive gameplay.
To be fair, I don’t think that paying for items in games through “microtransactions” is a bad idea, as long as it’s done right. Prior to the advent of F2P gaming, the entirety of a game’s reach could be contained within its “walls,” so to speak. That is, a game existed in its own space, with its own rules – its own little universe. Players would inhabit that space, entering into it and learning its rules, conventions, and so on. The entirety of the game existed inside itself. Even World of Warcraft, with its subscription model, could fit this definition, since players paid their subscription fees for an expansive, constantly evolving game world. Once the dues were paid, the door was open, and the only limit to what a player could achieve was, effectively, time. Given enough time, a player could ostensibly find whatever items he or she desired within the game world.
Then Nexon, an online gaming company, came along with a title called “Maple Story”, which was an adorable little side-scrolling RPG that could be digested in bite-sized chunks or marathon gaming sessions. Maple Story was different from other MMORPGs since it was free, and offered players the ability to pay for cosmetic changes to their characters. Players would “rent” costumes for their in-game characters, which would then get layered over their equipment, so that a player could make his or her in-game character look however he or she wanted. People did this all the time, and it was a way for players to feel more connected to their characters, since they could customize every aspect of their character’s look. Additionally, since the costumes were rentals, Nexon would introduce new looks all the time, often coinciding with events in popular culture.
As these Massively-Multiplayer Online Games (MMOGs) gained in popularity, particularly successful players would end up selling their items or entire accounts on eBay or other community-based sale sites. People were paying real money for virtual goods, and the game companies started paying attention. They wanted a cut of these sweet, sweet dollars that were getting sloshed around, and, thus, the F2P model was born.
See, someone realized that if players were willing to pay for virtual goods using real currency, the game companies, like Apple, should simply make that process easier. Why go to eBay and engage in a potentially shady and illegal transaction when you can just as easily drop $5 on a pack of gems and get some epic loot? Instant gratification, right? Except that it’s not that easy. I recently read a piece that neither condemned nor endorsed the idea of microtransactions in F2P games; the author suggested that these “microtransactions” are a way for people to pay what they feel the game is worth. There are several problems with that notion:
- First and foremost, the game is no longer the product. Players are the product. The game is not designed to be “good”, or to have a cohesive universe, or to tell a story. The game is designed to get players to spend money through carefully-crafted game mechanics that exploit psychological states.
- The price of a game is potentially infinite. Purchasing power-ups or items that perform a task or help a player’s character doesn’t add value to the game; it simply allows the player to keep playing. The in-app purchases don’t make the game better; they make the game whole. Having to pay constantly just to keep playing makes the cost of the game theoretically infinite.
- Ultimately, these games are simply not very good. One could argue that paying $20/month for a World of Warcraft subscription is far more than most players will ever spend on an F2P game, but the difference is that World of Warcraft was a well-crafted, intentional game with a great deal of care put into its design and game mechanics. Most F2P games have little to no recognizable story, little character development, and shallow mechanics 1. This renders the argument that “a person will spend what they feel the game is worth” moot, since many of these games employ artificial difficulty multipliers or other systems that break their own established rules and make the game artificially difficult or impossible to complete. Thus, it doesn’t matter how much the player feels the game is worth, since they cannot assess the value of the entire game; the entire game is not exposed to the player unless he or she pays an unknown amount of money.
- Time is not money. While some developers maintain the illusion that a player could ostensibly spend a large amount of time accruing whatever in-game currency allows them to progress to the next phase of gameplay, 40 hours of gameplay in a highly regarded, well-crafted game world is not the same as 40 hours in an F2P title. The former is characterized by new experiences, successes as well as failures, and (where applicable) the revelation of more of a well-crafted story. The latter is typically characterized by repetitive gameplay and monotonous “grinding” to accrue the capital necessary to advance. A player should not be subjected to a below-average gameplay experience simply because he or she does not want to pay real currency to advance.
If we project out along this trajectory, we can see that the game worlds that developers are creating are becoming increasingly devoid of meaning. Why should a player care about any one specific game world? All they have to do is drop $100 on a pack of some form of in-game currency to acquire an item, which they can use to defeat a difficult enemy or progress past a difficult puzzle. The actual “game” becomes meaningless because the “win” state becomes increasingly defined by how much money a person has in his or her bank account. Furthermore, the game worlds that developers and designers create become less about art and vision, and more about simply driving players to microtransactions. Character archetypes become shallower, player statistics are tracked so that the game can adjust difficulty dynamically in order to create the aforementioned artificial difficulty spikes, and game challenges no longer represent tests of mental acuity, reasoning, or reflexes, but rather a combination of advertising and outright paying to win.
The future looks bleak for games.
1 : While comparisons have been drawn to the quarter-fueled video arcades of the 80s and 90s, that idea ultimately falls flat because those machines could still be purchased and played in their entirety, from start to finish, if a person chose to do so. Additionally, if a person wanted to play a particular video game, the arcade was often the only place to play it; the arcade owner was effectively “renting” his or her machine to the player. When a player purchases a game at any price (including $0), they should be able to play the entire game.
Came across this today. Stuff like this continues to amaze me. While I would love to purchase a DSLR, I’d never be able to do it justice. Then something like this comes along, and the aluminum and glass in my pocket is suddenly way more incredible than I thought. What I’m really stoked for? One of the new APIs opens up 60fps recording to 3rd-party apps.
This is the kind of garbage that no longer makes sense to me. There’s no benefit to packing in more pixels than the eye can resolve. Maybe a few more than 300 pixels per inch will make a small difference, but why is this a selling point for a mobile display? It makes no sense. It’s like encoding music at ridiculously high bitrates: most people can’t tell the difference. In fact, the overwhelming majority of people can’t tell the difference, and for those who can, specialized equipment exists to let them indulge their senses. Don’t pretend this war of pixels makes anything better for anyone.
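The “300 pixels per inch” figure above is easy to sanity-check with arithmetic: pixel density is just the diagonal resolution divided by the diagonal size. A minimal sketch (the 4-inch, 1136×640 figures below are the published specs of an iPhone 5-class display):

```python
import math

def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal pixel count divided by diagonal inches."""
    diagonal_px = math.hypot(width_px, height_px)  # sqrt(w^2 + h^2)
    return diagonal_px / diagonal_in

# A 4-inch, 1136x640 display already exceeds the ~300 ppi threshold:
print(round(pixels_per_inch(1136, 640, 4.0)))  # 326
```

So phones crossed the “can’t see the pixels” line years ago, which is exactly why piling on more density makes a poor selling point.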
Something that I heard a significant amount of with the release of the new iPad mini, as well as the new iPhone, was the idea that Apple can’t have any more “Just one more thing” moments, mostly due to its inability to mask the movements of its supply chains. Truth be told, it’s difficult to run an operation of that scale without drawing someone’s attention. The eyes of the world are on Apple right now (as well as Google and Microsoft, of course), and it’s clear that the world is poring over Apple’s supply chains in the hopes of predicting Apple’s next move based on fluctuations in part orders and the like. The idea is that by scrutinizing Apple’s suppliers, looking at the parts coming out of various manufacturers around the world and being assembled in China, analysts will be able to stay one step ahead of Apple’s next “big thing.”
Here’s where I disagree with that idea, however. While analysts may be able to look at Apple’s current supply chains and see where its current products are headed, they can’t find what they don’t know to look for in the first place. We all know that Apple produces smartphones, tablets, laptops, desktops, and displays (as well as other things). Here’s the thing about those products, though: while they’re currently “predictable,” they weren’t always that way. No one really saw the iPhone coming, and no one really knew what was up with the iPad before it was the iPad. One might look at those examples and say, “Well, sure, but we saw the iPad mini coming, and we’re able to predict the new iPhones before they’re out…” and so on. Of course people can predict those things, because they know what to look for. Analysts have their eyes fixed on display shipments (and the sizes of those displays), processors, logic boards, and more. They’re looking for all the things that make up the current generation of Apple products, and, since they have a pretty good idea of how those things fit together right now, they can make some pretty good guesses and “predict” the next Apple product.
Let’s look at this another way, though. Let’s say an applesauce manufacturer orders a lot of sugar and a lot of apples. It wouldn’t take a genius to figure out that they’re making applesauce, and there’s the rub. Analysts look at the current state of Apple products and say, “Hey look! They’re making applesauce! I’m so smart!” Except they’re really not. What they’re doing is putting the words on the page together to form a complete sentence, and they’re screaming from the rooftops that they’re literate. While that’s a great accomplishment, it’s only the first step to being able to truly analyze information and synthesize some new ideas.
Apple’s ability to have “Just one more thing” moments hasn’t diminished in the slightest. Its ability to innovate isn’t waning at all, in my opinion. The true innovation will appear where people aren’t looking, and will manifest itself in a way people aren’t expecting, using components people aren’t expecting to see. Or, alternatively, Apple will take components that people understand and put them together in an unpredictable or disruptive way.
We can’t know how those things will take shape, because we don’t know what to look for yet. But, I’d be willing to bet that, when it does happen, people will still be just as surprised, and it will make all of the so-called analysts look like third grade children trying to read Chaucer.