This is the future we were promised, folks. Well…it’s getting there, at least. Check back in a couple years when this is beamed right into our eyeballs with connected contact lenses.
Mobile is the future. No one doubts that, and those that do are clearly riding their tiny rafts toward the inevitable plummet off the edge of the waterfall.
What is interesting is how these different mobile OS choices are defined (e.g. available apps, number of users, types of users, user engagement, developers, just to name a few), and what those definitions mean for the larger mobile landscape.
Many people argue for the benefits of iOS over Android, and vice versa, but I think the choice most people make to go with one operating system or another isn’t driven by some core ethos or belief in how a mobile operating system should behave; it’s driven by far simpler forces: popular culture, how much money is in their wallet, and what feels right.
When it comes to the tablet space, iOS is the clear winner, having scooped up both the lion’s share of the market and the lead in customer satisfaction. I find it somewhat painful to watch owners of most other tablet devices struggle with basic functionality; I’m left with the feeling that someone, somewhere has done them a disservice by recommending something that did not fit their needs, pushing some other device into their hands instead.
Where things start to blur, however, is when people start looking to devices outside of the mobile device market, things like connected TVs, appliances, and other gadgets. A person who isn’t fond of Apple can, ostensibly, purchase a Roku box for streaming content to their TV, but how well does that really integrate with a person’s home theater setup if they have iOS devices? How about Android? What about Linux? The trick here is that there are some devices that work well together, and some that don’t. Did you happen to buy one of those early Google TVs? How’s that working out for you? Sorry there aren’t more of them out there, turns out people didn’t like them very much. Sad.
The Apple TV is an iOS device, however, and I think it fills a key role in Apple’s connected living room idea. I’ve talked about this in past posts, as well, but something that many people don’t take into account is the fact that the Apple TV runs iOS, but in a form that isn’t immediately recognizable to most people.
Apple has created a chain of interconnected devices which, on their own, may seem unremarkable. Start linking them together, however, and they become far stronger and more capable than they were on their own.
I’ll end with a little story. I spent half an hour on the phone with a man recently, trying to help him compose and reply to an email on his new Android phone. I felt sorry for him. He had never owned a smartphone before, and was having a very difficult time using the device. For whatever reason, data was not enabled on his phone, and he had to find the setting to turn it on before he could actually send the email. He was very frustrated, and it was clear that he wasn’t feeling confident. He had been told that this device was very “user friendly” and that it “just worked”, but his experience demonstrated otherwise. That same night, I had some friends over, many of whom are involved in some sort of music production or performance, or who simply have great taste in music. They were sharing their favorite tracks and videos on my TV, all from their phones, all without having to fiddle with a remote or web browser. They were laughing and talking, able to discuss and converse without needing to configure anything. They just tapped the AirPlay button and sent the media to the Apple TV. Zero configuration, zero setup.
From where I was sitting, it looked like magic.
So, Apple has unveiled iOS 7 to much discussion, hand-wringing, and cheers. There are lots of things that I feel that Apple is promising to do right with this release, and a number of things that we will, of course, need to see to believe.
One of the most prevalent activities that iOS users engage in is photo sharing. Apple’s recently released “every day” video showcases the power of the iPhone as a camera. iPhone users know that their iPhone is probably one of the best cameras they’ve ever owned, and the millions of pictures snapped daily obviously underscore that.
I was intrigued by the keynote’s handling of photos, namely the application of filters and the introduction of “Shared Photo Streams” into iCloud. To be honest, I felt that this was a feature sorely missing from iOS and iCloud for a long time. The idea of a photo stream curated by a single person is fine if one of your friends happens to be a professional photographer or something like that, but most situations in which people are snapping photos tend to be social, with multiple people desiring to both view and (most likely) contribute to an album of the event. The trick lies in determining the canonical center of the stream. Who “owns” the photo stream? When people contribute to the photo stream, are they adding to a single user’s photos, or are they, in effect, “copying” that stream to their own photo collection and then adding to it, which can then be seen by other parties? Or, are the photos stored on Apple’s servers, where multiple parties “own” photos, can add to the stream, and then define who else “owns” those photos? “Own”, here, is an operative word, since ownership of the photos is tough to nail down, in this case.
I’ve always wondered how something like this would work, but it’s a problem that Apple absolutely has to tackle in order to stay relevant. As people add more and more photos of their lives to their devices, the storage of those photos becomes of paramount importance, followed closely by how people identify and integrate those photos into their identity. What has become increasingly obvious is that people don’t just craft an identity that is tied to a mobile device; they create a digital identity that the mobile device allows them to access. In order for these technologies to be relevant, they have to allow people to share photos and feel comfortable about storing them in a way that is non-destructive and still allows them to reference past events with ease. It’s clear that Apple is now moving towards more meaningful photo sharing, but it remains to be seen if they can take this idea and use it to deliver the type of interconnectivity that people implicitly ask for.
One of the things that Apple did not address, and something I’ve heard from people who have recently switched away from iOS as their primary mobile platform, is that iOS hamstrings users by not allowing them to easily pass data between apps. While I agree with some parts of this argument, I can see Apple’s stance on inter-app data sharing. The scenario I often hear from heavy Android users is that things like taking notes, or saving PDFs from one app to another, are easier on Android. I don’t agree, because I do those very same things every single day on iOS and, ever since Apple started allowing custom URL schemes to pass data from one app to another, have never had an issue with that. That said, I understand the core of the complaint: Android allows a freer exchange of data between apps via a more-or-less centralized file system.
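For readers who haven’t seen URL-scheme data passing in action, here is a minimal Python sketch of the x-callback-url-style convention many iOS apps adopted: the sending app encodes an action and its data into a URL, and the receiving app decodes it. The `notesapp` scheme, action, and parameter names are invented for illustration, not any real app’s API.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_open_url(scheme, action, **params):
    """Build a custom-scheme URL, e.g. notesapp://x-callback-url/create?text=...
    The scheme and action names here are hypothetical."""
    return f"{scheme}://x-callback-url/{action}?{urlencode(params)}"

def parse_open_url(url):
    """Recover the action and parameters the sending app encoded."""
    parts = urlparse(url)
    action = parts.path.lstrip("/")
    params = {key: values[0] for key, values in parse_qs(parts.query).items()}
    return action, params

url = build_open_url("notesapp", "create", text="Meeting notes", title="Standup")
print(parse_open_url(url))  # ('create', {'text': 'Meeting notes', 'title': 'Standup'})
```

The point of the convention is that each app only has to agree on the URL shape, not share a file system, which is exactly the trade-off described above.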
One thing that we did see in the WWDC keynote, however, is the introduction of a new tagging feature in OS X 10.9, which, I believe, is going to be Apple’s eventual answer to the file system. Instead of files being stored on the device, in a folder, they’ll be stored in iCloud, accessible as clusters of files related to a specific idea. This is the kind of intelligent organization that Palm’s WebOS got right. Ultimately, people don’t really organize their data by app; they organize it by idea or topic, which is a far cry from having data “live” in an app.
I think the ultimate goal is to enable a user to cluster files together around a central theme or project that they may be working on, and make that cluster available as an item in an app that keeps track of and syncs tags across platforms. Ostensibly, the user could open the app, see all of their tag groups, and (possibly using a Photos-app-like pinch-to-spread gesture) see all of the files in that tag group. Tapping on a file would open a list of corresponding apps that are capable of handling that type of file. Interestingly enough, this may also allow Apple to put a little more control in the user’s hands by allowing the user to pick which app would be the default handler of that file type. In this manner, people don’t necessarily have to know where to look for their files; they need only open the “Tags” app, find the group they want to work on, and tap the file they want to work on in that group. The OS then passes that file to whatever helper application the user has selected as default, and they’re off to the races. A system like this wouldn’t be able to satisfy every Android lover’s desire for a true file system, but Apple wouldn’t need to – the average user would see this as a new feature, and customers on the fence may see this as a tipping point.
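To make the idea concrete, here is a toy Python model of tag groups with per-file-type default handlers. This is speculation about a feature Apple has not described in detail; the file names, tags, and app names are all invented.

```python
from collections import defaultdict

class TagStore:
    """Toy model of tag-grouped files: a file can carry several tags,
    and each file type maps to a user-chosen default handler app."""
    def __init__(self):
        self._by_tag = defaultdict(set)
        self._handlers = {}          # file extension -> default app name

    def tag(self, filename, *tags):
        for t in tags:
            self._by_tag[t].add(filename)

    def files_for(self, tag):
        return sorted(self._by_tag[tag])

    def set_default_handler(self, extension, app):
        self._handlers[extension] = app

    def open_with(self, filename):
        ext = filename.rsplit(".", 1)[-1]
        return self._handlers.get(ext, "ask user")

store = TagStore()
store.tag("budget.numbers", "home-remodel")
store.tag("kitchen.pdf", "home-remodel", "receipts")
store.set_default_handler("pdf", "GoodReader")
print(store.files_for("home-remodel"))  # ['budget.numbers', 'kitchen.pdf']
print(store.open_with("kitchen.pdf"))   # GoodReader
```

Note that a file can live in several tag groups at once without being duplicated, which is the key difference from folder-based organization.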
This one is weird to me, but I like the way that Apple has addressed it in the update, with the WebOS-style “cards” interface heavily informing this component of the OS. The ability to see live updates of each app, or at least the current status of each open application as the user left it, is another way Apple brings parity with Android, but does it better. I’ve seen Android’s task-switching waterfall, and it has always felt too sterile to be enjoyable to use, although I believe that’s more a fault of the OS design language as a whole than of that specific part of the interface.
There have been a not insignificant number of words spoken about the changes Apple has made to the look of the stock app icons in iOS 7. To be honest, I feel like this whole discussion is completely moot. App icons are incredibly important, to be sure – they are the way a user identifies your application in the sea of other apps on their phone – but they are somewhat arbitrary. They need to be well-designed, but there is a certain “minimum effective dose” that allows most people to identify the app they’re looking for and associate it with the task they’re looking to accomplish.
When Apple made the choice to redesign the stock app icons, the folks behind Apple’s design choices exposed their design process as well as the grid-based layout system that informed the icon designs. There were comments made by graphic designers about how Apple’s layout choices were half-baked or wrong, and other comments that discussed how the color choices were catering to a younger generation, or the aesthetic biases of the cultures in new and emerging markets. Regardless of the reason behind the choices, I can’t help but relegate all of this commentary to the trash heap for the same simple reason: all of these comments are about a subjective experience. Of course Jony Ive wants to create an experience that is beautiful, familiar, approachable, friendly, and functional…but there are so many ways to accomplish this, and all of the commentary comes from a single data point in the universe. Even assuming that all of these designers and amateur critics were able to ascertain some objective truth about these designs that was universally applicable, they all have differing opinions – some of them conflicting – and it must thus follow that they’re either all right, or all wrong. I’m clearly in the latter camp. People are going to take a look at the icons and freak out because they’re different, and then everything will go back to normal and everything will be fine because, in truth, app icons only matter as pointers to something a user wants to accomplish. Once users draw new associations in their minds, they’ll be fine.
The Little Things
There are, of course, things that Apple hasn’t mentioned or brought up, most likely because they simply didn’t have enough time to do so, but I feel like I should mention them here for the sake of completeness.
While I know that not all of these things will be addressed (or even should be addressed) because of the focus that Apple is trying to maintain with iOS, there are some things that venture into that grey area that exists between the worlds of Mac OS and iOS. The first of these is the way the OS (and many apps) handle external keyboards. Safari, for instance, is able to handle a “Tab” keystroke, but does not recognize Command+L to put the cursor in the address bar, or Command+W to close an open tab. These aren’t necessarily “shortcomings” of the OS, but neither do they enhance the user experience. I’ve never thought to myself “Boy, am I sure glad they left out those keystrokes! My life is so much easier!” With behavior like this, I’m not sure if the omission is intentional or not. Apple is a very intentional company, but something like this feels like an oversight as opposed to a deliberate design decision.
Naturally, when people see new OS announcements from Apple, they assume that new hardware is going to follow closely behind. Something that I heard recently was that Apple’s new design, while beautiful on all current iOS devices, absolutely sings and looks right at home on the new devices that Apple has lined up for the fall. What these devices are is anyone’s guess, but I don’t think anyone would lose betting on a new iPhone. New iPad minis, iPads, and possibly iPod Touch units may also be in the works, but it isn’t completely clear yet exactly how these things will take shape, and what sort of changes we can expect. I love looking forward, but I don’t “do” rumors, so I’m not going to waste any time on speculating about what Apple is working on.
Ultimately, the new iOS version that Apple has introduced to the world looks great and, based on what I’ve heard, feels amazing. I have no desire to start ripping on an OS that’s in beta, nor do I have the desire to laud it. While it’s exciting to see a refresh to the world’s most important mobile OS, the proof will be in the pudding once it’s been finalized and released.
While my posts haven’t been coming fast and furious lately, I’ve been watching the tech landscape recently and have seen some interesting shifts in where I believe a lot of things are heading.
Whither the iPod Nano?
This has been a perennial issue for me. When the iPhone 4S (aka “the iPhone 5”) was released, people did two things:
1. Thought that it was an inferior phone because the character “5” was not in the name.
2. Forgot about everything else for a little while.
I, however, did not forget about the iPod nano. On the contrary, I began to think more about it, mostly from the perspective of “How can Apple make use of this new Bluetooth 4.0 thing?” While Bluetooth may not be very important to many people in the world, or may be synonymous with “headset”, Bluetooth information-exchange technology makes possible a great many things that people simply don’t take advantage of. Case in point: a friend of mine just saw me typing this blog post on a wireless Bluetooth keyboard and said “Wow, a wireless keyboard? I didn’t even know they made those.” Naturally, he’s a little behind the times (friar, vow of poverty), but that doesn’t stop the concept from being foreign to many people. An iPad-toting client of mine didn’t know that Bluetooth could be used to connect an iPad to a wireless keyboard, either (see the “headset” conflation above).
At any rate, that’s where we are: Bluetooth has effectively been relegated to another name for “headset”.
The iPod Nano has the opportunity to become something so far beyond what it is right now. It can be a gateway to the information stored on an iPhone, a supplement to an iPad (remote control, keyfob, microphone, etc.), and, possibly even more importantly, a front-end for Siri. Naturally, the iPod Nano’s screen isn’t designed for displaying large amounts of information, but that doesn’t preclude it from being an information portal.
When talk of an “iPad Mini” started swirling about, I immediately thought of Steve Jobs’ whole “people don’t like these ‘tweener’ sizes for tablets” statement. Whenever he said something like that, you knew that a product wasn’t far away. The issue for Apple wasn’t creating a product in that size, but rather timing their entry into that size category. One of the things that I’ve noticed about a great deal of the other 7″ (ish) tablets on the market is that they lack anything truly compelling for me. I wouldn’t want a Kindle/Kindle Fire because its primary purpose is to read books purchased through Amazon.
The Nexus 7 was almost enough to get me on board until I used one. “Why would I spend any money on this?” I found myself asking over and over. The only truly compelling thing that I saw in the Nexus 7 was the NFC capability, but even that was a stretch. I need a product like that to be an iPad, but smaller, capable of all the things my iPad is capable of. I’m sure there are many people in the same boat.
I’ve been using the iPad to take notes, draw, read, and write since its introduction to the market. People tried to tell me that it wouldn’t be capable of much, and I would just quietly continue working, nodding as I continued to accomplish goals I set out for myself from the comfort of a tablet that I could use comfortably all day.
I knew there was one problem, though: it was too big (and not by much) for me to carry in my hoodie pocket. There were times that I only wanted to carry my tablet with me and nothing else, lack of charging equipment and extra tubes for my bike being reasonable things to forego in favor of a tablet that could slip easily into my back pocket. My iPad was literally a half inch too big, and I resigned myself to carrying the things I needed in addition to my wundertablet.
It was a hard life, I know, but I made it through. Thanks for your concern.
Now, however, I feel like Apple is going to make a lot of people happy by creating a device that is perfectly capable of an absolutely ludicrous number of things (vis-à-vis other tablets), yet still has an extremely portable form factor (as though the iPad wasn’t portable enough).
Here’s the thing, though: Apple needed to time this whole thing. Releasing a 7″ (ish) tablet shortly after the iPad would have been great, and people would have really liked it, sure, but it wouldn’t have had the same impact that I believe it will have now. By releasing an “iPad Mini” now, Apple has allowed all the trash to sift itself out. Plenty of other companies have brought “me too” devices to market, and each has captured some small part of the iPad experience that people love, but left even more behind. Other companies thought that, if they could only have gotten that 7″ tablet to market first, they would have ruled that space. The issue with that type of thinking is that it leads to sloppiness. Should this “iPad Mini” be released soon, it will be released with the entire weight of Apple behind it. It will have access to the iTunes Store, and it will have access to the App Store. All the apps that people have already purchased will be available on their device from day one. Their contacts and calendars will be synced through iCloud, and, while the same can be said for any Android tablet in that form factor, a person toting both Android and Apple devices would have to manage two devices with two different stores to shop from, two places to store their media, and no convenient way to slosh purchases around between devices.
With a device having a smaller screen size and profile, Apple will be making their signature store/device integration available in an even more portable form factor. The market will respond, and it will respond favorably.
Keep Your Friends Close
The last thing that I haven’t been hearing much about recently is NFC. Samsung released the Galaxy S III to a mediocre amount of fanfare, touting all of this NFC magic…but I have yet to see anything really interesting come out of it. I love the idea of NFC, but, like the Nexus 7, I see no one using it. I don’t see any stores with NFC tags on their doors, no restaurants with NFC tags on the tables to allow patrons to silence their phones and join their Wi-Fi with a single tap. None of this is real, because I have a sneaking suspicion that Samsung has no idea what it’s doing. They put products on the market that have checkboxes in all the right places, but no real-world application of any of the things that those boxes relate to. Great job, Sammie – your phone has NFC! Does that honestly play a role in most people’s buying decisions? No, no it doesn’t. A friend of mine recently purchased a new GSIII and, when asked about the NFC feature, had no idea what I was talking about.
Truth be told, I’m not sure NFC will ever be a truly compelling technology, but I believe that, if it is, Apple will do it right. They’ll do it right because they’re really the only company that can make something as obscure as NFC relevant enough to matter to the world. When the world’s most valuable company throws its weight behind something, you’re pretty safe betting that people are going to pay attention.
All of this assumes a few things:
1. Apple is releasing a new iPod Nano.
2. Apple is releasing an “iPad Mini”.
3. The aforementioned products, in addition to the new iPhone, will contain NFC technology.
That’s a lot of assumptions, but they all seem to make sense. I’m not one to start making assumptions and think that I’ve got it all right, but, based on what I’ve been seeing and, perhaps even more importantly, what I haven’t been seeing, I believe that all of these things are very close to reality.
I haven’t even touched on the possible integration with a refresh of the Apple TV, but I think that all those things are around the corner, as well.
It’s gonna be a helluva September
When Siri was unveiled with the introduction of the iPhone 4S, there were a lot of very intrigued, very happy people. Already, in my usage of Siri with my new iPhone 4S, I find myself pleasantly surprised with the things I’m able to do, and how easy Siri makes so many of the things I’m used to doing. Naturally, there are some shortcomings. Since I use an unlocked 4S on the T-Mobile network, I’m relegated to EDGE when not on Wi-Fi (how was this speed ever acceptable?), and communication with Siri is woefully slow. I wish I had the scratch to pull off an AT&T subscription, but I just don’t right now.
This got me thinking, however. Since the 4S relies on a persistent, high-speed network to deliver results to the user, what happens when a person has a slow connection, or is in a wireless dead zone? The ability for Siri to function as an interface diminishes dramatically, leaving a person only able to interact with the data that is already on his or her phone. While this normally would not be a problem, anyone looking for Siri functionality in a wireless dead zone is going to be frustrated, period. Naturally, the last thing Apple wants is unhappy customers, so what can Apple do to circumvent this situation?
I found the answer in the iPod Shuffle.
This little device, as many know, is what one might call one of Apple’s lesser-loved projects. At the time of its inception, it filled a necessary void–that of a low-cost music player bearing the iconic Apple logo and “iPod” name. It was my first iPod, and, I’d wager, the first iPod for many others, as well. The problem with the iPod Shuffle now is that it lacks features. It isn’t relevant anymore. When the Shuffle was introduced, MP3 players, including the iPod Classic, were large and relatively bulky, and their battery life left something to be desired. The Shuffle had long battery life, was capable of syncing with iTunes, and offered people an interesting alternative to the blue-hued screens and click wheels of their larger cousins. The storage was all flash, which meant that it wasn’t prone to hard drive failures in the same way the iPod Classic was, and that it could play all day on a single charge.
Since the Shuffle lacked a screen, however, there was no way for a user to really know what was about to play. Apple solved this with their “VoiceOver” feature, which was able to announce the name of the playing track or playlist, or the remaining battery life. In order to do this, however, the user needs to give up some storage space on their device to make room for the VoiceOver data. For some, this is an easy tradeoff, since it adds a sense of depth to the diminutive device. Tuck that in the back of your mind for a moment.
It was recently discovered that the iPhone 4S contains a dedicated sound-processing chip that enables it to better separate your voice from background noise, which increases its ability to recognize what you’re saying before sending that data off to Siri for processing and language recognition. All this data being sent to Siri means that there are a great many sound snippets that Apple has at its disposal to refine and improve its voice recognition and accuracy. The more people use Siri, the better it gets, and the better it gets, the more people use it. Eventually, I believe, Apple will be able to “distill” certain Siri queries down to their core components, picking out speech patterns and pulling user voices away from background noise more easily. Furthermore, Apple will be able to condense certain components of Siri down to include that functionality on devices that don’t have a persistent wireless connection, and significantly speed up Siri queries on devices that do. Naturally, looking up restaurants on Yelp or finding out data from Wolfram is going to require a connection to the internet, but things like setting reminders, calendar appointments, taking notes, and playing music can all (theoretically) be done locally, without a persistent data connection. This would allow Apple to install Siri on all of its devices. When the device has a wireless connection, it would be able to upload usage statistics, and download changes to the onboard Siri database while doing its nightly iCloud backup.
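That kind of local/remote split could be sketched as a simple routing decision. This is pure speculation about how Apple might partition things, and the intent names below are made up for illustration:

```python
# Intents the device could plausibly satisfy with no network at all;
# the set and its names are invented, not Apple's actual design.
LOCAL_INTENTS = {"reminder", "alarm", "note", "play_music"}

def route_query(intent, network_available):
    """Decide where a recognized intent is handled: on-device, on the
    server, or nowhere (no connectivity and no local handler)."""
    if intent in LOCAL_INTENTS:
        return "local"
    return "server" if network_available else "unavailable"

print(route_query("reminder", False))          # local
print(route_query("restaurant_search", True))  # server
```

The interesting property is the middle row of outcomes: a dead zone only degrades the queries that genuinely need outside data, rather than disabling Siri outright.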
Naturally, the user might have to sacrifice some storage space, but it would allow even the iPod shuffle to become a “personal computer”, with the ability to store notes, read emails, and access a user’s information in the cloud when a connection becomes available. Who knows? Apple may even negotiate a wireless deal with service providers that allow all its devices to connect to a Kindle WhisperNet-style “SiriNet” for free, for the purposes of communicating with the Siri servers.
Until we have ubiquitous worldwide wireless coverage, we can talk to the little Siri in our Shuffle.
I’ve been reading a great deal in the past few months about all of the new Nexus phones that have been coming out recently: reviews by people who have used iPhones and tried to switch but failed, reviews by avid Android users who love them, and takes from people who are somewhere in between. I’ve heard arguments as to why certain operating systems have more of a future and certain phones are objectively better, while I really just stand somewhere in the middle, looking at all of this with a little bit of a quizzical look on my face. I’m not trying to take sides here, but I believe that Apple’s position in this market is much better for one main reason: NFC.
While it’s true that Google’s Nexus phones have had NFC built in for some time, it has been clear that the feature has been little more than a bullet point in a presentation, there to build some buzz and give Android pundits something to hold over Apple’s head. I thought the inclusion of NFC in the first round of Nexus phones was half-baked, mostly because I looked around at the places I visited every single day and saw literally nothing that used NFC in a way that was available for public interaction. The only usage for NFC that I’ve seen implemented anywhere was in the TouchPad. We all know how that went.
The key here is this.
If users wave an NFC-equipped iPhone at an NFC-equipped Mac (they need to be in close proximity to interact), the Mac will load all their applications, settings and data. It will be as though they are sitting at their own machine at home or work. When the user leaves, and the NFC-equipped iPhone is out of range, the host machine returns to its previous state.
This is huge, and with Bluetooth coming back in a big (or perhaps little, as in low-power) way, this may be even more effective.
“The usual idea is that you would use NFC to set up the link between the two devices and then do an automatic hand over to a different protocol for doing the actual transfer of data – eg Bluetooth, Wi-Fi, TransferJet etc – and that’s what I imagine would be happening here,” she said.
That observation comes from analyst Sarah Clark of SJB Research.
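The bootstrap-then-hand-over pattern Clark describes is easy to sketch: the NFC tap only carries a tiny record describing the faster links each side supports, and the bulk transfer happens elsewhere. The field names below are illustrative, not the actual NFC Forum Connection Handover record format:

```python
import json

def nfc_handover_record(device_name, bt_address, wifi_ssid):
    """The tiny payload a tap would exchange over NFC: just enough
    information to bootstrap a faster link. Field names are made up
    for illustration."""
    return json.dumps({
        "device": device_name,
        "bluetooth_mac": bt_address,
        "wifi_ssid": wifi_ssid,
    })

def negotiate_transport(record, peer_supports):
    """Pick the fastest carrier both sides support for the bulk transfer."""
    info = json.loads(record)
    for carrier in ("wifi", "bluetooth"):  # preference order: fastest first
        if carrier in peer_supports:
            return carrier, info["device"]
    return None, info["device"]

record = nfc_handover_record("Living-room iPhone", "aa:bb:cc:dd:ee:ff", "HomeNet")
print(negotiate_transport(record, {"bluetooth"}))  # ('bluetooth', 'Living-room iPhone')
```

Because NFC only has to carry that handful of bytes, the tap itself can stay instantaneous, which is exactly what makes the "wave your phone at the Mac" scenario feel like magic rather than a pairing dialog.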
This idea still has so much potential. As Steve Jobs said when he unveiled iCloud, Apple is demoting the computer to just another device, one that accesses your data on its servers in North Carolina somewhere. With the computer being just a gateway to your computing state anywhere, any device can also theoretically access this saved state and allow the user to resume their previous session wherever they are.
Let’s also look at another piece to the puzzle: Apple TV. We don’t know what Apple is planning for this theoretical Apple TV later this year, but let’s take a look at the Apple TV in its current incarnation, the tiny little black box that, quite frankly, is a little Wünderdevice.
For starters, you can now do this. I think that’s a pretty big deal. So the Apple TV, in its current state, can run iOS apps. It can access iCloud. It can play music and movies, and also allows a compatible device to mirror its display through a Wi-Fi connection. Let’s talk about that for a moment, as well.
If you haven’t already, check out Gameloft’s Modern Combat 3. It’s basically a Modern Warfare clone, but it has one killer feature: the ability to mirror the game on an Apple TV, which turns the iOS device you’re holding into a controller and puts the game on the big screen. I tried this on my iPad and was amazed by the results. This is truly something that game developers need to be looking at, but it’s something that regular developers need to be looking at, as well. Think about it–if a device that is mirroring its display output to an Apple TV can display different content on the device than on the TV, a word processing app could essentially turn the tablet into a wireless keyboard, while the main workspace is displayed on the TV. The iPad or iPhone (or both!) could display a suite of controls or “function” keys, or function as pointing devices, or really anything that you can think of. The idea of a “technology appliance” holds even more water here, since these devices can be used together to create an experience that one device could technically deliver on its own, but that works better when spread across several devices. Look at Keynote, for instance. With an iPad and iPhone, a person can run an entire professional presentation with no bulky equipment and a minimum of technical prowess.
In the context of the aforementioned connection to an Apple TV, this capability becomes even more important, since it allows the TV to function like a traditional “desktop”, but without the bulk of wiring, an extra device to draw power, and connections to set up. NFC handles everything, and the bulk of the transfer can then take place over Wi-Fi, Bluetooth, or some other protocol that is standard in Apple devices.
And this, my friends, is why Apple is positioned so much more powerfully in this market than any Android device manufacturer. While other manufacturers will essentially be playing catch-up with all of this anyway, they will also have to contend with consumers who will be presented with each manufacturer’s take on this idea. Where Samsung may offer one type of connectivity, Asus may not, since it doesn’t have a TV of its own, but LG might. The consumer will stand in front of his TV and scratch his head wondering why his Motorola Xoom isn’t connecting to his Samsung TV, while his neighbor with an iPad and Apple TV is able to transition from room to room in his house without missing a beat.
The aftermath of this whole shebang would be the equivalent of a demolition derby, with all of these companies vying for the consumer dollar, blowing themselves to bits and waging a war of attrition while Apple’s devices still lead the way due to their simplicity and interoperability. The next thing that will happen is that these other manufacturers will start listing even more specs on their TVs: gigs of RAM, processor speeds, core counts. The consumer will look at all this and once again scratch his or her head in confusion. The Apple TV will say something like “Best-in-Class Picture Quality, Siri, and [catchy Apple-fied name for NFC connections]. Say Hello to Apple TV.”
It’ll sell like gangbusters, and we’re all going to want one. Of course we will; it’s going to represent the future of computing. Can we even call it that anymore? No, not really. It doesn’t feel right, and in this one (admittedly long-shot) future, “computing” isn’t a thing. You just pick what you want or need to do, and you use well-designed, simple hardware to do it.
Looking at the state of mobile technology today, it’s clear that the tablet form factor is the flavor of the week. A decade ago, however, the future of mobility looked a lot less like a clipboard and a lot more like a wristwatch.
For years, people were focused on wearing their computers. What is a thin, rectangular window to endless content now was a wrist-mounted portal to information then. The problem that designers always ended up getting stuck on, however, was the interface.
Designers tried to tackle this in a wearable computer concept, but the end result is still a mashup of the ideas of the last few decades and the fancy swirly graphics of today. The input method in the aforementioned concept (a swing-out keyboard? really?) is kludgy at best, and the whole thing looks, well…huge. Would anyone actually wear that? No, no they wouldn’t, because that sort of thing is a fashion nightmare.
Then there’s this one. Ouch. Really? I mean, sure this is military technology, so we’re not looking for haute couture here, but…I mean…really? This just won’t do.
The problem is that the input method for all of these concepts still involves directly interacting with the device, touching buttons, or tapping the screen with a tiny stylus. All of these options are unacceptable when it comes to wearable computing. A person cannot have devices oozing out of every pore and orifice just to get at a Wikipedia article. What they need is a device that is intuitive and simple, something that “just works”.
This is where it gets difficult.
Apple has already developed a powerful, revolutionary computing interface powered by speech. They call it Siri, and I’m sure that most people are familiar with it at this point. If not, the link should tell you everything you need to know. The bottom line is that it’s intuitive, and it allows a person to perform almost every task they’d usually need a computer for with little more than a functional set of vocal cords. This powerful interface, however, requires a persistent internet connection to send your voice to Siri and to receive Siri’s reply. Furthermore, access to Siri’s beautiful mind is, at the moment, limited solely to owners of Apple’s iPhone 4S.
Here’s where it gets interesting.
Apple designs hardware. They also design software and build empires on their intuitive, simple interfaces. Siri is about as simple as it gets, but not everyone has the ability to talk to Siri, and there may be those who simply don’t want to purchase a new phone for the privilege. What if, however, access to Siri could be granted by wearing a watch? Apple’s design team could surely design a beautiful watch. And what if that watch were actually a computer? Or, perhaps not a computer, but rather a gateway to this magical, intuitive, almost infinitely powerful computer? Follow me, child; the path to this potential future is an interesting one.
Apple has been doing a lot of work behind the scenes, as it usually does. It has been chugging away at the internal components of the iPhone 4S, upgrading a little-loved part of the phone that may actually end up being the key to this whole new ecosystem: Bluetooth 4.0. The big thing about the Bluetooth 4.0 specification is its low-energy mode, which keeps certain communication channels open while allowing others to close, sipping power all the while. That versatility means a wrist-mounted “computer” doesn’t actually need to do any processing of its own; it just needs a connection to a device that can. Furthermore, while previous iPhone models may not sport the swanky new Bluetooth 4.0-compatible chips, they can still perform admirably over standard Bluetooth connections. This opens up the possibility for older iPhones to access Siri through a special piece of hardware that piggybacks on the existing iPhone data connection via Bluetooth, in much the same manner a headset would.
The end result is that a person will be able to talk to Siri, but without any sort of visual feedback. Ultimately, this is the sort of interaction Apple is going for anyway. The device doesn’t need a screen (though it may have one, like the iPod Nano) because the interface is completely invisible. Much like the iPod Shuffle, which still communicates with the user despite its tiny form factor, the new “wearable computer” doesn’t have to be anything more than a gateway. The magic of the iPod Shuffle is that it feels so much bigger than it is. The power of the new wearable computer is not that it is super fast and specced to the gills; the power is that it makes the world feel no more than a question away.
Dick Tracy would be jealous.