Then it must be one, right?
When Apple released its new AppleTV, I asked one of the questions that I should really learn to stop asking with Apple products: “So what?”
What I forget constantly is that Apple figures all of its new products into a wonderful long-term strategy that is often hard to decipher but beautiful to watch unfold. The AppleTV was one of those devices that didn’t really have a place in my heart until I started using it, and it’s become even more incredible with the recent unveiling of iOS 5.
When I started using my iPhone, I discovered a little jailbreak-only app that allowed me to mirror my iPhone’s screen on my TV, which allowed me to do things that were (at the time) not possible, like pump music from the iPod app out to the TV and show photos (without creating a slideshow). “It’s like having a computer in your pocket,” one of my friends remarked at the time.
This is heating up now, and I think the barely-mentioned screen mirroring over AirPlay is going to be one of the most life-changing things I’ll experience this side of 2000. I already use my iPad for just about everything in my life, and the Mac Mini sitting just below my TV does very little. Sure, it has some apps installed for design purposes, but they’re all secondary to the writing and creation that I do on a daily basis on my iPad. Sometimes (rarely), I feel like it would be nice to be able to throw what I’m doing on a larger screen and just lean back a little, see the whole thing take shape in front of me. Why is this not a computer, again? Currently, I can do that (sort of) with my iPad via an HDMI cable…but that isn’t really ideal because it means that I have to jockey with cables and risk damaging ports when things inevitably get jerked around or flexed in strange ways.1
Now, I can dock my iPad, throw the bluetooth keyboard on a desk, and type as much as my little fingers can type without a second thought. This is huge. It means that I can go to a friend’s house and mirror my display on his or her TV without any setup, without digging around behind the TV to find his or her HDMI port, and without any cables to lose or forget. Just make sure the friend has an AppleTV (which has enough of a value proposition on its own) and I’m set. It means that businesses don’t have to worry about sales pitches going wrong due to configuration issues anymore. It means that we’re one step closer to a shared classroom where people can contribute anything they’re reading to a discussion without having to be an IT professional to do it.
One step closer to living in the future. Oh wait, we’re already there.
1 I’ve always had problems with dongles or adapter cords. For some reason, the cables always break or fray internally, and the whole thing fails within a few months of normal use. I like to avoid them whenever possible.
One of the most recent and powerful innovations to develop in the mobile computing space has been the capacitive screen. First put into widespread use in the iPhone and later adopted by the mobile phone industry as a standard for mobile devices, the capacitive screen is amazing, but not without its drawbacks. Try tapping on something with the cap of a pen, or using the screen with gloves on, for instance, and you’ll be greeted with…nothing (unless you have those fancy gloves with capacitive pads on the fingers *jealous*).
This is a reaction to the early Tablet PCs, which required what was called an “active” stylus. Active styli essentially have some sort of communication ability built into them (whether magnetic or otherwise) that tells the computer when the stylus is close and allows it to register input on the screen. The problem was that these devices were essentially useless unless they had their accompanying stylus. Lose that, and you’re left with what amounts to a fancy monitor.
The flip side to capacitive screens is that they respond (very well) to skin. While that’s great for your fingertip, it’s not so great for your wrist if you (like almost everyone on the planet) rest your wrist on a surface while writing. Go ahead and try it; chances are you do the same. People anchor their hands to their writing surface with their wrists. It’s just what we do. Try to do the same thing on the surface of an iPad, however, and you’ll be greeted with virtual ink all over the place. Some programs try to circumvent that problem by processing screen inputs to filter out unwanted “marks” on the page, but it isn’t perfect.
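To make that filtering idea concrete, here’s a minimal sketch (my own illustration, not any particular app’s actual code) of the kind of heuristic a drawing app might use: capacitive screens typically report a contact size for each touch, and a resting wrist produces a much larger blob than a fingertip or stylus nib. The `Touch` class and the threshold value are both assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Touch:
    x: float
    y: float
    area: float  # reported contact area, arbitrary units

# Made-up cutoff for illustration; a real app would tune this per device.
PALM_AREA_THRESHOLD = 40.0

def filter_palm_touches(touches):
    """Keep only touches small enough to be a fingertip or stylus nib."""
    return [t for t in touches if t.area < PALM_AREA_THRESHOLD]

touches = [
    Touch(x=100, y=200, area=8.0),    # stylus nib
    Touch(x=150, y=400, area=120.0),  # resting wrist
]
strokes = filter_palm_touches(touches)  # only the nib touch survives
```

Of course, a simple size cutoff like this is exactly why the results aren’t perfect: a knuckle or a light wrist graze can slip under any threshold you pick.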
Witness, then, the triumphant return of the stylus.
There have been plenty of remarks about Apple’s magic tablet and its lack of a dedicated input stylus. Steve Jobs said clearly that he was against styli when he was first introducing iOS 4. What Steve wanted was a simple start to a powerful operating system that didn’t require the user to learn “how” to use the stylus (the original styli for Tablet PCs were only semi-intuitive, mostly because users were forced to use an operating system that was never designed for that type of interaction). Steve wanted people to jump right in and start using the OS without requiring them to hunt for buttons with a stick. Fast forward a little while, and we start seeing that people actually do want a stylus, but not for the purpose Windows Mobile used it for. Now, people want to teach kids how to write. They want to teach kids how to draw, to create, and that’s difficult to do when all you’ve got is your finger.
Anyway, here’s a little tidbit:
The application, which proposed several different types of styli, such as a disk pivot and a powered conductive tip, for use with capacitive touch displays, was filed in July 2008, several years before the release of the iPad.
The pen paradox is that, for all the contention that has been fostered between multitouch and styli, the two can shine when used in tandem, assuming the user interface has been created with both in mind.
The crux of the argument is that people are interacting with a tablet using a tool they were born with (their hand), and they want to take the next evolutionary step: the writing utensil. Strangely enough, in this world of keyboards and texting, there is still something that people love about handwriting. I tend to wonder if the obsession with natural handwriting is a waste of time. We keep chasing a writing system on these various tablets that replicates writing on paper…but why? Why should we be concerned with that at all?
I mean, fine, teach kids to write using a pen and paper, but why are we searching for a system of writing on an iPad, when typing is clearer, more efficient, and readily transferable to other media, as well? It doesn’t make sense to me, and it smacks of misunderstanding. If you need to jot a quick note, there are plenty of styli out there that will accomplish that for you (I use the Pogo Sketch), but for longer text, what is the advantage of writing over typing? I haven’t yet figured that out.
That isn’t to say that a stylus wouldn’t have its value. Drawing is, without a doubt, easier with some sort of stylus. I imagine that CAD would be a natural fit, as well. The truth is that the real case for styli hasn’t been made yet. With all of the amazing talent out there and the incredible ideas that the last two years have produced, I can’t wait to see the “killer app” that the stylus will enable.
Not the iPad 2 or some MobileMe revamp.
I hope that Steve is OK. He’s one of the main reasons I’m writing today, and I hope he gets better so I can tell him how much of a difference he’s made in my life.