Well, the iPhone 5 has launched and has been the center of a great deal of attention, most notably since it was timed to follow shortly after the release of iOS 6 (and the shift to Apple’s much-maligned mapping data). To say that Apple’s hardware releases garner a great deal of attention is an understatement, but the scale of the press attention and media frenzy that gets whipped up around their new devices is indicative of something greater, perhaps, and I’m both nervous and hopeful about what this may portend for the consumer electronics space in general.
A conversation I had with a friend recently had us circling back to the topics of patents, lawsuits, and innovation vis-à-vis the other two. Discussions of this sort are always minefields, since there are so many aspects to the total picture that may not be immediately visible to any one person. Perspective also plays a huge role in the idea of “intellectual property”, in this case as represented by Apple’s lawsuits targeting Samsung for patent infringement. While the results of our discussion are, for the purposes of this article, largely irrelevant, the conversation itself is what interested me. While it would have been easy for us to fall into the age-old “X [company] is better because of Y!” type arguments, we didn’t go there. What we focused on instead was the difficulty of enforcing these patents, and the slippery slope created when the judicial system starts doing so.
To be clear, my opinion is that there is something about Apple’s hardware, software, and culture that is incredibly appealing to many people. This appeal lends itself to a significant and conspicuous presence in American consumer culture. Manufacturers, thus, faced with the need to sell products that look good and perform powerfully, tend to copy Apple’s designs. Apple, clearly fed up with the practice, decided to put the kibosh on the whole thing and started patenting various components of their designs. Some of these patents were software patents. This is where things start to get…difficult.
I understand what’s going on here. Apple has designed a product that leapfrogged a generation of mobile devices. While the mobile device landscape eventually would have seen the introduction and proliferation of these devices in the marketplace, Apple got there first and wanted to make sure that they stayed there long enough to establish some sort of market dominance. Strategically, it makes sense. The central issue in this discussion, I believe, is the importance of innovation, and the theoretical stifling effect that these patents will have on it.
The argument is this: Apple’s patents will slow innovation, because other companies will be unable to push forward in a space that is controlled by a competitor who guards it fiercely with litigation.
I see that, I do. I understand the fear that other companies will suffer as a result of these lawsuits and the potential resulting damage awards or judicial decrees. What I disagree with at the core is the idea that anything Apple does will hamstring progress in the mobile space.
Assuming that all of Apple’s patents are enforceable (I don’t think they really are), other companies would have to (theoretically) tread lightly in the mobile space.
Except they really don’t.
Apple didn’t tread lightly, did they? No, they didn’t, because they set out to define a subsection of the mobile space that basically didn’t exist. There were cellular devices, there were PDA devices, and there were “tablet” devices (Nokia N800, N810, etc.), but all were based on the wrong paradigms. Cellular devices were all basically phones, with buttons and functions tied to the physical layout of the phone. Even the devices that were moving away from hard-coded button functions were still, in general, tied to buttons (the Palm Treo 700w/wx springs to mind), and the non-button user interface was generally clumsy and not designed to be used with a finger. Many people operated these devices with fingernails¹, and there were many software development companies that made their bread and butter developing software that would make the user interface a little more “finger-friendly”. To unpack that a little more, the user interface of Windows Mobile-based devices was greatly lacking², in my opinion.
Thus, Apple’s approach to hardware and the software that powered it was absolutely unique. No other manufacturer had created a mobile device in quite this way before. Apple, therefore, would be forgiven for wanting to protect their creation. Since something of this magnitude had never really been attempted before, Apple was assuming a great deal of risk in introducing this untested form of mobile device into the marketplace.
Apple was innovating in a way that other companies were afraid to.
Once the iPhone had been introduced, things changed very rapidly. Device manufacturers suddenly were forced to change their attitude toward design, manufacturing process, and materials. Apple’s phone made use of high-quality glass and aluminum. The feel of the device was unlike anything that had been created previously, and felt like something that would come out of a high-end fashion house, rather than a consumer device manufacturer.
Apple, however, was just that – a consumer device manufacturer. As such, it was uniquely positioned to get this device into the hands of millions of people. Once other manufacturers actually realized what Apple had done (turned the game upside-down), they had to change their manufacturing processes rather quickly in order to keep up.
Unsurprisingly, that’s what they’ve been doing ever since.
This is why I take issue with the idea that “Apple stifles innovation”. If there’s innovation to be had, then innovate. The problem, as I see it, isn’t that Apple is stifling innovation, it’s that Apple is putting a lid on copycats. Instead of letting everyone ride on its coattails, Apple has kicked everyone off their train and is riding it to a future that it is, day by day, defining. If other companies are so upset about it, then create something new. Go out there and design a device that is better than the iPhone, make it out of materials that haven’t even been thought of yet, and integrate it with services that don’t exist. Don’t whine about someone slapping your wrist for copying their hard work. Knuckle up and make something that will take the world by storm.
We’re all waiting for someone to build the future. Will it be you?
1: Since resistive displays would register touch inputs from anything, users were only limited by what they had available. I, for instance, would often use a coffee stirrer since the stylus always had a tendency to grow legs and walk away.
2: Palm did its best to try to create a UI that was inviting and accessible on top of the standard Windows Mobile operating system, but it’s clear that they had difficulty doing that. Attempting to change system settings, or interacting at all with the file system, was incredibly difficult for most users. I, considering myself a power user, would often have to resort to registry changes and such to get the device working just the way I wanted. Purchasing software that made the device more accessible and user-friendly would typically cost $30–50.
Naturally, I had to do many of the same things with the original (2007) iPhone (read: [Jailbreaking](http://en.wikipedia.org/wiki/IOS_jailbreaking) iOS), but the current iteration of iOS solves many of the issues that I would normally jailbreak for.
A friend of mine recently asked me what all the hullaballoo surrounding NFC was. This friend, I believe, represents a perspective that a great deal of folks share.
- What is it?
- Why should I care?
These two questions are critical in determining whether or not any technology is really going to catch on with the mainstream crowd.
The answer to the first question is pretty technical, but I’ll break it down as much as I can. NFC is a technology that enables very short-range communication between devices. One device reads data; the other stores it for access by a reader. The storage device is tiny – we’re talking minuscule. The wires that act as antennae for this thing are about as thick as a human hair, and are typically wound into a coil to increase their visibility to reader devices.
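To make the “one device stores, one device reads” part concrete, here’s a minimal sketch of the kind of data an NFC tag actually holds: a single NDEF (NFC Data Exchange Format) “Text” record, encoded by hand. The function name is my own invention; a real app would use an existing NDEF library rather than packing bytes like this.

```python
def encode_text_record(text: str, lang: str = "en") -> bytes:
    """Encode one short NDEF Text record as the raw bytes a tag would store."""
    # Payload: status byte (length of the language code), language code, text.
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    # Header flags: MB (first record) | ME (last record) | SR (short record),
    # plus TNF=0x01 ("well-known type").
    header = 0x80 | 0x40 | 0x10 | 0x01
    # Header byte, type length (1), payload length, type "T", then payload.
    return bytes([header, 1, len(payload)]) + b"T" + payload

record = encode_text_record("Hello")
print(record.hex())  # d101085402656e48656c6c6f
```

Twelve bytes for a five-character message – which is the point: the tags themselves carry almost nothing, and the reading device supplies all the intelligence.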
The communication part isn’t that interesting; it’s how a device interprets those NFC signals that’s really interesting.
The overwhelming majority of the discussions that I’ve seen regarding NFC center around using a phone or tablet as a mobile wallet, tying credit cards, loyalty cards, transit cards, and a number of other possible wallet-fattening pieces of plastic to said gadget. Proponents of the technology have long pointed to Japan and Europe, where people have been using their cell phones as subway passes (and, yes, credit cards, etc.) for years. The countries which have seen good NFC adoption naturally have relatively tech-savvy customers, but they also sport solid infrastructure to support the rollout.
It’s this specific component of the NFC landscape that’s been a large question mark for so long in America. There are, of course, businesses that have adopted NFC as a part of normal business (Jamba Juice, Mobil, McDonald’s, to name a few). Their card readers are compatible with credit and debit cards sporting certain logos that have those embedded chips.
However, I believe that the discussion of nationwide payment systems employing NFC is a red herring. In fact, I think it’s the most impractical use of the technology that I can think of right now. While it would be neat to wave my phone at a cash register and finish my transaction that much faster (without having to break my IM conversation to pay, mind you), the true usefulness of NFC tech comes in the form of self-created NFC “tags”, much like what Google has included with their Nexus 7 tablet, and that Samsung is touting as a feature of their Galaxy S III.
The draw here is that a user can create “actions” and bind them to these little stick-on “tags”, which can then automate certain actions on their device. For example, a user could set a custom action script that would switch the device’s Bluetooth on or off (depending on current state), put it in pairing mode, and open the Bluetooth settings page to look for other devices. The user could then stick this tag to the back of a Bluetooth keyboard and, by tapping the device against that tag, easily set up pairing between that device and the keyboard – saving the frustration of having to manually re-pair when switching between two or three devices. Another possible scenario could see restaurants placing NFC tags on tables that diners could tap to toggle their phones to “silent” mode; when they finish their meal, they simply tap the tag again to toggle the phone back to its previous settings. A user could check into a venue on foursquare or Facebook, join a home or work wi-fi network, or launch certain apps just by tapping a tag stuck to the front door.
The point is that NFC, with a little ingenuity, can automate repetitive tasks or common actions that are tied to very specific places. Sometimes geofences aren’t enough to give users that kind of granular control. Airplanes could have NFC tags located at the main entrance that put the phone in, well, airplane mode. Travelers could simply tap the device against the tag and ease in for takeoff.
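The whole scheme boils down to a lookup: the tag stores a short identifier, and the phone maps that identifier to a saved action. Here’s a toy sketch of that dispatch model – all the tag IDs, setting names, and actions are hypothetical, stand-ins for what an app like NFC Task Launcher would manage for you.

```python
# Fake device state for illustration purposes only.
phone = {"silent": False, "bluetooth": False, "airplane": False}

def toggle(setting: str) -> None:
    """Flip a setting, mimicking the tap-again-to-undo behavior described above."""
    phone[setting] = not phone[setting]

# Each tag ID the user has written maps to a saved action.
tag_actions = {
    "restaurant-table-7": lambda: toggle("silent"),
    "keyboard-back": lambda: toggle("bluetooth"),
    "airplane-door": lambda: toggle("airplane"),
}

def on_tag_scanned(tag_id: str) -> None:
    """Called when the phone reads a tag; unknown tags are simply ignored."""
    action = tag_actions.get(tag_id)
    if action:
        action()

on_tag_scanned("restaurant-table-7")  # phone goes silent for the meal
on_tag_scanned("restaurant-table-7")  # tapping again toggles it back
```

The tag never changes; only the mapping on the phone does, which is why the same sticker can mean different things to different people’s devices.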
Apple cites the inclusion of the Passbook app as perfectly adequate for most consumers. What Apple is referring to, in this case, is the very specific use case of using the iPhone as a credit card or payment device. To their credit, I believe they’re right. The average American doesn’t want to deal with the steps necessary to link a device to a credit card or other account.
But what about all the other uses? I guess we’ll just have to wait until next year.
Not sure why you’d need this if you already have Pages, Keynote, and Numbers. Maybe I’m not the target audience? Who is the target audience?
Being a gamer, and reading game-related news, I was a little surprised at this article from Rock, Paper, Shotgun, which talked about a shift in policy on EA’s part regarding the marketing and sales of games on Steam:
> HMM. This demonstrates incredible confidence in EA’s own brands, but the key back foot they’re on is that they don’t have any other publishers they can bring on board. What would change everything in the war against Steam is if the other major publishers launched their own Origin-like services and restricted their download sales to those. I won’t be at all surprised if that happens, as a few are quietly building the infrastructure – THQ have a store, Ubisoft have that uPlay thing, Blizzard obviously sell their own digital stuff direct… You could even see Call of Duty: Elite as heading vaguely in that direction.
It feels like this is a trend that’s moving very quickly. When we see artists, developers, etc. selling their own stuff without a store or aggregation service to market their wares for them, we enter into a different kind of relationship with the creator – one that is more one-to-one, as opposed to separated by the rift of the store.
Before, people would go to a single place to find stuff. This method of curation led people to associate their buying and their consumption with a place, a store, an entity somewhat divorced from the source of the goods. That association is misleading, and can be frustrating for customers, who don’t necessarily know where their stuff comes from. It also robs people of creativity and imagination.
Now, with the proliferation of creators on the internet, there’s an increasing emphasis on discovery. That means that people need to be more self-aware and understand their wants and likes more. It also means that the creators have to have more clout since no one is doing their marketing for them. Either that, or a lot of really awesome relationships to build on.
This reminds me of Trent Reznor’s recent push into digital publishing:
> Like a more magnanimous Radiohead, Reznor’s called into question the major-label reserve clause for established, profitable musicians by not just coming up with a new way to monetize music, but just giving it away for free, no strings attached. Instead of “tip-jar,” it’s “this one’s on me.”
and, of course, there’s always the “original” self-released album:
> This is a hint of things to come. Over time more artists will decide to self-release music in this fashion, thus creating long, staggered release windows that place serious fans first and more casual fans further back in line. Traditional retail must wait in line, too. That means service companies that provide the tools and expertise for the online self-release of albums will benefit from this self-release strategy while the second wave of consumers are left to retailers.
What remains to be seen is if self-publishing will win out over a curated experience like the various “App Stores” that are cropping up all over the place. Clearly, if a developer or creator of something wants all the money, they’re going to have to sell it themselves. If they want maximum exposure, they have to give a little of that up to be on one of these stores. This will be interesting to watch, for sure. Will we see increasing fragmentation or consolidation? Or, still possible, some strange hybrid of both.
For years now, I’ve been a loyal user of Google. I can remember the day I made the transition from .Mac to the power of the Google. It was quite an interesting day. I was convinced that Google was the way to go. Everything in sync, all the time. All my information, my photos, my calendars, email, everything. Everything was going to live in the cloud with the Google overlords watching benevolently as I used their services, added to their search statistics, usage patterns, and more. I became obsessed with keeping everything together, everything synchronized across multiple devices, using their servers as a free way to organize my life.
What I learned, however, was that I wasn’t as much interested in the sync or the connectivity; I was more interested in not losing everything. Being a student and a geek for many years meant that I was always trying to push the limits of technology in the classroom. I was using my 12″ Powerbook G4 in classes long before teachers were comfortable with the idea. I was tethered to my Nokia 6600 through Bluetooth and surfing the web happily while my teachers lectured. My mind was busy, I took copious notes, and, come exam time, had a far easier time studying than my peers with binders and notebooks full of handwritten scribble (“When did we talk about that? Was it before or after the thing about the guy…?” *Command+F* “Nevermind, found it.”)
When my computer suffered a critical hard drive failure, however, I lost almost everything, and it became clear that I needed a way to back everything up against a similar failure. I had managed to back up a significant portion of my (for those days) gargantuan music library (60 GB!), so it wasn’t a total loss, but my notes, papers, research…all gone. I was upset, which is why I started using Google Docs. I never wanted to be left in the position of losing everything and being without some record of my past. Obviously, using Google Docs led me to Gmail, and Google Calendar, and Google, well, everything.
There’s a special insanity that comes from using Google products, as I’m sure is true of using basically any product almost exclusively, but Google’s is particularly sinister and particularly ubiquitous. In a matter of a few months, I had made the transition to Google almost completely, and I started learning to do things “The Google Way.”
This refers to the accumulation of knowledge, facts, data. You essentially start to look at your life as a series of interconnected services, all of which are always accessible and always updated and live in this magical land known as “The Web.” It’s comforting. You don’t have to worry about software updates, managing resources, or crashes. You just open your web browser, and you have Google (I mean that almost literally, since almost every single browser has a little search box that takes you through the goog’s massive brain). And it’s free! Wow!
Then you look closer, and you realize something. Google has eaten your life.
You’ve given them everything: your contacts, events, email, documents. Literally everything you need to make your life work. They have it all. In return, they gave you the tools to create more, give them more. Sure, there are privacy concerns there. How many articles have you read about Google’s scary in-email advertising? It’s not really invading anything, and they’re not really “reading” your email, but people are creeped out by it. It’s uncomfortable. It’s like cheating on your taxes, and then opening a fortune cookie that says, “Cheating is bad!” It’s not meant for you, it’s a random cookie from a bag of a thousand, but it feels like it’s for you, and you get all weirded out and feel guilty (I’m not saying it’s ok to cheat on your taxes, just trying to illustrate an example).
Even more sinister, however, is how Google has made you a slave to numbers.
Your inbox, your unread count on Google Reader, your new voicemails on Google Voice, your unread Waves. These and more pull you in a thousand directions, trying to coax you away from productivity, focus, or whatever else you may have been doing. Suddenly, instead of reading that book you wanted to read, you’re wrapped up in a conversation with a friend from halfway across the world while reading the latest Guardian headlines. Then you’re behind, but those numbers keep getting larger, and you can’t get to that paper until your unread RSS count hits zero.
You’ve gone crazy.
So, I decided to pull the plug on all that. No more Google. I got a subscription to MobileMe, synced all my contacts, and was on my way. I don’t have an “unread” count anymore, just a folder full of sites that I like to check. If I have time, I read them. If my day is too busy, I don’t. The world will survive, it moves on. I don’t need to check my inbox constantly. I can, if I want, but I like knowing that my email is there for me to take care of when I actually have time to take care of it.
My victory wasn’t freedom from the privacy concerns; it was regaining an appreciation for doing things my own way. Having my computer, my iPad, and my iPhone tied together with a service that is expressly made to keep them in sync is amazing (like the old ads for Mac OS X Tiger, it feels “built-in, not bolted on”), but having the ability to handle my work my way? Beautiful.