Benedict Evans once talked about a sort of “openness Tourette’s Syndrome” that occurs whenever people discuss Apple’s platforms vs. competitors. Basically, it goes like this: someone mentions how good an Apple platform is, and then someone else says, “Yeah, but Android is open.”
There’s a pleasant sort of fiction promised by “open” that simply isn’t a viable reality for most people. I’ve heard salespeople use it in retail stores, and I’ve heard IT professionals use it when offering Android to their clients. This demonstrates a fundamental misunderstanding of what “open” actually means in Android’s case. The type of “open” that people typically mean when they use that word is conflated with “extensible” or perhaps “has relaxed security”, which are very different things from the “open” that Android was conceived with.
Android’s initial form as a project was open-source, and the Android of today is still technically “open-source” but, due to its reliance on Google’s services and cloud features, the current version of Android that comes loaded on many phones is not nearly as “open” as many would have you believe. Would you like to use another mapping service? How about something other than Google Now? Can you use the features of the home screen without being tied to Google services? Sadly, no.
That doesn’t mean that one couldn’t install the Android Open-Source Project’s fork of the operating system, but it means that the marquee features of the operating system, the things that Google and Android fans like to wave in the air, are inherently tied to Google and make it very difficult to use non-Google-developed operating systems.
Instead of this word “open”, then, let’s use the word “extensible”, since that more accurately reflects the Android OS’s ability to facilitate communication between apps, and to allow developers to build software that adds functionality to the OS or preexisting apps.
The problem with Android up to this point has been that security has not been (or at least hasn’t appeared to be) a priority for developers or users. I could try to offer up what I see as the reasons for this behavior (laziness, “Accept” fatigue), and I may be wrong, but from what I’ve seen, Android users are more than willing to download apps and grant them almost unrestricted access to their mobile device without really thinking through the ramifications.
Apple has avoided this for many years by sandboxing their apps and keeping inter-app communication on the back burner until they developed a way to allow apps to communicate effectively without sacrificing a user’s privacy or requiring them to grant unnecessary privileges to an app that really shouldn’t need them. Naturally, this came at a cost. For years, iOS users have not been able to install third-party keyboards or send information between apps in a way that was “easy” (to be fair, the iOS “Open In…” functionality has allowed users to send documents and files between applications for some time, but it required a degree of savviness from users that was sometimes lacking).
Now that Apple has introduced the ability for developers to create “Extensions”, however, that gap has very quickly been bridged, and iOS 8 will allow developers to create new ways for their apps to interact. Some may argue that Apple’s approach differs from Google’s, but the end-user result is basically the same: a person will be allowed to install and use third-party keyboards, send information between apps, and interact more directly with the data in other apps.
What I’m interested in seeing, however, is what the conversation will center around next. For many years, Android users have told me that Android has been superior because of its customizability. When I would press these users to provide me with more information about what “customizability” means, they would often say two things: support for third-party keyboards and home screen widgets.
These two “features” of the operating system, in my opinion, are not very important, and would often open a user’s device up to instability and/or unnecessary resource usage. I have used Android devices, and I have seen the home screen widgets for the apps that I use the most, and there is no version of reality in which the widgets provide a superior experience to using the app. Again, this is my experience, and maybe there are some people who really enjoy looking at two lines of their mail on their home screen underneath a line and a half of their upcoming calendar events, and not really being able to meaningfully interact with either until they open the app anyway.
Third party keyboard support has also perplexed me, but I can understand the utility for people living outside the United States, for whom third party keyboards can offer substantially improved text entry. That being said, none of the Android users that I discussed this with lived outside the United States, so it seems that their argument is a moot point, or at least purely subjective.
Thus, it seems to me that the discussion of Android as an “open” system (again, in the way that most people understand the term “open”) has lost much of its value. Android as an “extensible” operating system has also lost much of its value, as well (at least as a marketing ploy) in light of the new functionality of iOS 8. How, then, should we be defining “open”?
When we look at the post-PC landscape and see two operating systems that allow their users to interact with their data similarly, and enter information into their devices similarly, and allow applications built upon their platforms to communicate similarly, how should a person decide which device to use? Perhaps the discussion shouldn’t be centered around questions like “Which device lets me tinker with system files?” or “Which device will allow me to inadvertently break things if I wanted to?”, but should really be “Which device is better for humans?”
So, Apple has unveiled iOS 7 to much discussion, hand-wringing, and cheers. There are lots of things that I feel that Apple is promising to do right with this release, and a number of things that we will, of course, need to see to believe.
One of the most prevalent activities that iOS users engage in is photo sharing. Apple’s recently released “every day” video showcases the power of the iPhone as a camera. iPhone users know that their iPhone is probably one of the best cameras they’ve owned, and the millions of pictures snapped daily obviously underscore that.
I was intrigued by the keynote’s handling of photos, namely the application of filters and the introduction of “Shared Photo Streams” into iCloud. To be honest, I felt that this was a feature sorely missing from iOS and iCloud for a long time. The idea of a photo stream curated by a single person is fine if one of your friends happens to be a professional photographer or something like that, but most situations in which people are snapping photos tend to be social, with multiple people desiring to both view and (most likely) contribute to an album of the event. The trick lies in determining the canonical center of the stream. Who “owns” the photo stream? When people contribute to the photo stream, are they adding to a single user’s photos, or are they, in effect, “copying” that stream to their own photo collection and then adding to it, which can then be seen by other parties? Or, are the photos stored on Apple’s servers, where multiple parties “own” photos, can add to the stream, and then define who else “owns” those photos? “Own”, here, is an operative word, since ownership of the photos is tough to nail down, in this case.
I’ve always wondered how something like this would work, but it’s a problem that Apple absolutely has to tackle in order to stay relevant. As people add more and more photos of their lives to their devices, the storage of said photos becomes of paramount importance, followed closely by how people identify and integrate those photos into their identity. What has become increasingly obvious is that people don’t just craft an identity that is tied to a mobile device, they create a digital identity that the mobile device allows them to access. In order for these technologies to be relevant, they have to allow people to share photos and feel comfortable about storing them in a way that is non-destructive and still allows them to reference past events with ease. It’s clear that Apple is now moving towards more meaningful photo sharing, but it has yet to be seen if they can take this idea and use it to deliver the type of interconnectivity that people implicitly ask for.
One of the things that Apple did not address, and something I’ve heard from people who have recently switched away from iOS as their primary mobile platform, is that iOS hamstrings users by not allowing them to easily pass data between apps. While I agree with some parts of this argument, I can see Apple’s stance on the idea of inter-app data sharing. The scenario that I often hear from heavy Android users is that things like taking notes, or saving PDFs from one app to another, etc. are easier on Android. I don’t agree with this because I do those very same things every single day with iOS and, ever since Apple started allowing for custom URLs to pass data from one app to another, have never had an issue with that. As such, I think I understand their stance – that Android allows a freer exchange of data between apps using a more-or-less centralized file system.
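The URL-scheme mechanism mentioned above is simple to sketch. The example below is illustrative Python, not actual iOS code, and the “notesapp” scheme and its “create” action are hypothetical names invented for the sketch: the sending app percent-encodes its payload into a custom-scheme URL, and the app registered for that scheme decodes the query string when it is launched.

```python
from urllib.parse import quote, urlencode, urlparse, parse_qs

def build_callback_url(scheme, action, params):
    # Sending side: encode the payload into a custom-scheme URL,
    # e.g. notesapp://create?text=Meeting%20notes
    return f"{scheme}://{action}?{urlencode(params, quote_via=quote)}"

def handle_incoming_url(url):
    # Receiving side: decode the action and payload back out of the URL.
    parsed = urlparse(url)
    payload = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    return parsed.netloc, payload

url = build_callback_url("notesapp", "create", {"text": "Meeting notes"})
action, payload = handle_incoming_url(url)
# action is "create" and payload["text"] is "Meeting notes"
```

The round trip is the whole trick: no shared file system is needed, just an agreed-upon scheme and query format between the two apps.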
One thing that we saw in the WWDC keynote, however, is the introduction of a new tagging feature in Mac OS 10.9, which, I believe, is going to be Apple’s eventual answer to the file system. Instead of files being stored on the device, in a folder, they’ll be stored in iCloud, accessible as clusters of files related to a specific idea. This is, finally, the kind of intelligent organization that Palm’s WebOS got right. Ultimately, people don’t really organize their data by app, they organize it by idea or topic, which is a far cry from having data “live” in an app.
I think the ultimate goal is to enable a user to cluster files together around a central theme or project that they may be working on, and make that cluster available as an item in an app that keeps track of and syncs tags across platforms. Ostensibly, the user could open the app, see all of their tag groups, and (possibly using a Photos-app-style pinch-to-spread gesture) see all of the files in that tag group. Tapping on a file would open a list of corresponding apps that are capable of handling that type of file. Interestingly enough, this may also allow Apple to put a little more control in the user’s hands by allowing the user to pick which app would be the default handler of that file type. In this manner, people don’t necessarily have to know where to look for their files, they need only to open the “Tags” app, find the group they want to work on, and tap the file they want to work on in that group. The OS then passes that file to whatever helper application the user has selected as default, and they’re off to the races. A system like this wouldn’t satisfy every Android lover’s desire for a true file system, but Apple wouldn’t need it to – the average user would see this as a new feature, and customers on the fence may see it as a tipping point.
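As a thought experiment, that tag-group model can be sketched as a simple index. Everything here is hypothetical – the class, its methods, and the extension-to-app mapping are invented for illustration, not any actual Apple API: files cluster under tags rather than folders, and each file type carries a user-chosen default handler.

```python
from collections import defaultdict

class TagIndex:
    """Hypothetical sketch of tag-based file grouping (not an Apple API)."""

    def __init__(self):
        self._tags = defaultdict(set)  # tag -> set of file names
        self._handlers = {}            # file extension -> default app

    def tag(self, filename, *tags):
        # Cluster a file under one or more ideas/projects.
        for t in tags:
            self._tags[t].add(filename)

    def files_for(self, tag):
        # The "Tags" app view: every file in a tag group.
        return sorted(self._tags[tag])

    def set_default_handler(self, extension, app):
        # The user picks which app opens this file type by default.
        self._handlers[extension] = app

    def handler_for(self, filename):
        # The OS hands the file to the default handler, or asks the user.
        ext = filename.rsplit(".", 1)[-1]
        return self._handlers.get(ext, "ask user")

index = TagIndex()
index.tag("draft.pdf", "Thesis")
index.tag("notes.txt", "Thesis")
index.set_default_handler("pdf", "PDF Reader")
```

Opening the “Thesis” group would list both files; tapping `draft.pdf` would route it straight to the chosen PDF app, while `notes.txt` would prompt for one.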
This one is weird to me, but I like the way that Apple has addressed it in the update, with the WebOS-style “cards” interface informing this component of the OS heavily. The ability to see live updates of each app, or at least the current status of each open application as the user left it, is another way Apple brings parity with Android, but does it better. I’ve seen Android’s task-switching waterfall, and it has always felt too sterile to be enjoyable to use, although I believe that’s more a fault of the OS design language as a whole than of that specific part of the interface.
There have been a not insignificant number of words spoken about the changes Apple has made to the look of the stock app icons in iOS 7. To be honest, I feel like this whole discussion is completely moot. App icons are incredibly important, to be sure – they are the way a user identifies your application in the sea of other apps on their phone – but they are somewhat arbitrary. They need to be well-designed, but there is a certain “minimum effective dose” that allows most people to identify the app they’re looking for and associate it with the task they’re looking to accomplish.
When Apple made the choice to redesign the stock app icons, the folks behind Apple’s design choices exposed their design process as well as the grid-based layout system that informed the icon designs. There were comments made by graphic designers about how Apple’s layout choices were half-baked or wrong, and other comments that discussed how the color choices were catering to a younger generation, or the aesthetic biases of the cultures in new and emerging markets. Regardless of the reason behind the choices, I can’t help but relegate all of this commentary to the trash heap for the same simple reason: all of these comments are about a subjective experience. Of course Jony Ive wants to create an experience that is beautiful, familiar, approachable, friendly, and functional…but there are so many ways to accomplish this, and all of the commentary comes from a single data point in the universe. Even assuming that all of these designers and amateur critics were able to ascertain some objective truth about these designs that was universally applicable, they all have differing opinions – some of them conflicting – and it must thus follow that they’re either all right, or all wrong. I’m clearly in the latter camp. People are going to take a look at the icons and freak out because they’re different, and then everything will go back to normal and everything will be fine because, in truth, app icons only matter as pointers to something a user wants to accomplish. Once users draw new associations in their minds, they’ll be fine.
The Little Things
There are, of course, things that Apple hasn’t mentioned or brought up, most likely because they simply didn’t have enough time to do so, but I feel like I should mention them here for the sake of completeness.
While I know that not all of these things will be addressed (or even should be addressed) because of the focus that Apple is trying to maintain with iOS, there are some things that venture into that grey area that exists between the worlds of Mac OS and iOS. The first of these is the way the OS (and many apps) handle external keyboards. Safari, for instance, is able to handle a “Tab” keystroke, but does not recognize Command+L to put the cursor in the address bar, or Command+W to close an open tab. These aren’t necessarily “shortcomings” of the OS, but neither do they enhance the user experience. I’ve never thought to myself “Boy, am I sure glad they left out those keystrokes! My life is so much easier!” With this type of behavior, I’m not sure if the omission is intentional or not. Apple is a very intentional company, but something like this feels like an oversight as opposed to a deliberate design decision.
Naturally, when people see new OS announcements from Apple, they assume that new hardware is going to follow closely behind. Something that I heard recently was that Apple’s new design, while beautiful on all current iOS devices, absolutely sings and looks right at home on the new devices that Apple has lined up for the fall. What these devices are is anyone’s guess, but I don’t think anyone would lose betting on a new iPhone. New iPad minis, iPads, and possibly iPod Touch units may also be in the works, but it isn’t completely clear yet exactly how these things will take shape, and what sort of changes we can expect. I love looking forward, but I don’t “do” rumors, so I’m not going to waste any time on speculating about what Apple is working on.
Ultimately, the new iOS version that Apple has introduced to the world looks great and, based on what I’ve heard, feels amazing. I have no desire to start ripping on an OS that’s in beta, nor do I have the desire to laud it. While it’s exciting to see a refresh to the world’s most important mobile OS, the proof will be in the pudding once it’s been finalized and released.
Apple announced the iPhone 4S yesterday, much to the chagrin of the internet. Well…perhaps not to the chagrin of the internet, but everyone was expecting something called the “iPhone 5”, and Apple announced an absolutely amazing piece of kit they’re calling the “iPhone 4S”.
There was a lot of backlash, from what I understand, which seems…silly? I think that’s probably the best word to use right now. Silly.
See, the iPhone 5 was supposed to have all these amazing features, like a dual-core A5 processor, a higher-resolution camera, and image stabilization when shooting video. It was supposed to do all these amazing things with even better battery life, too. What a product! Yet, what we got was…the…wait let me check on this…we got the iPhone 4S…thing…with a dual-core A5 processor, higher-resolution camera, image stabilization, and something called “Siri”. Ok? But this silly piece of hardware is…well just look at it! It looks the same as the iPhone 4! And it’s called the iPhone “4S”. PEOPLE can you hear what I’m saying? It has a four in the name. Four is not five, my dearies. This is clearly a disappointment.
Let’s talk about what’s NOT in the iPhone 4S:
I think that about covers it.
Seriously, though, the next iPhone is revolutionary. Not because it looks like an iPod touch, but because it’s basically an iPad 2 in the palm of your hand.
I don’t think it’s time for a chassis redesign, and I’m glad they stuck with the iPhone 4’s slick glass and steel thing. There’s so much more in there, and all it will take for people to understand the beauty of the iPhone’s new guts is moseying down to their local crystal palace (aka Apple Store) and fiddling with the thing for five minutes, in which time they’ll realize that they can be twice as productive with this new pocket computer than they are with their current one. Game, set, and match.
The announcement of Windows 8 and its subsequent demos were interesting for many reasons, not the least of which was its inevitable comparison to Lion, iOS, and Android. To be fair, Android should generally be left out of this comparison since it doesn’t have a true desktop operating system (yet), but periodic comparisons have to be made.
Windows 8’s user interface, at least the touch portion, looks good. I like the clean, muted-color aesthetic, and the transitions between apps look natural and pleasing (similar to the Star Trek LCARS aesthetic, but a little bit sharper). The way that Microsoft is pushing this thing, however, is silly to me. Recently, I wrote about Apple playing “catch-up” with this release, but something I failed to address was the simplicity of the whole experience, and this is ultimately the most important aspect of the entire OS.
I can wax poetic about the history of the personal computer and its role in our lives, the changes that personal computing has brought to our lives and how we experience the world around us, but it’s unnecessary. We all know that the face of personal computing is changing rapidly and being redefined constantly. Instead, let’s ask a fundamental question:
Why are computers becoming “simpler”?
I know lots of people who lament the “over-simplification” of today’s computers. Computers should require lots of specialized knowledge and time to learn, in the opinion of many folks. They don’t understand that all the layers of schlock the operating system puts in between a user and the task that they wish to accomplish are unnecessary and silly. In order to build a spreadsheet in Microsoft Excel, for instance, I need to understand Microsoft Excel conventions and jargon on top of Microsoft Windows conventions and jargon. This is difficult, and it gets in the way of actually accomplishing things. The learning of the tool takes up more time and energy than using it. This is bad technology, but has come to be accepted.
Companies that design software continue to simplify it so that it gets out of the way of the user’s intentions and goals, so that it helps them accomplish what they need to accomplish. This is good technology, and has difficulty gaining traction because the software almost looks like a toy compared to the byzantine and grotesque software hydras that have become commonplace in the world.
I remember using my Palm Treo 700w, the Windows Mobile version of the popular Palm smartphone. It had a custom build of the Windows Mobile software that was tuned for the Palm “experience”, which meant that it worked better than standard Windows Mobile. I thought it was really great, but what really blew me away was a software shell (a layer over the standard Windows Mobile OS that looked way better than the standard home screen) by SPB called “SPB Mobile Shell”. This was right around the time the iPhone was introduced, incidentally. I tried using this OS shell for a while because it was “touch-friendly” (where have we heard that before?), but ultimately gave up because the phone was still using a really ugly, really unusable OS designed for styli under all the gloss and shine.
This is what Windows 8 is, and it’s flawed. The key is to change what the OS is at its heart, change the way that it interacts with the user, change the way that the OS feels. If the user gets the opportunity to “peek under the hood” of the OS, he or she will see the ugliest, most confusing parts of the system laid bare. The gears, cogs, oil, chugging engines…everything. In Windows 8, this is still confusing, ugly, overcomplicated. In the Apple world, this is simpler, easier, uncomplicated. This is where Windows will fail. If I start to use an app that hasn’t been designed for the new “touch-friendly” shell (which is essentially all Windows 8 looks to be), then it fires up in its old, byzantine, bloated-hydra form. With iOS and the future of MacOS, this isn’t even an option. If an app hasn’t been designed to be used full-screen, it doesn’t matter, it’s still usable in its beautiful, native form.
This brings me around to the initial question of “Why are computers becoming simpler?”
The computer-savvy elite that used to be the only folks for whom computers were intelligible are no longer the only ones who can use computers, and computers are becoming simpler because that’s what we all want. Even programmers, developers, and coders want everyone to use computers. Everyone! They want their software to shine on beautiful hardware, too, and we’re seeing this happen from the world’s most innovative companies. Microsoft, however, doesn’t seem to get it. They want their software to be archaic, opaque, and impenetrable when it comes to interacting with the user.
Maybe that’s why they keep losing.
So the big announcement is iCloud, iOS 5, and Lion. These are all good things, and probably make clever use of a new, powerful back-end that will hopefully be a major part of Apple’s strategy going forward. One of the interesting things to see will be how Apple prices this “new” service, if it’s going to be considered “new” at all.
I agree with what TUAW has to say about Apple’s paid vs. free options being a part of its iCloud (née MobileMe) plans. I can’t imagine that Apple would ignore the vast potential in this market. There’s just no way that any company in their right mind would ignore the power that a uniting backbone would have in its ecosystem.
It’s been a perennial rumor that Apple will stop charging $99/year for much of its MobileMe service. The rumors have always suggested Apple will offer basic services (like email and over-the-air device syncing) for free, while paying subscribers will have access to things like website hosting, online photo galleries, storage options through iDisk, and now potentially wireless streaming of music via the rumored iCloud service.
Then there’s this article by AppleInsider that offers up another possible interpretation, namely that Apple will be introducing a “tiered” pricing model to their new iCloud service based on the user’s operating system. I don’t think this is going to happen, since tiered pricing is uncharacteristic of Apple.
That price tag may remain for users who do not make the upgrade to Lion, or for Windows users. But it is expected that the cloud services will become free to Mac users who run the latest version of Mac OS X.
My opinion is that Apple will introduce some kind of free option. Just about every big tech player out there offers some sort of free email option, and that’s by design. By pulling people into your ecosystem, you grab mindshare and envelop them in whatever “culture” your product or service suite represents.
There’s also the increasing awareness of what email addresses mean. A person with an “@me.com” email address is telling the world “I probably own a Mac or iOS device, and have the ability to view whatever files you’d like to email me or access just about any site you send my way.” This is important in today’s business world, where the data is less important than the connections it represents. A business owner isn’t going to say, “Hey, can you send me that file in a Keynote? I have an iPhone.” No, they’re just going to be able to open it because they have an iPhone. Offering their customers even more integration, stability, and ease-of-use would be a huge selling point for Apple, and will also pave the way for their future plans for FaceTime (which I believe Apple will push heavily as a replacement for phone calls in the coming years).
Exciting stuff, can’t wait for the Keynote.
So there have been a lot of approaches to this whole smartphone/tablet combo, and I struggle to see how any of them are truly good approaches to something that really isn’t a problem to begin with, and, truth be told, some of them seem actually harmful to the future of the PC that we’re currently headed toward.
For some reason, tablet manufacturers keep insisting that the tablet experience is hamstrung on its own, and continuously mandate the use of some sort of phone in order to complete the experience, or even use the device at all. Before anyone jumps on me for that sentence, I know that two of those examples aren’t even tablets, but take that in the spirit of the statement.
Companies designing these personal, productivity-driven devices that are reliant on smartphones are saying several things simultaneously. “You can do more!”, “You don’t have to manage the data on two devices separately!”, “You have more flexibility!” etc. What is really happening, however, is the cheapening of these devices and damage to the overall industry. Let’s take the Palm Foleo, the first of its kind and arguably the predecessor to the netbook. This device was “revealed” in an era when people got their data connections by tethering their devices to bluetooth-capable phones, so it made sense for the Foleo to then suck data out of its tethered Treo. Kudos to Palm for attempting to create a great ecosystem, too. I applaud that. I think it was too revolutionary at the time, however, which led to its ultimate failure. (Side note: At the time, I was using a Nokia N800 paired with a Sony-Ericsson K790a (James Bond, FTW!). I loved both of these devices, but I kept thinking “I’d like to be able to use this tablet if I ever forget my phone,” and “I wish this phone was more capable at general ‘computing’ tasks so I can still use it if I ever forget my tablet.” Then I got an iPhone. At no point, however, did I think that the phone should be my gateway to the Internet for another device. It stood on its own and was perfectly functional.)
Currently, however, having this sort of dependence tells the consumer that
- Their device is not capable of real work (which is a lie).
- Their larger laptop/tablet is no more than a large phone (which is also a lie).
- The two devices are explicitly codependent.
This is really bad! It further solidifies the view that phones are “just” phones, and that tablets are “just” big phones. I have taken notes, written papers, and read books on my iPhone. The fact of the matter is that this device is powerful and capable of producing real work that I have gotten graded, real research that I have used to write papers and blog posts, and real communication with people oceans away. The reason that I have an iPad and an iPhone is because I want two separate devices, not some crazy Frankenstein monster of a device. There are times that I need to work on just one device, and, let’s face it, sometimes we just forget one at home. The key isn’t creating a physical bridge between the two that mandates the existence of one in order for the other to be used, it’s creating an invisible backbone that allows these devices to share information invisibly, so that the user can put one down, pick the other up, and resume working exactly where he or she left off. There have been hopes of iOS “state” cloud syncing for a little while, and this is truly where this needs to go.
We don’t need devices that are tethered together using wires and plugs, we need devices and services that are smart enough to get out of the way and let our intention take center stage.
Update: Corrected spelling of “Padfone.”
It’s been ages since I abandoned the Skype app on my iPhone for any sort of serious communication. The push notifications are totally bogus, and their ability to actually handle an incoming call is pathetic. Not to mention this little thing called the iPad 2 with a front-facing camera that they seem to have ignored.
Now that Microsoft has purchased them, I hope there’s some sort of renewed interest in the iOS app. Otherwise, stay away from this one and go with the much better TextFree for making calls and/or Tango for video chat.
There was a recent incident over the border with our friends in the north regarding internet usage and the billing thereof. Those silly Canucks thought it would be appropriate to put ridiculous data caps (50 GB? seriously?) in place to make sure their customers weren’t doing anything cRaZy, like using the internet they paid for. No, silly person! You can’t watch streaming video on the internet or rent movies from online providers! That’s silly! You need to drive out to a video rental store and take home a physical disc so you can watch it in your deeveedee player. What’s that you say? All the video stores are shutting down because all of these super awesome streaming movie companies are putting them out of business? Pish posh. Less talking, more driving to video stores. They don’t have what you want? Just rent something anyway. Rent it. Just shut up and rent something.
Before I get too carried away, this is what I’m referring to:
Canadian cable provider Shaw hit back at mounting complaints of restrictive bandwidth caps by unveiling a new set of Internet plans with much looser caps and increased speeds.
The whole thing is ridiculous, and honestly degrading to consumers in general. There is no need to be imposing these types of restrictions on the average consumer. If there’s a problem with a few users eating up hundreds upon hundreds of gigs of data each month, then address the issue with them. Otherwise, putting data caps in place, even large ones, as listed below, is asinine.
Starting June 7, capped plans will start with at least 400GB of data per month at 50Mbps down, 3Mbps up at $59 per month for those with a Legacy TV package, moving up to 100Mbps down, 5Mbps up and 750GB of data for $79 per month.
A second phase in August will add a 250Mbps download, 15Mbps upload plan with a 1TB cap for $99. Both phases will have genuine unlimited plans. In the first phase, a 100/5 unlimited plan will be available for $119 on top of the TV plan. From August onwards, this plan will be replaced by a 250/15 version for the same price. Existing 1Mbps, 7.5Mbps, and 25Mbps plans are getting an immediate boost from 15GB, 60GB, and 100GB caps to 30GB, 125GB, and 250GB respectively.
It sounds all fine and good, right? To be honest, I don’t think I’ve ever hit anywhere near that amount of data in all the time I’ve been using the internet, so I’m not complaining about the size of those limits, I’m complaining about the idea that caps need to be instituted on a large-scale basis. It’s condescending and hostile towards consumers. The article then taps into the discussion ongoing in the United States right now:
Internet providers in North America have regularly tried to claim that the rapid growth in online video has raised the costs of maintaining their networks and that they allegedly need to institute low caps to keep these costs in check. Critics, including smaller providers and advocacy groups, have shown evidence the claims are often false since the cost of bandwidth has often gone down. They have at times accused companies like Bell and Rogers of using low caps to either delay network upgrades or to discourage competition from nimbler rivals to traditional TV, such as iTunes and Netflix.
The fact of the matter is, Internet usage is increasing, and telecommunications companies are shaking in their boots because their fat paychecks are going to start dwindling. I’m all for making money, but not when it comes at the expense of customer satisfaction. The trend here is, as I said before, hostile. No company should treat its customers like they’re harming its business. Your customers are the reason that you’re here to begin with, and don’t you dare try to justify your actions by pointing the finger at innovation and progress.
Was perusing my Apple feed when I came across this headline:
I’m still amazed that there are companies out there who believe that subscriptions on the iPad are a bad idea, or that they need to test the waters. That’s insane. What these companies need to understand is that the iPad does, certainly, represent (or stands at the forefront of) a digital publishing revolution. I could get the New Yorker Twitter feed, I could subscribe to the RSS feed, but it feels different when you see the classic New Yorker covers splayed on your screen in glorious color. It’s a good app, and it’s a good experience (unlike the absolutely horrendous, awful, want-to-vomit “The Daily” app).
Subscribe today, you’ll really like what you see. And no, I’m not on Condé Nast’s payroll, I just like the app.