Benedict Evans once talked about a sort of “openness Tourette’s Syndrome” that occurs whenever people discuss Apple’s platforms vs. competitors. Basically, it goes like this: someone mentions how good an Apple platform is, and then someone else says, “Yeah, but Android is open.”
There’s a pleasant sort of fiction promised by “open” that simply isn’t a viable reality for most people. I’ve heard salespeople use it in retail stores, and I’ve heard IT professionals use it when recommending Android to their clients. Both demonstrate a fundamental misunderstanding of the various things “open” can mean for Android. The “open” people typically mean when they use that word is actually conflated with “extensible”, or perhaps “has relaxed security”, which are very different things from the “open” that Android was conceived with.
Android’s initial form as a project was open-source, and the Android of today is still technically “open-source” but, due to its reliance on Google’s services and cloud features, the current version of Android that comes loaded on many phones is not nearly as “open” as many would have you believe. Would you like to use another mapping service? How about something other than Google Now? Can you use the features of the home screen without being tied to Google services? Sadly, no.
That doesn’t mean that one couldn’t install the Android Open-Source Project’s fork of the operating system, but it means that the marquee features of the operating system, the things that Google and Android fans like to wave in the air, are inherently tied to Google and make it very difficult to use non-Google-developed operating systems.
Instead of this word “open”, then, let’s use the word “extensible”, since that more accurately reflects the Android OS’s ability to facilitate communication between apps, and to allow developers to build software that adds functionality to the OS or preexisting apps.
The problem with Android up to this point has been that security has not been (or at least hasn’t appeared to be) a priority for developers or users. I could offer up what I see as the reasons for this behavior (laziness, “Accept” fatigue), and I may be wrong on this, but from what I’ve seen, Android users are more than willing to download apps and grant them nearly unrestricted access to their mobile device without really thinking through the ramifications.
Apple has avoided this for many years by sandboxing apps and keeping inter-app communication on the back burner until they developed a way to allow apps to communicate effectively without sacrificing a user’s privacy or requiring them to grant unnecessary privileges to an app that shouldn’t need them. Naturally, this came at a cost. For years, iOS users have not been able to install third-party keyboards or send information between apps in a way that was “easy” (to be fair, iOS’s “Open In…” functionality has allowed users to send documents and files between applications for some time, but it required a degree of savviness that users sometimes lacked).
Now that Apple has introduced the ability for developers to create “Extensions”, however, that gap has very quickly been bridged, and iOS 8 will allow developers to create new ways for their apps to interact. Some may argue that Apple’s approach differs from Google’s, but the end-user result is basically the same: a person will be able to install and use third-party keyboards, send information between apps, and interact more directly with the data in other apps.
What I’m interested in seeing now, however, is where the conversation will center. For many years, Android users have told me that Android is superior because of its customizability. When I pressed these users for more detail about what “customizability” means, they would often name two things: support for third-party keyboards and home screen widgets.
These two “features” of the operating system, in my opinion, are not very important, and would often open a user’s device up to instability and/or unnecessary resource usage. I have used Android devices, and I have seen the home screen widgets for the apps that I use the most, and there is no version of reality in which the widgets provide a superior experience to using the app. Again, this is my experience, and maybe there are some people who really enjoy looking at two lines of their mail on their home screen underneath a line and a half of their upcoming calendar events, and not really being able to meaningfully interact with either until they open the app anyway.
Third party keyboard support has also perplexed me, but I can understand the utility for people living outside the United States, for whom third party keyboards can offer substantially improved text entry. That being said, none of the Android users that I discussed this with lived outside the United States, so it seems that their argument is a moot point, or at least purely subjective.
Thus, it seems to me that the discussion of Android as an “open” system (again, in the way most people understand the term) has lost much of its value. Android as an “extensible” operating system has lost much of its value, too (at least as a marketing ploy), in light of the new functionality of iOS 8. How, then, should we be defining “open”?
When we look out on the post-PC landscape and see two operating systems that allow their users to interact with their data similarly, enter information into their devices similarly, and allow applications built on their platforms to communicate similarly, how should a person decide which device to use? Perhaps the discussion shouldn’t center on questions like “Which device lets me tinker with system files?” or “Which device will allow me to inadvertently break things if I wanted to?”, but should really be “Which device is better for humans?”
There’s something strange that’s been happening in the world of tech as hotly anticipated products (primarily of the Apple variety) near launch: the world finds out about them long before they’re unveiled.
I think the entire phenomenon is so strange. When kids are young and looking forward to a hot new toy, they sometimes try to approximate its presence in their lives by creating an ersatz model to take the place of the real thing until they can actually touch, hold, and use the real thing. Strangely, this is happening with increasing frequency to the iPhone. The tech world is so hungry for anything iPhone that they will contract graphic designers to create 3D models of the new gadgets, and even go so far as to build full physical models.
The noise is deafening.
Post after post featuring blurry component photos hits the interwebs, and the tech press gobbles them up like bacon-stuffed donuts. Most folks don’t follow tech blogs, don’t really have a pressing desire to know the internal layout of new gadgets, feel no need to really seek this stuff out. They read what falls in their lap and, usually, are better and more sane because of it.
Then the device hits, and it elicits “yawns” from the peanut gallery because they’ve already seen it all. They make sweeping (often literally global) statements about the reception of the product, about the excitement it’s generated, etc. Their actions are, again, childish, just like the kid whose favorite team gets eliminated from the playoffs really early and starts claiming that no one likes [insert sport here] anymore, anyway.
Ultimately, they’re embarrassed.
Who wouldn’t be? Their phones are either knock-offs or fakes. The real deal is just that, and consumers know the difference. Companies will try to illustrate how their products “stack up” against Apple’s iPad, or iPhone, or whatever, but it ultimately just makes them look, again, juvenile. I could make a checklist that makes me look like the best human being ever compared to random people on the street. I could create a checklist of the features of a raw, uncooked potato and compare it to all the features of a slice of deep-dish Chicago pizza, but comparing those two things would make no sense. “Grows in the ground”, “Has eyes”, “Will sprout if placed in water” are all “features” of the potato that the pizza doesn’t have, but who really cares? I’ll take the pizza, thankyouverymuch.
Which leads me back to my point. The leaked specs, the feature parity, the checklists, etc. are all meaningless in the face of true user experience and the whole package.
A guy I know had his iPhone run over by a car. It was absolutely destroyed, which was sad for him. He was contemplating purchasing a replacement, but decided to wait it out until his contract was up for renewal so he could purchase a new iPhone 4S. In the meantime, someone gave him a Motorola Droid RAZR (or whatever it’s called…these things have the weirdest names). He ditched the Droid in favor of an iPhone 3G. You read that right. He disliked the Droid user experience so much that he went with a molasses-slow (comparatively) phone, simply because the overall user experience was so superior. When you’re on the losing team, shouting really loudly and making a lot of noise is still fun, sure, but it doesn’t win you ball games. Just ask Cubs fans.
At any rate, it’s clear that people are jazzed about the iPhone 5, and all these “yawn” reactions are just the tech news equivalent of Cubs fans getting uppity. People will choose good design and a fluid, beautiful user experience over checklists and noise.
As they say, it doesn’t take a genius.
So we’ve seen a lot of rumors regarding this fabled iPad 3 floating around recently. Things about screen resolution, graphics performance, and the like. Then I see something like this, and my brain almost explodes because the whole thing is just so inane.
Wh–are you serious? Instead of the rumored A6 chip? Rumored according to whom, you moron? You thi–oh wait. Hold on, I get it. I see what happened here. Hold on, let me go through this. Correct me if I’m wrong. I know I’m not, but I have to say that anyway.
“Oh wow, Apple is using its own chips in this stupid iPad thing.”
(iPhone 4 introduction)
“Hey neat! The iPhone 4 uses the A4 chip! I get it! Oh man, I figured it out and this is so cool because I know what I’m talking about.”
(iPad 2 introduction)
“This iPad uses the A5 chip? What? That’s so crazy! It’s like…wait a second, that means that the next iPhone is also going to be the iPhone 5! I’m so smart!”
(iPhone 4S introduction)
“Hold on. The iPhone 4S uses the A5 chip? And it’s not called the iPhone 5? Hold on. But…5 comes after 4, right? This phone is dumb because it’s not the right number.”
(iPad 3 rumor)
“This new iPad that hasn’t been revealed yet is supposed to use a chip that I made up because I know how to count! But there’s a picture that shows it doesn’t so that means Apple is dumb! Haha I’m so much smarter than Apple because my iPad 3 would have an A6 chip that would be so much better than Apple’s stupid A5X chip. Because it has a 6 in it. I might even just call it the iPad 6 because it’s so much better. Haha. Dumb Apple. Lulz.”
Go ahead and try to tell me that’s not how it went, and I’ll call you a liar to your face.
This won’t work. I’m not saying that it never will, but I don’t believe Apple’s framework even allows it; it’s disallowed by design. The whole idea of a phone that does the “bidding” of another company, or simply becomes a platform for another company’s ideas, values, and way of thinking, is absurd. Google might allow it because they’d find a way to monetize it, but can you imagine that? I mean actually take a minute to imagine a Google-Ads-ridden Facebook interface shoehorned onto an Android phone running some forked version of the OS. Jesus, it hurts to even think about. What a horrible, mind-destroying user experience that would be.
I’ve had the pleasure of taking part in a couple of Groupon deals, and I’m really happy I did. I was able to get some delicious food for relatively cheap, and I’ve seen lots of good stuff on the site for meager amounts of money. This is both a good and bad thing. It’s great for me, because I eat food, and I like it when food is both good and cheap. I like supporting local business by eating at non-chain restaurants and cafés, and I thought I could do both through Groupon. What I am discovering, however, is that Groupon can actually be very damaging. This is bad.
From a recent post on TechCrunch:
Groupon can clearly deliver customers. But in order to know if it makes financial sense as a customer acquisition tool, merchants need to know two key numbers:
- The proportion of Groupon customers who are already their customers
- How often new customers come back.
That second metric is key. I’ve seen a lot of businesses draw record crowds of customers after running a Groupon deal, but I’ve wondered how many of those customers will actually come back. I’ve always thought that a packed house of people with Groupons waiting in their pockets resembled a swarm of locusts: they consume everything they see and just move on to the next cheap thing. I’m not so sure I want to be a part of that.
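To see how those two metrics interact, here’s a hypothetical back-of-envelope sketch. Every number in it is made up for illustration (the voucher terms, margins, and customer split are not from any real deal); the point is only that existing-customer redemptions and repeat-visit rates decide whether a deal pays off.

```python
# Hypothetical back-of-envelope for a merchant running a Groupon deal.
# All numbers are invented for illustration -- not from any real deal.

voucher_price = 20.00     # customer pays Groupon $20...
voucher_value = 40.00     # ...for $40 worth of food
groupon_cut = 0.50        # Groupon keeps half of the voucher price
cost_of_goods = 0.35      # merchant's marginal cost per revenue dollar

merchant_revenue = voucher_price * (1 - groupon_cut)    # $10.00 per redemption
merchant_cost = voucher_value * cost_of_goods           # $14.00 of food served
loss_per_redemption = merchant_cost - merchant_revenue  # $4.00 out of pocket

# Profit on a normal, full-price visit:
profit_per_visit = voucher_value * (1 - cost_of_goods)  # $26.00

# Metric 1: redeemers who were already customers don't just cost $4 --
# they also cannibalize the profit of a full-price visit.
frac_existing = 0.40
loss_existing = loss_per_redemption + profit_per_visit
avg_loss = frac_existing * loss_existing + (1 - frac_existing) * loss_per_redemption

# Metric 2: repeat visits each *new* customer must make to break even.
breakeven_visits = loss_per_redemption / profit_per_visit

print(f"average loss per redemption: ${avg_loss:.2f}")
print(f"break-even repeat visits per new customer: {breakeven_visits:.2f}")
```

With these made-up numbers the deal is cheap customer acquisition only if most redeemers are genuinely new and come back at least once; skew the mix toward existing customers and the average loss per redemption balloons.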
Then there’s this whole idea on the back end:
Why is Groupon not merely a tech-bubble datum but a Ponzi scheme? Simple: Groupon has found that you can get local merchants to try anything once if it brings them new customers. A few local merchants in Chicago get them started, and Groupon shows good revenues. In fact, Groupon immediately remits half of those “revenues” back to the local merchant — they were never Groupon revenues in any meaningful sense of the word. But, optically, Groupon revenues look high — which they use to raise a financing round at a high valuation. Then they use the proceeds to hire vast armies of salespeople to dig deeper into Chicago’s local merchant community and repeat the trick in other cities.
This is so bad! I never considered this take, but it really does make total sense. I’m not a big investor, and when I do put my money into companies, I’d like to know that they’re actually good companies, not ones killing off local businesses, which have a hard enough time surviving on their own.
We all vote with our dollars every single day, and it’s a sad day when we tell small, local businesses that we won’t buy from them unless they gut themselves in front of us. I refuse to be a part of that, and I think you should, too.
On the verge of the announcement of iCloud, Lion, unicorns, iOS 5, and magic fairy dust, there’s an issue I wanted to dig into a little bit, something that the increasing use of cloud services is going to come into direct contact with: bandwidth.
I came across this article on TechCrunch and felt like I was in some sort of twilight zone. Here are a couple highlights:
The report encourages mobile networks to offer integrated rate plans, while “providing a wide range of segmented prepaid and postpaid tariffs”. It also puts stress on the potential for escalating revenues in the cloud, machine-to-machine, and mobile financial services, where networks can leverage their existing assets.
The cause of the impending crunch? The report indicates that the dire situation is caused by market saturation and falling average revenue per user, which is causing core revenues to flatline, while the cost of handling mobile data traffic is skyrocketing. As one might expect from the growing use of smartphones (Nielsen predicts there may be 100 million by the end of the year), cellular data traffic doubled in 2010. And it’s expected to increase thirteen-fold by 2015.
If you’re half as confused as I am, then you’re plenty confused. What is all this jibba jabba? The rhetoric pitched to us by data providers (I’m lumping everyone together here, not just the mobile providers) is, “Bandwidth is getting more expensive, guys! You’re using a lot of data, and that’s really expensive!” I don’t buy it.

The thing is, all these internet providers, whether they’re mobile or otherwise, have seen this coming for a long time. The writing was on the wall back when WAP was the only way to get up-to-the-minute weather and MOVIE TIMES IN YOUR AREA. All these carriers and ISPs knew that people were hungry for more information, and so they started developing new ways to deliver bits and bytes to people’s eyeballs. The problem is that, as the number of subscribers grows, they aren’t making as much per person, because people are getting wiser.

Coffee shops offering free wifi and libraries with computer access have become the de facto standard. Even grocery stores offer free wifi these days. As in, I walked into a local Jewel and was greeted by a giant sign telling me that the whole store was blanketed in sweet, sweet 802.11b/g. That’s fantastic! I got the same thing when I walked into Nordstrom. If offering internet access were so expensive, you can bet these businesses wouldn’t be offering it for free to anyone who wants it. “Come in and get you some interwebs here” has become this decade’s “FREE TOASTER” promotion. The toasters, as we all know, were cheap. Dirt cheap. People felt like they were getting a good deal because, hey, free toaster. Who doesn’t like toast? I don’t know of anyone. Who doesn’t like free wifi? *crickets*
Then there’s this regarding a recent dig into internet pricing in Canada:
Assuming an inflated cost of 10 cents per gigabyte, it means that Bell, Shaw and Rogers are charging consumers between 10 and 50 times what it costs them to deliver data. This on top of their regular monthly Internet pricing! While I agree that heavy users should be prepared to pay more once they have reached their bandwidth caps, a fair price would be much closer to 10 cents per GB than the inflated $1-to-$5-per-gigabyte charge sanctioned by the CRTC.
The argument that the exponential growth in Internet usage is the primary reason for higher prices is a seductive one. However, it ignores the fact that the technology that drives the Internet has become more powerful and much cheaper in the past decade.
So, exactly what I’ve been saying. Our infrastructure is evolving incredibly rapidly, and companies continue to use outdated arguments and backwards-facing data to justify rate hikes and consumer-hostile practices. And this isn’t a hypothetical problem. I talked to a friend recently about mobile internet usage, and how my internet usage is astronomical each month. He couldn’t understand it. Read my previous post “The Cap’s the Limit” on one of the possible models for Apple’s “music locker” service, and you can see how this is incredibly consumer-hostile. Not on Apple’s part, mind you; on the part of the carriers and ISPs. They’re the ones that see the future of media being cloud-centric, and they want to sink their teeth into more money, despite the fact that they’re already making plenty. Correction: they want more profit. Even so, I have a hard time believing that ISPs will be hurting on the profit front.
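To spell out the markup arithmetic in the quoted Canadian pricing piece, here’s a quick sketch. The 10-cents-per-gigabyte delivery cost and the $1-to-$5-per-gigabyte overage charges are the article’s figures, not mine:

```python
# Markup implied by the quoted figures: delivery cost vs. overage charge.
cost_per_gb = 0.10                     # quoted (already inflated) cost estimate
charge_low, charge_high = 1.00, 5.00   # quoted CRTC-sanctioned overage range

markup_low = charge_low / cost_per_gb
markup_high = charge_high / cost_per_gb

print(f"markup: {markup_low:.0f}x to {markup_high:.0f}x over delivery cost")
```

Even granting the “inflated” cost estimate, that’s a 10x-to-50x margin on overage data, which is the whole point of the quote.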
At any rate, this is going to come to a head sooner than later, and I hope we have some strong government officials on our side to help us fight off the behemoth businesses that clearly see themselves as big enough to make whatever rules they want at the expense of the consumer.
For a while now, I’ve been using an app called Audiogalaxy to get back to my music library at home and essentially have access to my library with over 100 gigs of music to supplement whatever tracks I have synced to my iPhone/iPad. It’s fantastic, mostly because I know two things:
- I have music on my iPhone that I can listen to anywhere, regardless of whether I have a data connection or not.
- I can, with a data connection, get access to my huge music library.
The recently-uncovered Apple patent application is simultaneously awesome and horrific for a few reasons, all of which have to do with #2.
One of the most explosive and formative things to happen to America recently is the widespread adoption of mobile data and internet usage. As I’ve discussed before, the mobile telecom providers have used this to push their agendas and create an awful dystopian future that the American wireless subscriber is going to end up paying dearly for. It’s going to be ugly, folks. Get ready for a future based on as-yet-unwritten disgusting rates based on AT&T’s greed.
If you think this reaction is a bit overblown, let’s dissect the groundwork that needs to be in place for a person to listen to music with Apple’s new system. A person would need:
- A computer running iTunes (for syncing purposes). This is pretty much standard, and shouldn’t come as a surprise to anyone.
- An iOS device with a data connection. Not everyone wants to or can run a persistent data connection. iPod touch devices are reliant on wifi, and people with the lower-tier AT&T or Verizon data plan (250 MB for $15.00/month, in AT&T’s case) may not be comfortable with a service that sucks up data every time they want to listen to a song.
- Possibly: the above computer with a persistent connection to the internet. This is a variable, and the future is hazy here. Depending on how the whole “Music Locker” thing will work, or how Mac OS X Lion’s home server functionality is structured, this may or may not be necessary. We’ll see.
Let’s assume that a person has an iPhone on AT&T’s $15.00/month plan with 250 MB of data per month. We don’t know how much of each song will be synced to the iOS device, but let’s assume it’s about 30% of each song, to allow ample buffering time. We can then “fit” roughly three times the number of songs on the iOS device due to each song’s reduced footprint in the device’s memory. The remainder of each song would then be pulled from a cloud. I say “a” cloud because it’s unclear whether that cloud will be the individual’s computer or this “Music Locker” service. Let’s assume it will be the person’s computer, so as not to incur any additional fees (yet).

The computer will have to be on in order to access the library data, which means an extra power demand and a load on the person’s internet usage (we’re also assuming that internet usage is capped, which, despite some companies claiming their data is “unlimited,” is most likely the case). The data usage through a home internet connection is most likely insignificant (especially relative to a theoretical cap of 50-250 GB). The data usage over a mobile connection with a 250 MB cap is significant, however, and listening to a day’s worth of music could eat up all of a person’s monthly data, forcing them to pony up $25.00 for the higher 2 GB plan.
Did you catch that? Let’s look at it again.
The folks who want to use this feature will be streaming data every single time they listen to music. The amount of data that will be used is unclear, but I predict that listening to music for a prolonged period of time (even a few hours a day) will cut deeply into or completely use up a person’s data for the month (again, assuming the cheaper $15.00/month, 250 MB plan). Even on a 2 GB plan, monthly data usage can quickly skyrocket, shooting people dangerously close to the ceiling of their plan. I use about 1.5 GB/month right now with occasional use of my Audiogalaxy service to get at my home library. If I were to switch to a model that used data every single time I played a song, I’d find myself breaking that 2 GB barrier on a monthly basis, which would cost me more money.
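A rough sketch of that arithmetic, with my assumptions labeled: the 4 MB average song size and 40 songs per day are my guesses; the 30% local / 70% streamed split and the 250 MB and 2 GB caps come from the scenario above.

```python
# Back-of-envelope: how fast partial-streaming playback eats a data cap.
avg_song_mb = 4.0         # ASSUMPTION: average song file size
streamed_fraction = 0.70  # remainder streamed if ~30% is synced locally
songs_per_day = 40        # ASSUMPTION: a few hours of listening per day

mb_per_day = avg_song_mb * streamed_fraction * songs_per_day  # 112 MB/day
mb_per_month = mb_per_day * 30                                # 3,360 MB/month

for cap_mb, label in [(250, "250 MB plan"), (2048, "2 GB plan")]:
    print(f"{label}: music alone exhausts the cap in {cap_mb / mb_per_day:.1f} days")
```

Even generous tweaks to those guesses leave the 250 MB plan exhausted within the first week of the month, and the 2 GB plan well before the end of it.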
AT&T and Verizon made a long-term move here, and we’re staring it in the face right now. Back when AT&T first introduced tiered data pricing, I could see the act as predatory. More and more services are being pushed online, to the cloud, and so forth. What AT&T did was squeeze the pipes before the water started flowing. Netflix is growing in popularity and capability, and their long-term dominance in the mobile media marketplace (I love alliteration!), while not guaranteed, is just shy of that. How are we going to watch movies on our mobile devices if we’re being pinched to do so? How will companies innovate if they know they’re going to be dealing with hamstrung devices? People are going to be paying for subscription services and the bandwidth it takes to use them, a double whammy. The outlook doesn’t look good.
Boy am I glad I got that unlimited Clear iSpot subscription while it was still around.