The State of “Openness”

Benedict Evans once talked about a sort of “openness Tourette’s Syndrome” that occurs whenever people discuss Apple’s platforms vs. competitors. Basically, it goes like this: someone mentions how good an Apple platform is, and then someone else says, “Yeah, but Android is open.”

There’s a pleasant sort of fiction promised by “open” that simply isn’t a viable reality for most people. I’ve heard salespeople use the word in retail stores, and I’ve heard IT professionals use it when recommending Android to their clients. It demonstrates a fundamental misunderstanding of the various things “open” can mean for Android. The “open” that people typically have in mind is really conflated with “extensible”, or perhaps “has relaxed security”, which are very different things from the “open” that Android was conceived with.

Android’s initial form as a project was open-source, and the Android of today is still technically “open-source” but, due to its reliance on Google’s services and cloud features, the current version of Android that comes loaded on many phones is not nearly as “open” as many would have you believe. Would you like to use another mapping service? How about something other than Google Now? Can you use the features of the home screen without being tied to Google services? Sadly, no.

That doesn’t mean that one couldn’t install a build of the operating system from the Android Open Source Project, but it does mean that the marquee features of the operating system, the things that Google and Android fans like to wave in the air, are inherently tied to Google and make it very difficult to use a version of Android that isn’t built around Google’s services.

Instead of this word “open”, then, let’s use the word “extensible”, since that more accurately reflects the Android OS’s ability to facilitate communication between apps, and to allow developers to build software that adds functionality to the OS or preexisting apps.

The problem with Android up to this point has been that security has not been (or at least hasn’t appeared to be) a priority for developers or users. I could try to offer up reasons for this behavior (laziness, “Accept” fatigue), and I may be wrong on this, but from what I’ve seen, Android users are more than willing to download apps and grant them nearly unrestricted access to their mobile devices without really thinking through the ramifications.

Apple has avoided this for many years by sandboxing its apps and keeping inter-app communication on the back burner until it developed a way to allow apps to communicate effectively without sacrificing a user’s privacy or requiring users to grant privileges an app really shouldn’t need. Naturally, this came at a cost. For years, iOS users have not been able to install third-party keyboards or send information between apps in a way that was “easy” (to be fair, the iOS “Open In…” functionality has allowed users to send documents and files between applications for some time, but it required a degree of savviness that was sometimes lacking).

Now that Apple has introduced the ability for developers to create “Extensions”, however, that gap has very quickly been bridged, and iOS 8 will allow developers to create new ways for their apps to interact. Some may argue that Apple’s approach differs from Google’s, but the end-user result is basically the same: a person will be able to install and use third-party keyboards, send information between apps, and interact more directly with the data in other apps.
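To make that concrete, here is a minimal sketch (in modern Swift, with illustrative names of my own) of one of those new extension points: a custom keyboard. This isn’t Apple’s sample code, just an outline of the shape of the API: the keyboard is a sandboxed subclass of UIInputViewController, and its only channel into the host app’s text field is the textDocumentProxy object, which is exactly the privacy boundary described above.

    import UIKit

    // A minimal sketch of a custom keyboard extension (names are illustrative).
    // The keyboard runs sandboxed in its own process; its only link to the
    // host app's text field is the textDocumentProxy object.
    class SketchKeyboardViewController: UIInputViewController {

        private let keyButton = UIButton(type: .system)

        override func viewDidLoad() {
            super.viewDidLoad()

            // A single "key" that types one character, just to show the flow.
            // A real keyboard would lay out a full set of keys and provide a
            // "next keyboard" (globe) button that calls advanceToNextInputMode().
            keyButton.setTitle("A", for: .normal)
            keyButton.addTarget(self, action: #selector(keyTapped), for: .touchUpInside)
            keyButton.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(keyButton)
            NSLayoutConstraint.activate([
                keyButton.centerXAnchor.constraint(equalTo: view.centerXAnchor),
                keyButton.centerYAnchor.constraint(equalTo: view.centerYAnchor)
            ])
        }

        @objc private func keyTapped() {
            // Text flows through the proxy; the extension never sees anything
            // about the host app beyond what the proxy exposes.
            textDocumentProxy.insertText("A")
        }
    }

The other extension points follow a similar pattern: a small, sandboxed piece of UI that is handed only a narrow context object rather than the host app’s data wholesale.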

What I’m interested in seeing, however, is what the conversation will center around now. For many years, Android users have told me that Android has been superior because of its customizability. When I would press these users to provide me with more information about what “customizability” means, they would often cite two things: support for third-party keyboards and home screen widgets.

These two “features” of the operating system, in my opinion, are not very important, and would often open a user’s device up to instability and/or unnecessary resource usage. I have used Android devices, and I have seen the home screen widgets for the apps that I use the most, and there is no version of reality in which the widgets provide a superior experience to using the app. Again, this is my experience, and maybe there are some people who really enjoy looking at two lines of their mail on their home screen underneath a line and a half of their upcoming calendar events, and not really being able to meaningfully interact with either until they open the app anyway.

Third-party keyboard support has also perplexed me, but I can understand the utility for people living outside the United States, for whom third-party keyboards can offer substantially improved text entry. That being said, none of the Android users that I discussed this with lived outside the United States, so it seems that their argument is a moot point, or at least purely subjective.

Thus, it seems to me that the discussion of Android as an “open” system (again, in the way that most people understand the term “open”) has lost much of its value. Android as an “extensible” operating system has also lost much of its value (at least as a marketing ploy) in light of the new functionality of iOS 8. How, then, should we be defining “open”?

When we look out at the post-PC landscape and see two operating systems that allow their users to interact with their data similarly, enter information into their devices similarly, and allow applications built upon their platforms to communicate similarly, how should a person decide which device to use? Perhaps the discussion shouldn’t be centered around questions like “Which device lets me tinker with system files?” or “Which device will allow me to break things if I wanted to?”, but should really be “Which device is better for humans?”


Whither Unified Accounts?

For quite some time, people have been clamoring for unified iTunes Accounts/Apple IDs. It seems like Apple is taking a step in the right direction with its “Family Sharing” feature announced today at WWDC. Let’s hope they keep walking.


We Live In the Future – Garmin Heads-Up Display (HUD)


Heads-Up Display HUD | Garmin.

 
This is the future we were promised, folks. Well…it’s getting there, at least. Check back in a couple years when this is beamed right into our eyeballs with connected contact lenses.


Standing In Line for Loukoumades

Found Poetry


I’m at the Greek Fest at lake cook rd
In line for loukoumades

Well this place is packed with people young and old
Good food dancing
And the famous Mavrodafni wine tasting.
Which I had to taste so now
I think I start singing
That’s all for now
Kind of light headed
I love you and wish you where here



We Live In the Future – Fos

Something that I’m very aware of at all times these days is the idea that, in many ways, we live in the future. Not necessarily the future envisioned by all Sci-Fi writers of the 20th century, but one that combines bits and pieces from many visions and predictions.

As such, I’d like to dedicate one post per week to something I consider absolutely amazing – an example of “the future” that many of us have been given over the years.

Here, then, is the first of such posts.

Fo̱s – A truly wearable, Bluetooth LED display system by Anders Nelson — Kickstarter.


How to Get Pageviews

Clearly, CNN and MSN know how it’s done:

  1. Include the words “Miley”, “Cyrus”, and “twerk” in a headline.
  2. Lean back.
  3. Count your money.

I know it’s the oldest story ever told, but stuff like this always makes me sad.


They’re Just Not Very Good

The rise of mobile technology has flipped the gaming industry on its head. What used to be the accepted method of producing and selling games was called into question by a generation of people who agonize over purchasing a $0.99 app but have no problem spending $4 on a cup of designer coffee. Quite quickly, the “Free to Play” model (also abbreviated as “F2P” or “FTP”, which is, in and of itself, confusing, since FTP has its own meanings) became the accepted model for games, and it has become (in my opinion) a sinister foreshadowing of a bland gaming future in which the experiences people are actually looking for in their games are left out in favor of clever monetization schemes and endless, repetitive gameplay.

To be fair, I don’t think that paying for items in games through “microtransactions” is a bad idea, as long as it’s done right. Prior to the advent of F2P gaming, the entirety of a game’s reach could be contained within its “walls”, so to speak. That is, a game existed in its own space, with its own rules – its own little universe. Players would inhabit that space, entering into it and learning its rules, conventions, and so on. The entirety of the game existed inside itself. Even World of Warcraft, with its subscription model, could fit in this definition, since players paid their subscription fees for an expansive, constantly evolving game world. Once the dues were paid, the door was open, and the only limit to what a player could achieve was, effectively, time. Given enough time, a player could ostensibly find whatever items he or she desired within the game world.

Then Nexon, an online gaming company, came along with a title called “Maple Story”, which was an adorable little side-scrolling RPG that could be digested in bite-sized chunks or marathon gaming sessions. Maple Story was different from other MMORPGs since it was free, and offered players the ability to pay for cosmetic changes to their characters. Players would “rent” costumes for their in-game characters, which would then get layered over their equipment, so that a player could make his or her in-game character look however he or she wanted. People did this all the time, and it was a way for players to feel more connected to their characters, since they could customize every aspect of their character’s look. Additionally, since the costumes were rentals, Nexon would introduce new looks all the time, often coinciding with events in popular culture.

As these Massively-Multiplayer Online Games (MMOGs) gained in popularity, particularly successful players would end up selling their items or entire accounts on eBay or other community-based sale sites. People were paying real money for virtual goods, and the game companies started paying attention. They wanted a cut of these sweet, sweet dollars that were getting sloshed around, and, thus, the F2P model was born.

See, someone realized that if players were willing to pay for virtual goods using real currency, then the game companies, like Apple, should simply make that process easier. Why go to eBay and engage in a potentially shady and illegal transaction when you can just as easily drop $5 on a pack of gems and get some epic loot? Instant gratification, right? Except that it’s not that easy. I recently read a piece that neither condemned nor endorsed the idea of microtransactions in F2P games; the author suggested that these “microtransactions” are a way for people to pay what they feel the game is worth. There are several problems with that notion:

  1. First and foremost, the game is no longer the product. Players are the product. The game is not designed to be “good”, or to have a cohesive universe, or to tell a story. The game is designed to get players to spend money through carefully-crafted game mechanics that exploit psychological states.
  2. The price of a game is potentially infinite. Purchasing power-ups or items that perform a task or help a player’s character doesn’t add value to the game, it simply allows the player to play the game. The in-app purchases don’t make the game better, they simply make the game whole. Constantly having to pay to play a game makes the cost of the game theoretically infinite.
  3. Ultimately, these games are simply not very good. While one could make the argument that paying $20/month for a World of Warcraft subscription is far more than most players will ever spend on an F2P game, the difference is that World of Warcraft was a well-crafted, intentional game with lots of care put into its design and game mechanics. Most F2P games have little to no recognizable story, little character development, and shallow mechanics.¹ This renders the argument that “a person will spend what they feel the game is worth” moot, since, once again, many of these games have artificial difficulty multipliers or other mechanics that break from established gameplay conventions and make the game artificially difficult or impossible to complete. Thus, it doesn’t matter how much the player feels the game is worth, since they cannot assess the value of the entire game; the entire game is not exposed to the player unless he or she pays an unknown amount of money.
  4. Time is not money. While some developers maintain the illusion that a player could ostensibly spend a large amount of time accruing whatever in-game currency allows the player to progress to the next phase of gameplay, 40 hours of gameplay in a highly regarded, well-crafted game world is not the same as 40 hours of gameplay in an F2P title. The former is characterized by new experiences, successes as well as failures, and, in applicable cases, the revelation of more of a (hopefully) well-crafted story. The latter is typically characterized by repetitive gameplay and monotonous “grinding” in order to accrue the necessary capital to advance. A player should not be subjected to a below-average gameplay experience simply because he or she does not want to pay real currency to advance.

If we project out along this trajectory, we can see that the game worlds that developers are creating are becoming increasingly devoid of meaning. Why should a player care about any one specific game world? All they have to do is drop $100 on a pack of some form of in-game currency to acquire an item, which they can use to defeat a difficult enemy or progress past a difficult puzzle. The actual “game” becomes meaningless because the “win” state becomes increasingly defined by how much money a person has in his or her bank account. Furthermore, the game worlds that developers and designers create become less about art and vision, and more about simply driving players to microtransactions. Character archetypes become shallower, player statistics are tracked so that the game can adjust difficulty dynamically in order to create the aforementioned artificial difficulty spikes, and game challenges no longer represent tests of mental acuity, reasoning, or reflexes, but rather a combination of advertising and outright paying to win.

The future looks bleak for games.


¹ While comparisons have been drawn to the video arcades of the ’80s and ’90s that were fueled by quarters, this idea ultimately falls flat because those machines could still be purchased and played in their entirety, from start to finish, if a person chose to do so. Additionally, if a person wanted to play a particular video game, the arcade was often the only place where it could be played; the arcade owner was effectively “renting” his or her machine to the player. When a player purchases a game at any price (including $0), they should be able to play the entire game.


Justified

Back when the iPad mini debuted, many people criticized the device’s $329 price tag (for the base model), saying that it was too expensive compared to other tablets of similar size that were on the market. I thought the same thing, until I pulled back a bit and looked at the mini from a longer-term perspective.

See, most companies can get along fairly decently creating products for the here and now, adjusting their products, pricing, and features according to market whims. It’s a way of interfacing with the market that has always seemed reactionary to me: look at what the public is salivating over and give it to them, riding the wave until the notoriously fickle populace decides it wants something else. Alternatively, you can just create a smattering of different products with arbitrary and marginal variations, designed to cater to fractionally different subsets of popular culture, and hope that people gravitate toward one product or another, or at least make enough money to offset the cost of developing, manufacturing, and marketing who knows how many different iterations of a given product.

For Apple, however, the view is longer. The timeline extends 5–10 years out, and is driven internally by the desire to deliver really incredible products into people’s hands. That places the locus of control squarely inside the company, instead of vesting that power in the whims of a population that worships reality TV and Hollywood drama. As such, Apple looks at supply chains and forecasts its production costs much further ahead than most companies do, and is thus able to deliver better products over time than its competitors because it has had the wherewithal to cultivate and maintain a stable, consistent base (referring simultaneously to supply, production, and consumption). Thus, while the $329 price point may not have made sense for the iPad mini given the original permutation of components, an iPad mini with a Retina screen, which will undoubtedly cost more to manufacture, can still provide Apple with healthy margins, since Apple has already accounted for the decrease in the price of Retina displays over time and has been able to invest in battery research to drive the new displays. That, in concert with the other inevitable improvements Apple has made to the hardware and software of its new iOS devices, will allow it to manufacture a better product while still maintaining margins and trying to keep investors happy (a notoriously difficult thing to do). From one angle, it’s very difficult to see the justification for that price point. But, given time, it becomes clear that Apple never priced the iPad mini for the market it was introduced into; it was looking many years down the line.

If that’s what they can do for pricing when looking forward, imagine what they’ll be doing for products.


A Wolf In Sheep’s Clothing

There has been a long-running narrative in tech writing about the downfall of, or the necessity to bring about the downfall of, cable companies. Recently, I was discussing the merits and drawbacks of cable with a friend of mine, and we arrived at an interesting place. We agreed that cable is expensive, yes; we also agreed that people who subscribe to the services that companies like Comcast provide walk a thin line between freedom (otherwise known as net neutrality) and tight, restrictive control; but we ended up agreeing that, based on what we’re getting from the “evil empire” of Comcast, it’s not a bad deal.

The discussion started with HBO Go, a service HBO provides its customers that allows them to stream HBO programming on demand to a large number of connected devices. It’s a great way for members of the same household to watch their favorite shows and movies without having to fight over the TV. The downside is that HBO Go is not sold as a standalone service; it comes as part of an HBO subscription. While that seems ridiculous, someone recently asked me how much I’d be willing to pay for HBO service if it meant I could watch the shows anywhere. Considering what I was paying for Netflix and Hulu, I said I’d probably be OK with paying $8–10 a month. When I got home, I checked the price of HBO, and, to my surprise, it was only $10 a month (I had assumed it would be much higher, since it’s a premium cable channel, and cable is controlled by “evil” media companies). It was then that I considered what I was getting for my monthly offering at the Altar of Comcast.

For my monthly fee, I get a whole heaping pantload of data, combined with a paltry offering of standard-definition TV channels. The most common narrative I hear is that people feel like they’re bullied into paying for channels they don’t need in order to get the ones they want. “Just let me pay for the channels I want!” everyone seems to say. And I agree with them! Why should someone be charged for something they don’t use and didn’t really ask for? After all, when you walk into a store and pick out the items you want, the store clerk doesn’t start shoveling unwanted merchandise into your basket. The same should go for programming and media, right?

Well, we’re actually a lot closer than we think, in my opinion. In addition to the base fee for my cable, I pay $8 per month each for Netflix and Hulu, which allows me to access a great deal of TV programming and movies for under $20/month. I can add HBO to my cable package and get HBO Go for “free” on my iPad and Apple TV, which accesses the content through…the same cable connection that I’m already paying for. So it’s…not really free, is it? In fact, it’s almost like I’m being charged double…and yes, that’s frustrating. But the same parallel can be drawn with merchandise. Sometimes you don’t want the “free” stuff that gets bundled with your popcorn, or the tchotchke that is shrink-wrapped to your deodorant. In this case, I think of the stuff I don’t use as “bonus” features.

The structure I painted above makes sense when you look at it vis-à-vis the type of relationship most people have with their cell phone carriers. Really, it’s very much the same – a person pays a monthly fee to use that carrier’s network and pipe data to their device, with the amount of data allowed each month varying from user to user. Any other dues and subscriptions that the person pays are completely separate from the carrier. To carry the analogy even further, look at what’s happened to the precious airtime minutes and text messaging packages – they’ve all been ditched so that carriers can charge users for what they’re really hungry for: data. The airtime minutes and texting are almost a throw-in now.

The example of the carriers, however, should also serve as a cautionary tale. While most people are used to the fee structure now, there was a time when tiered data plans were met with hatred and anger because people realized they were being taken advantage of. Then, when the mighty marketing machines that drive these carriers went to work convincing people that they would save some money every month if they forfeited their right to an unlimited amount of data, people caved left and right…and they did this just as streaming media was entering the spotlight. It was the perfect one-two blow to the American consumer.

Now, the fear is that terrestrial data providers will repeat the same behavior that their wireless brethren got away with: that they’ll introduce tiered data plans, bandwidth caps, and heavy throttling to ensure that they can squeeze every last nickel out of the American populace. This all assumes, of course, that net neutrality laws remain in place. While net neutrality laws are supposed to protect the Internet and the free flow of information around the globe, I have a feeling that terrestrial data providers will use net neutrality against consumers, making the argument that they can provide open pipes so long as consumers pay for the privilege.

At any rate, the current situation of à la carte channels and pricing is almost kinda not-really-but-sorta here…if you know where to look.


Many Links, One Chain

Mobile is the future. No one doubts that, and those that do are clearly riding their tiny rafts toward the inevitable plummet off the edge of the waterfall.

What is interesting is how these different mobile OS choices are defined (e.g. available apps, number of users, types of users, user engagement, developers, just to name a few), and what those definitions mean for the larger mobile landscape.

Many people argue for the benefits of iOS over Android, and vice versa, and I think the choice that most people make to go with one operating system or another isn’t driven by some core ethos or belief about how a mobile operating system should behave; it’s driven by far simpler forces – popular culture, how much money is in their wallet, and what feels right.

When it comes to the tablet space, iOS is the clear winner, having scooped up both the lion’s share of the market and the lead in customer satisfaction. I find it somewhat painful to watch owners of most other tablet devices struggle with basic functionality; I’m left with the feeling that someone, somewhere has done them a disservice by recommending something that did not fit their needs and pushing some other device into their hands instead.

Where things start to blur, however, is when people start looking to devices outside of the mobile device market, things like connected TVs, appliances, and other gadgets. A person who isn’t fond of Apple can, ostensibly, purchase a Roku box for streaming content to their TV, but how well does that really integrate with a person’s home theater setup if they have iOS devices? How about Android? What about Linux? The trick here is that there are some devices that work well together, and some that don’t. Did you happen to buy one of those early Google TVs? How’s that working out for you? Sorry there aren’t more of them out there, turns out people didn’t like them very much. Sad.

The Apple TV is an iOS device, however, and I think it fills a key role in Apple’s connected living room idea. I’ve talked about this in past posts, as well, but something that many people don’t take into account is the fact that the Apple TV runs iOS, but in a form that isn’t immediately recognizable to most people.

Apple has created a chain of interconnected devices which, on their own, may seem unremarkable. Start linking them together, however, and they become far stronger and more capable than they were on their own.

I’ll end with a little story. I spent a half hour on the phone with a man recently, trying to help him compose and reply to an email on his new Android phone. I felt sorry for him. He had never owned a smartphone before, and was having a very difficult time using the device. For whatever reason, data was not enabled on his phone, and he had to find the setting to turn it on before he could actually send the email. He was very frustrated, and it was clear that he wasn’t feeling confident. He was told that this device was very “user friendly” and that it “just worked”, but his experience demonstrated otherwise. That same night, I had some friends over, many of whom are involved in some sort of music production or performance, or who simply have great taste in music. They were sharing their favorite tracks and videos on my TV, all from their phones, all without having to fiddle with a remote or web browser. They were laughing and talking, all able to discuss and converse without needing to configure anything. They just tapped the AirPlay button and sent the media to the Apple TV. Zero configuration, zero setup.

From where I was sitting, it looked like magic.

