The State of “Openness”

Benedict Evans once talked about a sort of “openness Tourette’s Syndrome” that occurs whenever people discuss Apple’s platforms vs. competitors. Basically, it goes like this: someone mentions how good an Apple platform is, and then someone else says, “Yeah, but Android is open.”

There’s a pleasant sort of fiction promised by “open” that simply isn’t a viable reality for most people. I’ve heard salespeople use the word in retail stores, and I’ve heard IT professionals use it when recommending Android to their clients. Both uses demonstrate a fundamental misunderstanding of what “open” actually means in Android’s case. The “open” that people typically have in mind is conflated with “extensible” or perhaps “has relaxed security”, which are very different things from the “open” that Android was conceived with.

Android’s initial form as a project was open-source, and the Android of today is still technically “open-source” but, due to its reliance on Google’s services and cloud features, the current version of Android that comes loaded on many phones is not nearly as “open” as many would have you believe. Would you like to use another mapping service? How about something other than Google Now? Can you use the features of the home screen without being tied to Google services? Sadly, no.

That doesn’t mean that one couldn’t install a build of the Android Open Source Project instead, but it does mean that the marquee features of the operating system, the things that Google and Android fans like to wave in the air, are inherently tied to Google and make it very difficult to live with a version of Android that isn’t.

Instead of this word “open”, then, let’s use the word “extensible”, since that more accurately reflects the Android OS’s ability to facilitate communication between apps, and to allow developers to build software that adds functionality to the OS or preexisting apps.

The problem with Android up to this point has been that security has not been (or at least hasn’t appeared to be) a priority for developers or users. I could offer up my guesses as to why (laziness, “Accept” fatigue), and I may be wrong, but from what I’ve seen, Android users are more than willing to download apps and grant them nearly unrestricted access to their mobile devices without really thinking through the ramifications.

Apple has avoided this for years by sandboxing its apps and keeping inter-app communication on the back burner until it developed a way for apps to communicate effectively without sacrificing a user’s privacy or requiring the user to grant unnecessary privileges to an app that shouldn’t need them. Naturally, this came at a cost. For years, iOS users have not been able to install third-party keyboards or send information between apps in a way that was “easy” (to be fair, the iOS “Open In…” functionality has allowed users to send documents and files between applications for some time, but it required a degree of savviness that was sometimes lacking).
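
For the curious, here’s roughly what that “Open In…” dance looks like from the developer’s side. This is only a minimal sketch using UIKit’s UIDocumentInteractionController; the view controller name and the file URL parameter are placeholders of my own, not anything from Apple’s sample code.

    import UIKit

    // A minimal sketch of the "Open In…" flow. DocumentListViewController and
    // the file URL are hypothetical; the relevant piece is
    // UIDocumentInteractionController, which hands a copy of the file to
    // whichever app the user picks from the system menu.
    class DocumentListViewController: UIViewController {
        // Keep a strong reference while the menu is on screen.
        private var docController: UIDocumentInteractionController?

        func openElsewhere(fileURL: URL, from rect: CGRect) {
            let controller = UIDocumentInteractionController(url: fileURL)
            docController = controller
            // Presents the system sheet listing apps registered for this file type.
            if !controller.presentOpenInMenu(from: rect, in: view, animated: true) {
                // No installed app can handle this file type.
                docController = nil
            }
        }
    }

Note that the receiving app gets its own copy of the file; nothing else in the sending app’s sandbox is exposed, which is exactly the trade-off described above.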

Now that Apple has introduced the ability for developers to create “Extensions”, however, that gap has very quickly been bridged, and iOS 8 will allow developers to create new ways for their apps to interact. Some may argue that Apple’s approach differs from Google’s, but the end-user result is basically the same: a person will be able to install and use third-party keyboards, send information between apps, and interact more directly with the data in other apps.
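
To give a sense of how small that gap now is, here’s a rough sketch of what a custom keyboard extension looks like under the new model. The class name and the single “hello” key are placeholders of my own; the real requirement is just a UIInputViewController subclass, and the only channel back to the host app is the text document proxy.

    import UIKit

    // Sketch of a custom keyboard extension. KeyboardViewController is a
    // placeholder name; the extension's principal class simply needs to
    // subclass UIInputViewController.
    class KeyboardViewController: UIInputViewController {

        override func viewDidLoad() {
            super.viewDidLoad()

            // One illustrative key that types a word into the host app.
            let key = UIButton(type: .system)
            key.setTitle("hello", for: .normal)
            key.frame = CGRect(x: 20, y: 20, width: 80, height: 40)
            key.addTarget(self, action: #selector(didTapKey), for: .touchUpInside)
            view.addSubview(key)
        }

        @objc private func didTapKey() {
            // The text document proxy is the keyboard's only window into the
            // host app's text field, which is how the sandbox stays intact.
            textDocumentProxy.insertText("hello ")
        }
    }

The point isn’t the code itself; it’s that the keyboard never gets to rummage through the host app’s data. It only sees what the proxy hands it, which is how Apple squares extensibility with the privacy stance described above.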

What I’m interested in seeing now, however, is where the conversation will go from here. For many years, Android users have told me that Android is superior because of its customizability. When I would press these users for more detail about what “customizability” means, they would most often cite two things: support for third-party keyboards and home screen widgets.

These two “features” of the operating system, in my opinion, are not very important, and they often open a user’s device up to instability and/or unnecessary resource usage. I have used Android devices, and I have seen the home screen widgets for the apps that I use the most, and there is no version of reality in which those widgets provide a superior experience to using the app itself. Again, this is my experience, and maybe there are some people who really enjoy looking at two lines of their mail on their home screen underneath a line and a half of their upcoming calendar events, without really being able to meaningfully interact with either until they open the app anyway.

Third party keyboard support has also perplexed me, but I can understand the utility for people living outside the United States, for whom third party keyboards can offer substantially improved text entry. That being said, none of the Android users that I discussed this with lived outside the United States, so it seems that their argument is a moot point, or at least purely subjective.

Thus, it seems to me that the discussion of Android as an “open” system (again, in the way that most people understand the term “open”) has lost much of its value. Android as an “extensible” operating system has also lost much of its value, as well (at least as a marketing ploy) in light of the new functionality of iOS 8. How, then, should we be defining “open”?

When we look out at the post-PC landscape and see two operating systems that allow their users to interact with their data similarly, enter information into their devices similarly, and let applications built on their platforms communicate similarly, how should a person decide which device to use? Perhaps the discussion shouldn’t center around questions like “Which device lets me tinker with system files?” or “Which device will let me break things, inadvertently or otherwise?”, but should really be “Which device is better for humans?”


Whither Unified Accounts?

For quite some time, people have been clamoring for unified iTunes Accounts/Apple IDs. It seems like Apple is taking a step in the right direction with its “Family Sharing” feature announced today at WWDC. Let’s hope they keep walking.


We Live In the Future – Garmin Heads-Up Display (HUD)


Heads-Up Display HUD | Garmin.

 
This is the future we were promised, folks. Well…it’s getting there, at least. Check back in a couple years when this is beamed right into our eyeballs with connected contact lenses.


Standing In Line for Loukoumades

Found Poetry


I’m at the Greek Fest at lake cook rd
In line for loukoumades

Well this place is packed with people young and old
Good food dancing
And the famous Mavrodafni wine tasting.
Which I had to taste so now
I think I start singing
That’s all for now
Kind of light headed
I love you and wish you where here



We Live In the Future – Fos

Something that I’m very aware of at all times these days is the idea that, in many ways, we live in the future. Not necessarily the future envisioned by all Sci-Fi writers of the 20th century, but one that combines bits and pieces from many visions and predictions.

As such, I’d like to dedicate one post per week to something I consider absolutely amazing – an example of “the future” that many of us have been promised over the years.

Here, then, is the first of such posts.

Fo̱s – A truly wearable, Bluetooth LED display system by Anders Nelson — Kickstarter.


How to Get Pageviews

Clearly, CNN and MSN know how it’s done:

  1. Include the words “Miley”, “Cyrus”, and “twerk” in a headline.
  2. Lean back.
  3. Count your money.

I know it’s the oldest story ever told, but stuff like this always makes me sad.


They’re Just Not Very Good

The rise of mobile technology has flipped the gaming industry on its head. What used to be the accepted method of producing and selling games was called into question by a generation of people who agonize over purchasing a $0.99 app but have no problem spending $4 on a cup of designer coffee. Quite quickly, the model of “Free to Play” (also abbreviated as “F2P” or “FTP”, which is, in and of itself, confusing, since “FTP” already has its own meaning) became the accepted model for games. In my opinion, it has become a sinister foreshadowing of a bland gaming future in which the experiences people are actually looking for in their games are left out in favor of clever monetization schemes and endless, repetitive gameplay.

To be fair, I don’t think that paying for items in games through “microtransactions” is a bad idea, as long as it’s done right. Prior to the advent of F2P gaming, the entirety of a game’s reach could be contained within its “walls”, so to speak. That is, a game existed in its own space, with its own rules – its own little universe. Players would inhabit that space, entering into it and learning its rules, conventions, and so on. The entirety of the game existed inside itself. Even World of Warcraft, with its subscription model, fit this definition, since players paid their subscription fees for an expansive, constantly evolving game world. Once the dues were paid, the door was open, and the only limit to what a player could achieve was, effectively, time. Given enough time, a player could ostensibly find whatever items he or she desired within the game world.

Then Nexon, an online gaming company, came along with a title called “Maple Story”, which was an adorable little side-scrolling RPG that could be digested in bite-sized chunks or marathon gaming sessions. Maple Story was different from other MMORPGs since it was free, and offered players the ability to pay for cosmetic changes to their characters. Players would “rent” costumes for their in-game characters, which would then get layered over their equipment, so that a player could make his or her in-game character look however he or she wanted. People did this all the time, and it was a way for players to feel more connected to their characters, since they could customize every aspect of their character’s look. Additionally, since the costumes were rentals, Nexon would introduce new looks all the time, often coinciding with events in popular culture.

As these Massively-Multiplayer Online Games (MMOGs) gained in popularity, particularly successful players would end up selling their items or entire accounts on eBay or other community-based sale sites. People were paying real money for virtual goods, and the game companies started paying attention. They wanted a cut of these sweet, sweet dollars that were getting sloshed around, and, thus, the F2P model was born.

See, someone realized that if players were willing to pay for virtual goods with real currency, then game companies should, like Apple, simply make that process easier. Why go to eBay and engage in a potentially shady and illegal transaction when you can just as easily drop $5 on a pack of gems and get some epic loot? Instant gratification, right? Except that it’s not that easy. I recently read a piece that neither condemned nor endorsed the idea of microtransactions in F2P games, in which the author suggested that microtransactions are a way for people to pay what they feel the game is worth. There are several problems with that notion:

  1. First and foremost, the game is no longer the product. Players are the product. The game is not designed to be “good”, or to have a cohesive universe, or to tell a story. The game is designed to get players to spend money through carefully-crafted game mechanics that exploit psychological states.
  2. The price of a game is potentially infinite. Purchasing power-ups or items that perform a task or help a player’s character doesn’t add value to the game; it simply allows the player to play the game. The in-app purchases don’t make the game better, they simply make the game whole. Constantly having to pay to play a game makes its cost theoretically infinite.
  3. Ultimately, these games are simply not very good. One could argue that paying $20/month for a World of Warcraft subscription is far more than most players will ever spend on an F2P game, but the difference is that World of Warcraft was a well-crafted, intentional game with a great deal of care put into its design and mechanics. Most F2P games have little to no recognizable story, little character development, and shallow mechanics 1. This renders the argument that “a person will spend what they feel the game is worth” moot, since many of these games rely on artificial difficulty multipliers or other tricks that make them prohibitively hard, or impossible, to complete without paying. Thus, it doesn’t matter how much the player feels the game is worth, since they cannot assess the value of the entire game; the entire game is not exposed to the player unless he or she pays an unknown amount of money.
  4. Time is not money. While some developers maintain the illusion that a player could ostensibly spend a large amount of time accruing whatever in-game currency allows the player to progress to the next phase of gameplay, 40 hours of gameplay in a highly regarded, well-crafted game world is not the same as 40 hours of gameplay in an F2P title. The former is characterized by new experiences, successes as well as failures, and (where applicable) the revelation of more of a (hopefully) well-crafted story. The latter is typically characterized by repetitive gameplay and monotonous “grinding” in order to accrue the necessary capital to advance. A player should not be subjected to a below-average gameplay experience simply because he or she does not want to pay real currency to advance.

If we project out along this trajectory, we can see that the game worlds that developers are creating are becoming increasingly devoid of meaning. Why should a player care about any one specific game world? All they have to do is drop $100 on a pack of some form of in-game currency to acquire an item, which they can use to defeat a difficult enemy or progress past a difficult puzzle. The actual “game” becomes meaningless because the “win” state becomes increasingly defined by how much money a person has in his or her bank account. Furthermore, the game worlds that developers and designers create become less about art and vision, and more about simply driving players to microtransactions. Character archetypes become shallower, player statistics are tracked so that the game can adjust difficulty dynamically in order to create the aforementioned artificial difficulty spikes, and game challenges no longer represent tests of mental acuity, reasoning, or reflexes, but rather a combination of advertising and outright paying to win.

The future looks bleak for games.


1 : While comparisons have been drawn to the quarter-fueled video arcades of the 80s and 90s, this idea ultimately falls flat because those machines could still be purchased and played in their entirety from start to finish if a person chose to do so. Additionally, the arcade was often the only place a person could play many of those games; the arcade owner was effectively “renting” his or her machine to the player. When a player purchases a game for any price (including $0), they should be able to play the entire game.

