On the Predicted Death of the PC, the Game Console, and Scripting Languages
In 2011 I have heard the death of the PC, of gaming consoles, and of scripting languages predicted so often that the claim hardly seems shocking anymore. The cottage industry of making bold predictions has gotten so big that in order to make any real waves you need to predict increasingly outrageous outcomes and just hope you strike it lucky often enough that no one notices all the times you get it wrong. But the three predictions above strike a little too close to home for me not to take notice. The death of the PC, the gaming console, and scripting languages would essentially mean the death of my knowledge of and interest in technology.
I admit, for all my technical knowledge, among other developers I’m a bit of a Luddite. I’ve never owned a game console not made by Nintendo, I run Linux on all of my computers, and my first scripting language was Perl, which I only reluctantly gave up for Ruby a couple of years ago. I’ve never been drawn to buy the latest PlayStation, to keep up with the latest desktop PC, or to chase the hawt new computer language that lets you run your software on 32 cores with only four lines of code. I think the reason is that my attraction to these technologies was never their newness or their cool factor; it was that they were good solutions to problems I wanted to solve. Those problems still exist today, and I argue that while there may be new technologies out there that solve other problems better, they don’t solve the original problems in a way that poses a serious challenge.
Gaming: A Search for Experience
The easiest one to debunk is the coming death of gaming consoles. In a recent interview Richard Garriott, the man behind the Ultima franchise, is reported to have said the following when asked about gaming consoles:
I think we might get one more generation, might, but I think fundamentally they’re doomed. I think fundamentally the power that you can carry with you in a portable is really swamping what we’ve thought of as a console.
And, of course, he’s right about the power of my mobile device. It’s clearly got more power than my old Nintendo or SNES ever had, and I imagine it’s probably more powerful than my current (but aging) generation console, the Wii. Where I don’t agree is with the claim that it’s swamping anything. My phone’s great and all, but it’s never going to replace a game console for me. What makes game consoles great is that they leverage my existing investments. Hanging on my wall right now is a big TV, surrounded by comfy couches, and hooked into a 5.1 surround sound system. Today I can go and buy a Wii, a PS3, and an XBOX 360 and all will hook neatly into my existing infrastructure. What’s my phone got that will ever compete with that? Controls that are difficult to use because they have to exist on a flat surface? A screen that is tiny by design? Battery life that can be reasonably measured in minutes when under active load? A network connection that is a fraction of my home connection?
Which isn’t to say that the mobile device won’t be a successful gaming platform! Far from it. Publishers that crack the nut of good mobile gaming are going to make bucket-loads of money tapping into a market that has been historically averse to video games. More power to ‘em, I say! But they are trying to solve a new problem: how to attract new, non-gaming customers. But see, that’s a problem that game publishers have… it is not a problem I have, nor the millions of other gamers who grew the gaming industry into a billion-dollar juggernaut. Our problem remains the same: how do I play immersive games that best leverage the technology I already own? I don’t want my new 40″ TV to sit idle while I squint at my phone, and I don’t think anyone playing games for the past 20 years wants to either. It’s silly to let such a resource lie fallow. Game consoles solve that problem and do so incredibly well, while giving me customized controllers, fast internet, and a wired power source. If someday the mobile device is created that lets me do all those things, I would suggest that game consoles won’t have been killed by mobile devices, so much as mobile devices became game consoles.
Personal Computers: “It’s not in the box, it’s in the band”
Thirty points if you identified the above quote as coming from Antitrust, a highly decent film about an evil software company that is literally killing its young competitors. There’s a moment in the film where the Bill Gates-esque figure (played by Tim Robbins) turns to the young protagonist (played by Ryan Phillippe) and mutters the above line. This brilliant phrase allows them to overcome some roadblock with the development of Nurv, their amazing new content delivery system that will revolutionize the world. Ahead of its time, Antitrust seems to be making a cloud computing argument way back in 2001… that the solution to their problem wasn’t with the devices, but the network that connected those devices.
Sadly, in today’s world of technology predictions, the ability to differentiate between those two concepts seems rarer and rarer. On more than one occasion I’ve heard it declared that the age of the Personal Computer is over. As evidence, the prognosticator points to Cloud Computing as PC computing’s angel of death. I can only assume these people just don’t understand what Cloud Computing even means when they say such things.
Let’s start with the stipulation that Cloud Computing is a poorly defined term, and it can mean all things to all people. But even given that, it still has a reasonably well-accepted definition for 2011… it is the process of moving computing resources from specific nodes on the resource graph to arbitrary nodes.
Perhaps an example will help. Consider a media server that stores gigabytes of media files which can be accessed by various client computers. In this example, the resource is storage, because we are talking about the data that comprises these media files. But contrary to what you might have heard, a media server isn’t cloud computing! The relationship I described has been around since we had networks… it’s the standard server/client model where there are discrete resources on specific nodes. What would make this cloud computing is if instead of a single server, there were lots of servers all working together to store the music, but that the client was unaware of this arrangement. The client connects to the “cloud” and just gets the media files like it always has, while the servers work to move the files around ensuring prompt delivery. Now we’ve come to the cloud.
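The distinction is simple enough to sketch in code. The class names below are my own invention, not any real library; the point is only that the client’s call looks identical whether it talks to one server or to a cloud of them:

```javascript
// Classic client/server: the client knows exactly which node holds the data.
class MediaServer {
  constructor(files) { this.files = files; }
  get(name) { return this.files[name] || null; }
}

// "Cloud": several servers cooperate behind one facade; the client
// connects to the facade and never learns which node actually answered.
class MediaCloud {
  constructor(servers) { this.servers = servers; }
  get(name) {
    for (const server of this.servers) {
      const file = server.get(name);
      if (file !== null) return file;
    }
    return null;
  }
}

const cloud = new MediaCloud([
  new MediaServer({ 'a.mp3': 'data-a' }),
  new MediaServer({ 'b.mp3': 'data-b' }),
]);

// From the client's point of view this is indistinguishable from a
// single server — which is exactly the point.
console.log(cloud.get('b.mp3')); // 'data-b'
```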
Something should jump out at you about this example… the client didn’t change! And the client in this case is the supposedly dying PC! You see, we still need a specific client to connect to our clouds. Whether we are talking about huge CPU clusters or massive petabyte data arrays, we need clients to make use of these non-fixed resources. To give another example, I often hear it claimed that Gmail revolutionized email. And yes, it gave us a radically new user interface, but it didn’t change anything when it came to the storage and delivery of email. The idea of email residing on a server that is universally available has been around since the invention of IMAP in 1986.
Perhaps the doomsayers of the PC era aren’t talking about the cloud so much as they are talking about the different nature of devices that will be able to connect to the cloud. Today I can access my email from my phone, my tablet, and no doubt, someday, my refrigerator. Which is great; I’m all for more options when it comes to data consumption. But the assumption that these devices are going to kill the PC makes the same mistake as the gaming console prediction. If you don’t start by looking at the problem PCs were invented to address, you’ll never understand what it will take to replace the PC.
I propose the PC was invented to give us a way to use computer resources in a productive manner. It follows that to end the PC era, we need to invent a device (or set of devices) that is more productive than the PC. Of course, this doesn’t answer the question of what is productive. That’s going to be a personal evaluation, but I think we can all agree that communicating is a productive task. So, let’s take communicating and evaluate it in the context of the PC and the smartphone.
The smartphone has some incredible advantages, not least of which is that it’s mobile. But in addition to a mobile network connection, smartphones are phones, and thus are ready for voice communication right out of the box. They are also increasingly equipped with video cameras, opening the possibility for video communication. So, it’s got mobility, it’s got voice, and it’s got video. All great stuff… but I’m not the least bit worried about it replacing the PC as the primary form of communication. Contrary to what Star Trek may have led us to believe, video and audio communication pose few threats to text-based forms of communication. The problem with audio and video is that they’re single-band. For demonstration, I encourage you and your significant other to break out your smartphones in the same room and start video chatting, simultaneously, with your respective families. Let me know how it turns out. Now, try writing an email to your parents at the same time… easier, right? You can both do it at the same time, because text doesn’t collide with others engaging in the same activity.
Sure, smartphones and tablets do text, but nothing beats a QWERTY keyboard when it comes to text input. The PC form factor provides an unrivaled means of creating content. Whether it’s writing text, editing a spreadsheet, or cutting up a Photoshop document, the mouse/keyboard/monitor combination is the gold standard. Sure, you can hook up a tablet to those same input devices, but at what point have you just built a PC in a crazy slim form factor?
In the end, these new devices provide exceptional mobility, and that’s a powerful answer to the problem: how can I be productive when I’m mobile? But when put to the question, how can I maximize the productivity of my computing resources, they’re gonna fall short. But to those who insist they can ditch their PCs entirely… I say, bring it on! I’m all too happy to have the advantage.
Scripting Languages: Putting Text on a Screen
But what about NodeJS suggests the death of scripting languages? For that matter, what about scripting languages suggested the death of static documents? Like before, it all goes back to the nature of the problem needing to be solved. With static documents, we were trying to get information out to the client as quickly and as reliably as possible. Static HTML answered that problem brilliantly (more so than I think anyone expected at the time). With dynamic pages, we had two objectives: (1) allow a page to be built from the combination of smaller, discrete parts; and (2) allow content to be customized based on the user’s input or environment. But here’s the rub… the best web applications still work with static HTML as much as possible. Caching pre-rendered objects, or whole pages, is essential to any application that is going to deal with significant load. So it’s not that scripting killed static pages; it’s that it enhanced what we could do with them. But a developer who doesn’t learn the basics of serving static content does so at his or her own peril.
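That caching idea fits in a few lines. This is an illustrative sketch, not any framework’s API: render the page dynamically once, then serve the pre-rendered static HTML for every later request.

```javascript
// Minimal page-cache sketch: the scripting layer runs once per page,
// and static HTML does the rest of the work.
const cache = new Map();
let renderCount = 0;

// Stand-in for an expensive dynamic render (templates, database, etc.).
function renderPage(path) {
  renderCount++;
  return `<html><body>Content for ${path}</body></html>`;
}

function servePage(path) {
  if (!cache.has(path)) {
    cache.set(path, renderPage(path)); // dynamic work happens here, once
  }
  return cache.get(path);              // every later hit is effectively static
}

servePage('/about');
servePage('/about');
servePage('/about');
console.log(renderCount); // 1
```

Three requests, one render — which is why serving static content well still matters no matter how dynamic the application is.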
So, does NodeJS allow us to solve either of the problems that static or dynamic documents solve? I suppose you can use NodeJS as a dynamic document generator. But that’s not really its strong suit as I understand it. Its power is the ability to handle multiple concurrent users and allow them to exchange information. Which, like I said, is awesome. But it doesn’t mean we won’t need to serve static pages in the future.
In part, I realize, I’ve fallen victim to the prognosticators’ bombastic claims. To get press, they seem to need to say more and more outrageous things. No one bothers to print the headline that says “Technology gets incrementally better, most things remain the same.” I worry that too many developers get educated in a world of over-hyped expectations and a thirst for the bleeding edge. This obsession strikes me as dangerous. The point of technology, from a societal standpoint, is to help us solve problems. Being shiny isn’t a good enough reason to discard what came before. If all of the current developers are dancing on the graves of old technology, how will we effectively evaluate it against the latest and greatest?
The next time you read an article that says HTML5 will end the way we think of webpages or that your smartphone is going to replace your tax accountant, stop and ask yourself: what problem is being solved here, and does this new technology solve it in a way that totally displaces the previous solution, or does it just solve a particular part of the problem in a new way? My personal experience is that, in most cases, technology is rarely revolutionary… and when it is revolutionary, no one sees it coming until it’s already here.