Pockets of Blue

musings of my mind

Category: Software

Goodbye, Personal Computer

I spent a couple of hours procrastinating today and, as is often the case, started surfing the web. Mainly Hacker News, as it’s pretty much replaced every other news source in my life. (Crowd-sourced, technology-centric news with brilliant, engaging discourse on each item? Yes please!) Today brought me to an article by Max Klein, originally about his wondering what to do with the $24k he made in a single month on the iPhone App Store, then to an interesting suggestion to Apple about their flagship iPhone. Basically, add a mini-DVI (or mini-DisplayPort, whatever) output so you can view your phone on your giant 24″ computer monitor. Applications would have two interfaces: one for the phone, and another scaled up for an external monitor.

While thinking about a world where this was possible, my imagination took over:

  1. Having two interfaces is inherently awkward, especially when one is on a touch-screen and the other controlled with a mouse and keyboard.  I learned this the hard way while trying to “fix” a simple website to work properly on iOS. A shared, alternative input method (voice recognition?  Neuron readings? I digress) would help.  Or why not just have cheaper, largish (by today’s standards) multi-touch-capable monitors?
  2. What about a pocket-sized projector that plugs into your phone?  Then, you have the processor (the “brain”) and a large display, both of which are completely portable.  Perhaps we’ll have white 4×4 squares painted every 100 yards on every building, wall and home to facilitate projection.  Applications these days are increasingly “cloud-based,” meaning they live on servers rather than on the devices themselves, so as long as there’s connectivity, the functionality will be (already is) there.  The only thing missing is the transformative input device, as keyboards and mice obviously won’t work (and when you think about it, are laughably outdated).
  3. It would pretty much make the personal computer obsolete. I’m not saying nobody would sell PCs anymore, but for most users, those who just use the web and email, there’s really no point in owning an electronic device other than a “smartphone” (which, in this case, would desperately need a new name). We’re already seeing this with the iPad. Current laptops may be the hardest hit, since the only real use for a separate device is “work” (photo/film/media creation, design, programming, etc.), and for that you might as well have some beefy hardware (think towers) for power and speed. With further hardware advancements, even phones as we know them will be capable of performing “real work.”

Realistically, personal computers and laptops will still have their place for a few years to come, but their market share will dwindle drastically as smartphones become ubiquitous. This will be especially true in poorer countries (which have effectively already skipped the personal computer era). It seems that the next big technology disruptions will be enabled by two things: improved batteries and an intuitive, portable input device.

The Great Democratization

It’s been creeping up on us for a while. You may not have even noticed it. It’s been eleven years since Napster changed the rules, forever. Nine since the most comprehensive encyclopedia the world has ever known changed the nature of research. You don’t even have to leave the house anymore to find out anything about anything.

The Information Age, it’s been called. It took a while for us to realize the power inherent in it. But I believe it’s enabling a sea change in transparency, accountability and communication. Sites like Yelp, GlassDoor, and countless others are giving us a voice. A really, really influential voice. For better or worse, many stores and restaurants live and die by their Yelp reviews, especially in tech-loving areas like San Francisco. Naturally, though, the best rise to the top, as in any efficient market. Sure, there are ways to game the system, but when aren’t there? (Just look at our political system. Yikes…) Offer a valued service, keep your customers happy, and you’ll thrive. And which is a more valuable influence on your day-to-day decisions: a radio ad, or two hundred positive reviews from people just like you?

It’s gotten really easy to build software that connects people. All you need is some (OK, a lot of) programming expertise, a laptop, and a fast internet connection, and you’re off to the races. We’re going to have more and more ways to express ourselves and our opinions, and I think this is a very good thing for our society. May the good guys win!

A Programmer’s Role

As I was making my weekly rounds of the software blogging universe, I came across this fascinating post by John Cook. Besides having a delightfully alliterative title, it struck a chord with me due to my similar experiences.

I’ve been a student of programmer productivity for over three years now, especially as it relates to hiring, pay, and experience. The last two paragraphs of Mr. Cook’s post made me smile, as I can relate. The best (most productive, in this sense) programmers aren’t necessarily the smartest or most technically skilled; they just have a special talent for recognizing and applying the best solution to a given problem. They recognize common problems and know that existing, stable code already solves them. At a higher level, they realize when they’re building the wrong thing or applying a hack rather than a long-term solution, and call it out to the appropriate people. In fact, these skills aren’t exclusive to the current definition of a “programmer” at all; they are often best applied at the managerial or director level. Often, developers (or even managers) don’t have the clout to interrupt an ill-designed project or feature, even when it’s clearly off-track.

I’m very excited to start putting on some more hats in my new business endeavor besides the old, crusty programmer one. Call me biased, but I’ve always considered developers the single most important link in the production of software. They are what connects the business idea to its implementation, and thus can have the biggest impact not just on performance and accuracy, but also on requirement satisfaction and usability. The best developer can not only implement a fast, clean, and elegant software solution, but also prevent a poor user interface or an inappropriate system from ever being realized. Those latter cases represent the lion’s share of wasted time and money in a software project, as constant redesigns and reworks are required.

It would be interesting to extrapolate this concept to the productivity divide between large and small software companies or the necessity of software CEOs to have an excellent technical background. Perhaps in a future post.

Automatic Updates and Software Accountability

I bought an iPhone a few months ago. It was a difficult decision, weighing the sheer beauty of the device against its commandeering popularity. Having always resisted spending too much time on it, I never bothered seeking out interesting and useful applications. I did end up installing one game, though: Scrabble, and I’ve become quite attached to it. The other day I initiated a routine app update, let the updates install, and launched Scrabble.

Same splash screen as always, then, wait… nothing?

Just a black screen. I waited a couple of seconds and pressed the home button. Tried again. Same thing.

I must say that my experience with the device has been pretty great thus far. Beyond some annoying service interruptions here and there, everything of significance has worked quite well. Undoubtedly other, more enthusiastic users haven’t been so lucky, but this was a first for me.

Immediately I thought, crap, why do I bother downloading these updates? Everything works great, what’s the point? What’s in it for me?

Well, not a lot, really. I suppose it depends on the user, but if I download an app and it does exactly what I expect it to, I don’t think about changing it. Updates are just pushed down from the developers to provide new features, bug fixes, or (unfortunately) advertising.

Not to say that perfection should be expected from version 1.0; that would be nearly impossible in any app of sufficient complexity. Bugs occur and need to be fixed. But what happens when users download these routine updates only to have them render their applications unusable? Naturally, they stop trusting them and, with enough annoyance, stop bothering completely.

This is unfortunate for users and developers alike, rendering a useful feature less useful. Yet software is increasingly deployed more or less automatically, rarely requiring more than a simple click from the user. The days of shrink-wrapped software are past.

Don’t pollute the ecosystem! Really think about it: how vital is this new software update? How well has it been tested on each platform? How will it affect the user experience? And dammit, Scrabble, what am I going to do on the bus now?

Perfectionism

Seven years ago I sat down at a Unix terminal in a dark, stuffy, tiny computer lab in the old Computer Science wing at RIT. Coming from a non-programming but computer-enthusiast background, I found the whole command-line environment quite foreign. I was tasked with implementing a few Java methods as part of a weekly lab and, though it seems trivial looking back, was rather overwhelmed. How many lines should this algorithm require? Are there naming conventions I should be following? How should I format my code?

I’m a perfectionist. I got it from my Dad. For a while I thought it was a bad thing and that it was preventing me from getting stuff done. I don’t think that anymore; rather, I think that in order to be a competent programmer you must be a perfectionist.

This isn’t to say I’m some sort of rock-star programmer. Far from it, actually. Writing code is hard. There are so many variables to consider when writing even the simplest of functions that it makes your head swim. What if the database server goes down? What if, after checking for a user’s existence, another thread creates that user right before you try to create it yourself? What if all the parameters are null?
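
To make that middle worry concrete, here’s a rough sketch in Java of one way to handle all three cases at once. (The UserStore class, the createUser method, and the users table are hypothetical names invented for illustration, not code from any particular project.)

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

// Illustrative only: UserStore and createUser are made-up names.
public class UserStore {

    private final Connection connection;

    public UserStore(Connection connection) {
        this.connection = connection;
    }

    // Inserts a user while guarding against the three worries above:
    // null parameters, a concurrent insert of the same username, and an
    // unreachable database.
    public boolean createUser(String username, String email) {
        // "What if all the parameters are null?"
        if (username == null || username.isEmpty() || email == null) {
            throw new IllegalArgumentException("username and email are required");
        }

        String sql = "INSERT INTO users (username, email) VALUES (?, ?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, username);
            statement.setString(2, email);
            statement.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException duplicate) {
            // "Another thread creates that user right before you try":
            // lean on a UNIQUE constraint instead of check-then-insert and
            // treat the duplicate as an expected outcome, not a crash.
            return false;
        } catch (SQLException databaseDown) {
            // "What if the database server goes down?" Surface it as a
            // distinct error the caller can report or retry.
            throw new IllegalStateException("could not reach the database", databaseDown);
        }
    }
}
```

The point isn’t the specific code; it’s that the check-then-act race can’t be fixed by checking more carefully. Something, usually the database’s own uniqueness constraint, has to be the final arbiter, and the collision has to be treated as a normal outcome rather than a failure.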

Every day, I find myself making incremental improvements in my code. Realizing new error conditions. Documenting more thoroughly. But what am I still leaving out? A lot, undoubtedly. But only through constant re-analysis do I even come to these small realizations. Re-reading methods and comments. Refactoring constantly. Writing cleaner code. I can’t imagine what a less-obsessive developer is leaving out. Frankly, it’s scary. But, then again, programming is hard. It’s not going to get easier any time soon, and there’s no silver bullet. For now, I’ll just be my anal self, which, luckily, comes rather naturally. Some aren’t so lucky, and arguably shouldn’t be writing code.

© 2024 Pockets of Blue
