The website of the Human Oriented Technology Lab at Carleton University reads: “As technology becomes increasingly sophisticated and pervasive in people’s lives it is important to foster research and innovation that remains closely linked with the needs, wants and capabilities of people.”
Sure, but whose needs, wants and capabilities were taken into account before now?
For decades, gadgets marginalised people who didn’t understand them. In the 1980s, programming your VCR was as much a standing joke as airplane food. In the 1990s, nobody over the age of 20 could send a text message in less than half an hour.
These gadgets didn’t meet our expectations. Tapping dozens of buttons, scrolling through list-based menus or trying to work out what an icon of two circles joined by a line is supposed to mean aren’t natural ways of interfacing with the world. That’s not to say these gadgets weren’t popular, but rather than being used intuitively, their owners had to sit down and learn how to use them. Not quite man-machine synergy.
I am 24 years old, and for 14 of those years I have been the technology expert in my family. My dad, an intelligent, articulate, savvy man, has not only been confused by his gadgetry, but has sometimes been so flummoxed by the thing that he’s been unable to describe the problem.
Why? Because they don’t do what he expects them to do. My nan’s first mobile phone didn’t have predictive texting on it. Her second one, bought after a year of practising with her first, did – and had it switched on by default. She couldn’t work out how to turn it off, so she stopped texting again. Are we born with the innate knowledge that hitting the hash key twice switches from predictive to normal texting? I’m no neuroscientist, but I doubt it. Is not understanding the intricacies of these gadgets her fault? No.
If anyone’s to blame, it’s the person who designed the phone, but we have to bear in mind the limitations imposed on mobile phone designers over the last ten years – size, cost, performance, hardware constraints, battery life… when you really start to think about it, it’s little wonder the things have been such a chore to use.
There is an awful lot that we do innately understand, though. Bobby McFerrin demonstrates this brilliantly in the video below:
Did you know that he was demonstrating the pentatonic scale? Even if you did, did you need to in order to sing along? Thankfully, we’re now at a point where technology has advanced enough to meet us half way in rising to these natural expectations within us all.
The two examples that leap to mind are the iPhone and the Nintendo Wii.
Give someone an iPhone (even, shock horror, someone over the age of 40) and, just through playing with it, they will very quickly learn how to use it. They start by being impressed by the clever little touches – swiping to scroll, pinching to zoom, twisting to rotate – then progress to typing on the qwerty keyboard and noticing how the interface changes when the accelerometer detects it has been tilted. It’s outselling most other phones in most markets, and not just because it’s cute.
The majority of Nintendo Wii games are cute, with oodles of the fuzzy anime-like charm Nintendo does so well, but the real reason behind the blinding success of the console lies in its revolutionary, pseudo-real movement-based interface. Demonstrating this with my year 11 top set, I mimed my way through several different games with a Wii controller in my hand (baseball, tennis, shoot ’em up, golf, etc.) while dragging a hapless volunteer to stand beside me and mime the same genre of game using a PS3 controller. Smirks all round, but they got the point. These inventions make sense.
“Who here has a Nintendo Wii?” Hands fly up. “Keep your hand up if you have ever needed to read the manual.” Hands snap back down.
So are we there? Have we truly reached the technological nirvana of seamless human-computer interaction? Not quite, but over the last few years we have seen some considerable steps in the right direction.
“What else do we need?” was the next question for my class. Here’s what they came up with:
- Voice recognition & commands: lights on, music, coffee
- Gesture recognition: wave, point, stop/go, nod/shake
- Further developments of touch technology: integrate with PCs, not just phones
- Develop movement recognition: Xbox Project Natal is another step in the right direction
- Biometrics: fingerprint, iris scan, DNA recognition
Quite an extensive list, really – I was impressed by the last one. The guy who suggested it had been reading up on the ID cards that have been causing a furore since they were announced several years ago. The idea of biometric integration with technology reminded me of the closing section of an excellent lecture given at Huddersfield University last year by Professor Sir Alec Jeffreys, inventor of the DNA fingerprint.
In the lecture, he described the effect technological progress had had on the process of identifying a sample for DNA fingerprinting – identification of a person, or even a person’s relatives, by using samples of skin, hair or bodily fluids.
When he invented the process in 1984 it took two weeks in a laboratory in order to get a result. In 2008, it took about an hour with a kit that could fit into a briefcase.
And so he revealed his vision for the future: a Britain where nobody would ever worry about losing their keys. A nation of doors without locks or handles – when you got home, you wouldn’t have to put a key into a lock. You would spit on the door; it would analyse the DNA contained in the sample and, as if by magic, open before you.
A utopian vision of the future: millions of homes with phlegm-covered doors.