In the early 1960s, a mathematician named Edward Thorp teamed up with the information scientist Claude Shannon and built the first wearable digital computer – an easily concealed, cigarette-pack-sized device they used to beat the roulette wheels at Las Vegas. They worried about being caught – “That was the era of kneebreakers, and worse”, Thorp told me – but it worked. Beginning in the 1980s and 1990s, computer components became small and light enough to make it feasible to attach them to your body. A group of students at MIT – often called “the Borg” after the part-machine, part-organic alien collective of Star Trek – began experimenting with wearable designs.
One student, Thad Starner, created such a computer in 1993 to solve the problem of taking notes in class. Starner noticed that whenever he wrote down what his professors said, he stopped paying attention to what they were saying; his notes were often illegible, too. “All these lessons I was learning were going in one ear and out the other,” he says. So he put computer parts in a backpack and connected them to an LED display that he clipped to his head and positioned a few centimetres in front of his right eye.
To input information, he used a one-handed keyboard called a Twiddler. This way, he figured, he could write notes in class while keeping his head up and following the professor. For the next 20 years, many of them as a professor of computer science at the Georgia Institute of Technology, Starner wore his computer almost daily. He used the device to capture and instantly retrieve knowledge. During pauses in conversation, while riding in cars, while at a lecture, he’d record the most interesting parts of what people were saying. When I visited him in January, his archive was 2.2 million words. Talking to Starner can be a remarkable experience, because he’ll startle you by bringing up precise details from conversations held months, or even years, earlier.
Strict social protocols
Starner has evolved strict social protocols about when and how to use his wearable computer, to avoid ignoring people. For example, he never checks e-mail while talking to someone. “Your IQ goes down like 40 points,” he says. “You’ve got to make the systems so that they help people pay attention to the world in front of them,” Starner argues.
The distinction, he says, is between trying to juggle rival cognitive tasks – task switching – and using several information streams that are all focused on the same matter and reinforce one another. While Starner gave a presentation to the National Academy of Sciences in 2010, for instance, a group of his students listening in remotely at Georgia Tech texted factoids to him. “My students, seeing me stumbling on something, would throw up a URL,” he says. “It made me seem smarter than I am.” Since the facts were germane to his presentation, they didn’t distract him; he easily incorporated them into his talk. Today’s mobile phones, of course, can do much of what Starner’s home-built machine could do. But the difference, he claims, is how much faster his wearable was.
Although we joke about constantly Googling information on our phones, in reality, he suggests, people don’t pull them out for that purpose all that often. In contrast, he can find something in his archived notes in seconds – so he checks it all the time. “The big thing about augmented memory is access time,” he says.