The technologies that will transform our lives decades from now are already taking shape in laboratories around the world. David Pogue imagines what the Tech page of The New York Times might look like 10, 20, or 30 years from today, as he meets the innovative engineers and computer scientists working to create thought-controlled video games, robotic exoskeletons, and virtual reality that seamlessly integrates with the real world.
What Will the Future Be Like?
PBS Airdate: November 14, 2012
DAVID POGUE: Technology is on the rise. And who knows how far it will go?
If my goggles don’t deceive me, that’s you sitting across from me.
HENRY FUCHS (University of North Carolina): Ah!
DAVID POGUE: I’m David Pogue, and on this episode of NOVA scienceNOW,…
…I’m peering into the future to find out…
…if robots can learn how to walk, will they become our constant companions?
DENNIS HONG (Virginia Tech): My dream is to have robots living with us in our home.
DAVID POGUE: Could wearable robots give us superhuman strength…
RUSS ANGOLD (Ekso Bionics): I can actually put my weight on the structure, while you are wearing it, and you don’t feel it.
DAVID POGUE: …and transform our lives?
RODNEY BROOKS (Rethink Robotics): The boundary between a person and robots is already starting to change.
DAVID POGUE: And…
My god! I can control this thing with my mind!
Is it possible for a machine to read your mind…
I am a superpower!
…and reveal your innermost secrets?
The computer is correct.
MARCEL JUST (Carnegie Mellon University): What if all of our thoughts were public? Lying would go away. It’s sort of like a mental nudist colony.
DAVID POGUE: Where is all this leading us?
I’m exploring the good,…
So now you can walk. That’s mind-blowing!
SHERRY TURKLE (Massachusetts Institute of Technology): Not every advance is progress. Not every new thing is better for us humanly.
DAVID POGUE: …and the not so pretty side of technology, all to find out…
Whoa, that’s scary!
What Will the Future Be Like?
And he gave me a dirty look!
Up next on NOVA scienceNOW!
When we think about the future, one of the first things that comes to mind is the robot, like those droids in Star Wars, that tirelessly fulfill our every need. But how close are we to the sci-fi dream of robots that seamlessly fit into our world? Humanoid robots that look and act like us?
ROBOT BARTENDER: Your second round, sir.
DAVID POGUE: Here at Virginia Tech, roboticist Dennis Hong believes that day will come, only when robots master an incredibly complex skill, something that’s very difficult for them to learn, but comes naturally to us: walking.
DENNIS HONG: Nobody is controlling anything. And if you look at his head you can see that he’s looking around, so he’s trying to figure out where he is in the soccer field.
DAVID POGUE: He may be short, he may be skinny, but he walks and can even kick a soccer ball.
DENNIS HONG: Goal!
DAVID POGUE: Wouldn’t it be simpler to build something with treads, like on a tractor or something?
DENNIS HONG: That’s a very good question. Some people say we can do things with wheels.
DAVID POGUE: That’s right.
DENNIS HONG: But my dream is to have robots living with us in our homes.
DAVID POGUE: And to do that they need to climb stairs, open doors, take out the trash and clean up the dishes.
DENNIS HONG: If you want to have these types of robots living with us, in an environment designed for humans, then I say that the robot has to be the shape of a human being.
DAVID POGUE: So, you’re acknowledging that it’s a much more difficult task to make a humanoid robot that walks, but you’re saying the payoff will one day be worth it?
DENNIS HONG: Absolutely.
DAVID POGUE: Dennis hopes, in the future, humanoid robots won’t just do the dishes, they’ll do jobs that are risky for us to do.
DENNIS HONG: Dirty, dangerous tasks…
DAVID POGUE: Like cleaning up after a chemical plant leak, helping people out of harm’s way, or fighting fires.
DENNIS HONG: Humanoid robots are really great for those kind of things—real, useful tasks that can actually save people’s lives.
DAVID POGUE: How do you transform metal, motors and microchips into a machine that can walk on two legs?
All right, so Dennis, this is your “home robot construction kit?”
DENNIS HONG: This is the robot cooking show, as a matter of fact.
DAVID POGUE: Cooking show?
DENNIS HONG: You’ll be making one of these.
DAVID POGUE: Oh, wow.
DENNIS HONG: This is called “Darwin O.P.”
DAVID POGUE: Darwin is a pint-sized humanoid robot that Dennis created back in 2004, a research platform he uses to find out what it takes for a robot to walk on its own two feet.
DENNIS HONG: This is all the parts you need, and you are going to put it together today.
DAVID POGUE: Yeah, right.
DENNIS HONG: Are you ready?
DAVID POGUE: No.
This is not Legos, let me tell you. We’ll start out with the M4.
To create Darwin, Dennis took his cues from nature. Number one, he needs to see. As you walk, you use your eyes to assess your environment. Darwin sees the world through this tiny webcam.
DENNIS HONG: Actually, the eyes, the big ones, are just fake. The nose is actually the camera.
DAVID POGUE: So you’re saying that all this is just cosmetic?
DENNIS HONG: Those are not the eyes.
DAVID POGUE: Oh, you’re such a…
DENNIS HONG: We cheated.
DAVID POGUE: Number two, he needs a sense of balance. As you shift your weight from one foot to the other, your inner ear is able to sense the change in your position and keep you from falling. Darwin gets this ability from a sensor in this circuit board.
DENNIS HONG: These two small things, these are the balance sensors. So it knows its orientation and direction.
DAVID POGUE: Ah, that seems to want to fit right here.
DENNIS HONG: You’re good at this. Have you done this before?
DAVID POGUE: Thousands of times.
Even if Darwin can see and balance, he still can’t move without muscles and joints. Number three: your muscles are your body’s engine; you can’t move without them. Darwin moves with the help of actuators.
DENNIS HONG: For each moving joint we have one of these actuators that is basically an electric motor.
DAVID POGUE: A motor that converts electrical energy into motion, and that gives Darwin the ability to move.
DENNIS HONG: He’s done.
DAVID POGUE: We have arm!
DENNIS HONG: Yay!
DAVID POGUE: As for his sense of touch, he gets it from these four little sensors on the bottom of his feet.
Hours later, I’m finally finished.
Tell me that’s the last step.
DENNIS HONG: You’re almost done. Now you need to attach the arm to the rest of the body, and that’s the last step.
Hello, I’m Darwin.
DAVID POGUE: He’s adorable. I can already feel him seeking world domination and wishing his screws were done better.
Even though my robot’s fully loaded, he can’t walk yet, because he still needs a brain. And that’s where roboticist Dan Lee comes in.
DAN LEE: So, the first thing we’re going to show you is how the robot uses its vestibular sense, which is its sense of balance.
DAVID POGUE: “Vestibular” from the word “vestibule,” which is one of the things in our ears.
DAN LEE (University of Pennsylvania): Exactly; inside your inner ear. And so, without using its vestibular sense, what would happen if we pushed the robot?
DAVID POGUE: All right.
DAN LEE: It just will fall over and then get back up.
DAVID POGUE: Wow. That’s pretty cool, right there.
DAN LEE: So, go ahead. Exactly.
DAVID POGUE: Wow!
DAN LEE: So, now, what we’ve done, we’ve trained this robot using something called reinforcement learning. So, exactly what you just did, we kept pushing the robot over and over, and it was now able to figure out that every time it fell down it was kind of a form of punishment.
DAVID POGUE: When Darwin falls, his software gets an electronic signal that basically says, “This is bad.” After being bullied around hundreds of times, he finally learns it’s better to do this.
Fall over. Oh no!
So, the hundreds of different reps are so that it can learn from this angle and this angle and this hard and this hard and this hard.
DAN LEE: Exactly.
DAVID POGUE: I see.
Dan’s even teaching his robots to learn through imitation.
(Singing) We will, we will rock you.
This camera detects my body’s movement and sends that information to Darwin’s software, which quickly translates it into a copycat movement of his own.
You’d think, with all this sophisticated software, Darwin would be able to keep up with me.
Look forward. Oh, so sorry.
But he can’t.
DENNIS HONG: Just trying to make a robot walk steadily is ridiculously difficult. I won’t say impossible, because I don’t like that word, but it’s a very difficult challenge.
THURMON LOCKHART (Virginia Tech): All right, so, David,…
DAVID POGUE: Yet, we’re masters at it.
THURMON LOCKHART: Come on in.
DAVID POGUE: Wow. What are you doing here?
And engineer Thurmon Lockhart is trying to figure out why. He’s built this strange looking device to analyze, not just how we walk, but how we avoid falling.
And I’m about to try it out.
But first, I need to suit up.
Thurmon’s outfit is filled with sensors that measure how far, how fast and in which direction I move.
I can’t decide if I feel more like a superhero or a Broadway dancer.
THURMON LOCKHART: Both.
And you’re going to have to wear a headband, as well.
DAVID POGUE: This is just an elaborate prank to humiliate me on television, isn’t it?
The little white balls are part of an optical motion capture system that instantly creates a stick figure of me.
(Singing) Well you can tell from the way I use my walk, I’m a woman’s man, no time to talk.
Next, I’m put in a harness, because it’s time to take a stroll through Thurmon’s obstacle course.
THURMON LOCKHART: That’s good.
DAVID POGUE: After walking back and forth several times, suddenly…
THURMON LOCKHART: Oh, good.
DAVID POGUE: What the hell?
THURMON LOCKHART: Did you feel that a little bit?
DAVID POGUE: Did I feel it? You made the earth shake under my feet.
But check out what happens when I do this a second time.
I handle the jolt much better. My body and brain have already integrated thousands of pieces of information in a flash.
THURMON LOCKHART: You ready?
DAVID POGUE: That’s because we have the exceptional ability to adapt to sudden changes in our environment.
Walking is a skill that took millions of years for us to develop. And, when you think about it, it still takes each of us about a year to go from floppy, to crawling, to waddling and, finally, mustering the skills and courage to walk on our own two feet.
And we never stop learning how to adapt to the many obstacles we confront every day.
So walking is not just more complex than I thought, it’s much more complex than I thought. And if you wanted to design a robot that could walk as well as a person, I mean, this would be fantastically complicated software. I mean, it would have to be doing billions of calculations with every step.
THURMON LOCKHART: It is amazing that we are able to do it almost innately and without really even thinking about it.
DAVID POGUE: Dennis Hong hopes that in the future, his robots will be able to master this extraordinary human skill. And they’re learning how to do it one kick at a time.
DAVID POGUE and DENNIS HONG: Score!
DAVID POGUE: Every year, hundreds of teams from around the world compete at RoboCup Soccer, a competition designed to foster research in robotics and artificial intelligence.
DENNIS HONG: To make an autonomous soccer-playing robot, you really need to solve all the grand challenges, the really difficult problems in robotics. Robot vision, autonomous behavior, bipedal walking—and running, in the future. All of these need to be solved to truly build a soccer-playing robot.
DAVID POGUE: But what comes naturally to us,…
DAVID POGUE and DENNIS HONG: Whoa…
DAVID POGUE: …comes a lot harder for Dennis’s robots.
You think you’re pretty good at soccer? I’m better. I can do this. Whoa, that’s scary. He got right back up, and he gave me a dirty look!
Despite his robot’s shortcomings, Dennis is optimistic.
DENNIS HONG: By the year 2050, we want to have these types of full-size humanoid robots play soccer against the human World Cup champions and win.
DAVID POGUE: You’re going to have robots playing humans and you expect them to win?
DENNIS HONG: Yup.
DAVID POGUE: Here. Twenty bucks. There you go.
While this may sound like a sci-fi fantasy, many experts believe humanoid robots could progress a lot faster than we think. Sophisticated robots are already building our cars. Pretty soon they could be serving us drinks and even doing the laundry.
In Japan, where the aging population is growing faster than in any other country, researchers are developing robots to care for the elderly, from bathing them, to moving them.
And one day they may even babysit our kids, a job that has always required a human touch.
Sherry Turkle, a researcher at M.I.T., who’s written several books about the effects of technology on humans, is concerned about our future relationship with robots.
SHERRY TURKLE: It’s too easy to look at them and say, “Oh, they’re not there yet.” Well, they will get to something very powerful that we will want to hang out with. And then you have to say, “Well, where will we have gotten to? Why is that something that we want to develop?”
DAVID POGUE: And in the future, robots won’t just be taking care of our kids, they may become a part of us, literally merging with the human body and transforming both of us into something in-between.
At Ekso Bionics, they’re developing a robot that could restore our ability to walk.
RUSS ANGOLD: This is our exoskeleton. So, you can see it’s a robot that can walk and move without somebody in it. And this one’s designed for paraplegics, to get them up and walking again.
DAVID POGUE: People like Amanda Boxtel.
You’re breaking a few speed limits there.
In her 20s, Amanda was injured in a skiing accident.
AMANDA BOXTEL: I lost all sensation and movement from my pelvis down. There’s nothing.
DAVID POGUE: So, crutches, no good?
AMANDA BOXTEL: No. I mean, I can’t move my legs.
DAVID POGUE: But in the future, that could change, with the help of a wearable robot.
Wow, is it standing you up now?
AMANDA BOXTEL: Yeah, I couldn’t do that on my own.
DAVID POGUE: Wow. That’s mind-blowing. So now you can stand and you can walk?
AMANDA BOXTEL: Yes.
DAVID POGUE: This I’ve got to see.
AMANDA BOXTEL: Let’s do it.
The exoskeleton is intelligent enough that it senses my center of gravity and, also, when I shift my weight over to a foot, then it triggers another step.
RUSS ANGOLD: So Amanda shifts her body like you or I would to take a step, and sensors pick up that intent.
DAVID POGUE: That sends a message to this onboard computer, which tells the exoskeleton it’s time to take another step. Right now, the exoskeleton works only on flat surfaces, but one day, these researchers hope Amanda will be able to use it anywhere, even walking up stairs.
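The step trigger described here can be sketched as a simple threshold rule: when the sensors report that enough of the wearer's weight has shifted over one foot, the controller commands the other leg to step. This is a toy illustration, not Ekso's control software; the threshold and sensor readings are invented.

```python
# Fraction of total load over one foot that counts as a deliberate
# weight shift (invented value for illustration).
STEP_THRESHOLD = 0.7

def step_command(left_load, right_load):
    """Decide what the exoskeleton legs should do, given force-sensor
    readings (in pounds) under each foot."""
    total = left_load + right_load
    if right_load / total > STEP_THRESHOLD:
        return "step_left"    # weight is over the right foot: swing the left
    if left_load / total > STEP_THRESHOLD:
        return "step_right"   # weight is over the left foot: swing the right
    return "hold"             # weight is centered: stand still

print(step_command(80.0, 20.0))  # weight shifted over the left foot
```

A real controller fuses many more signals (joint angles, torso tilt, timing) and smooths them over time, but the core idea is the same: read the wearer's intent from where the weight goes, then act.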
And they aren’t just helping people like Amanda, they may help average Joes like me, with this.
RUSS ANGOLD: So, David, here we have the HULC exoskeleton.
DAVID POGUE: The HULC is designed to help people carry heavy loads, and it would come in really handy if you happen to be a firefighter.
How much weight do these guys have to lug?
DAVE (Ekso Bionics): All right, pants: 10 pounds.
DAVID POGUE: Boots.
RUSS ANGOLD: Two pounds.
DAVE: Let’s go to field jacket here. That’s another eight pounds.
DAVID POGUE: All right. Wow.
DAVE: Breathing apparatus: 30 pounds.
DAVID POGUE: Whoa.
DAVE: You can’t fight a fire without hoses.
DAVID POGUE: Oh, jeez.
DAVE: That’s 50 pounds of hoses.
DAVID POGUE: I’ve got a hundred pounds of equipment on me.
RUSS ANGOLD: Is it heavy?
DAVID POGUE: Is it heavy? It’s a hundred pounds!
DAVE: Your buddies up on the third floor of the building are going to need some…
DAVID POGUE: No, no, no.
DAVE: …some more air.
DAVID POGUE: Oh, jeez, wait a minute, wait a minute. No firefighter would go in like this, come on.
RUSS ANGOLD: They do what it takes.
DAVID POGUE: Okay, I’m going to break your scale. I hope you don’t mind.
RUSS ANGOLD: You’re carrying an extra 130 pounds of weight.
DAVID POGUE: I’m carrying 130 pounds, and I’m supposed to put a fire out.
RUSS ANGOLD: That’s right.
DAVID POGUE: Wow, all right, so you’ve made your point. This is unbearable.
And here’s where the HULC comes in.
RUSS ANGOLD: This structure actually surrounds your body. The idea is to take all the weight you put on it, all the way down to the ground, completely bypassing you.
All that weight that you put on that external skeleton actually bypasses the user inside, taking that weight to the ground. So they actually don’t feel the weight that they are carrying.
DAVID POGUE: Not only that, the HULC is also designed to increase your strength.
RUSS ANGOLD: And what I’m going to do is turn it on. Now you can actually feel a little bit of power.
DAVID POGUE: Oh, I see, it’s picking itself up.
RUSS ANGOLD: It’s picking itself up.
DAVID POGUE: So this is going to be my thigh, and when my thigh moves, that triggers it to start helping me?
RUSS ANGOLD: Actually, this is at our very lowest setting.
DAVID POGUE: Can you turn it to the highest setting, let me see what happens.
RUSS ANGOLD: Now try that.
DAVID POGUE: Whoa! Hey that works.
Now, I put on the HULC and get loaded up again.
DAVE: Ready for the hoses?
DAVID POGUE: Yeah. This is what killed me before. Go ahead and put on the hoses, Dave.
These are different hoses.
RUSS ANGOLD: Same hoses.
DAVID POGUE: That’s amazing. I feel nothing. It’s like, you might as well put it on the roof of my house. It doesn’t affect me at all.
RUSS ANGOLD: That’s the idea. I can actually put my weight on the structure, while you’re wearing it, and you don’t feel it.
DAVID POGUE: Oh, dude, that’s amazing.
RUSS ANGOLD: So, all that weight bypasses you, comes down that torso, down these titanium legs, all the way into the ground. So you actually don’t feel that weight.
DAVID POGUE: Now should I try walking?
RUSS ANGOLD: Yeah, give it a try.
DAVID POGUE: It takes some getting used to.
There are fractions of a second when I suddenly feel really heavy and then the robot says, “Oh, here, I can help you, pal,” and takes some of the weight off.
RUSS ANGOLD: And takes it over, yup.
DAVID POGUE: Wow.
Don’t worry, ma’am, I’ll save you. That’s what I’m here for, ma’am. Give me your hand.
David Pogue is Backdraft.
RODNEY BROOKS (Rethink Robotics): The boundary between a person and a robot is already starting to change. Lots of people have mechanical hips, and then people have electronic interfaces to their cochlea. We are going to merge with our machines more and more. We’re already merging with our machines.
DAVID POGUE: In the future, will wearable robots like this give almost anyone a leg up?
No one looking at us would ever know that you’re a paraplegic and I’m a skinny nerd.
AMANDA BOXTEL: Yeah.
DAVID POGUE: It’s hard to imagine just how much robots may change our world.
One of the first known robots was invented by Leonardo da Vinci. Made from a suit of armor packed with springs, gears, and pulleys, it could reportedly sit, stand, walk, and raise its arms. The mechanical knight may not have launched a Renaissance Robo-Revolution, but a modern version was recently reconstructed. So watch out.
DAVID POGUE: Mindreading!
It’s a timeless fantasy that’s shown up in science fiction and movies for decades, but now, scientists may finally be figuring out how a machine could read your mind. And, for the very first time, mind reading headsets are becoming real.
TAN LE (Emotiv Lifesciences Inc.): You really want to just slowly imagine the cube fading out into that black.
DAVID POGUE: Look what I can do to the orange cube, without touching any dials or keyboards, but just thinking, “Disappear.”
My god. I can control this thing with my mind.
Tan Le is an entrepreneur, with a headset that must be reading my mind,…
TAN LE: We have to actually train the system.
DAVID POGUE: …because she’s turned it into the ultimate remote control. Just by thinking commands, I can make the orange cube on a computer lift; I can start this car; and launch this helicopter.
The future is going to be awesome.
I am a superpower!
So how does this contraption work? Is it mindreading?
TAN LE: I wouldn’t say, necessarily, “mindreading.”
DAVID POGUE: The headset doesn’t actually hear my thoughts, but its 14 electrodes do pick up patterns of electrical activity coming from my brain, my brainwaves.
Brain cells communicate with each other by firing off tiny chemical and electrical signals. And whenever I think something like “disappear,” a particular pattern of brainwaves is generated. The headset picks that up.
TAN LE: So, as the neurons inside your brain fire up, the signal gets weaker and weaker, as it travels through, and then gets projected onto the surface of the scalp.
DAVID POGUE: Oh, wow. Okay.
TAN LE: So it’s very, very faint.
DAVID POGUE: So they’re not thoughts. It’s not mindreading. It’s like the echo of neural activity deep in my brain?
TAN LE: That’s right.
DAVID POGUE: Even though it’s just an echo, the signal is good enough for the computer to recognize a simple brain pattern, once it learns it, like, “Lift.”
And voila! It’s reading my mind.
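The "train it first, then recognize it" process Tan Le describes can be sketched as a nearest-template classifier: record feature vectors from the electrodes while the wearer rehearses each command, average them into a template, then label new readings by whichever template they sit closest to. This is a toy illustration, not Emotiv's algorithm; the numbers are invented, and real EEG features are far noisier.

```python
import math

def centroid(samples):
    """Average a list of equal-length feature vectors into one template."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Training phase: a few feature vectors recorded while the wearer
# rehearses each mental command (values invented for illustration).
training = {
    "disappear": [[0.9, 0.1, 0.2], [1.0, 0.2, 0.1], [0.8, 0.0, 0.3]],
    "lift":      [[0.1, 0.9, 0.8], [0.2, 1.0, 0.9], [0.0, 0.8, 1.0]],
}
templates = {cmd: centroid(s) for cmd, s in training.items()}

def recognize(reading):
    """Label a new reading by its nearest trained template."""
    return min(templates, key=lambda cmd: distance(reading, templates[cmd]))

print(recognize([0.85, 0.15, 0.2]))  # closest to the "disappear" template
```

This nearest-template scheme also shows why the headset can be "easily confused," as Tan Le warns next: any stray brain state whose features drift close to a trained template will fire the command.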
Can you imagine, I mean, some future world where everything is hooked up to this? I could just make anything happen, just by wishing it. Or at least, that’s what I was hoping, until Tan Le tells me this headset can be easily confused; in other words, wrong.
TAN LE: If you were wearing this all day long, I can imagine instances when you might have a brain pattern that’s very similar to when you were thinking about disappear, and it may trigger that same action.
DAVID POGUE: You mean, things might happen when I’m not wishing them to?
TAN LE: That’s right.
DAVID POGUE: Any mindreader that relies on electrodes on the surface of the scalp is bound to be imperfect, because what it “hears” is a mere echo of my brain cells firing. But what if we could tap directly into the brain?
That’s what they’re attempting here at Brown University.
Cathy Hutchinson is paralyzed from a stroke, but she’s controlling a robotic arm, with much more precision than any headset would allow, thanks to sensors that have been implanted directly onto the surface of her brain.
Cathy made headlines when she played a crucial role in a groundbreaking mindreading experiment. She simply thought about reaching out to pick up a cup of coffee, the sensors in her brain picked up electrical impulses, and a computer turned them into commands, controlling the robotic arm.
It’s an astonishing breakthrough for brain science, that offers hope for the paralyzed.
I went to see John Donoghue, one of the heads of the BrainGate team at Brown, to find out how they turn mind into motion.
This is a model right?
JOHN DONOGHUE (Brown University): No, this is a real human brain, with its spinal cord attached.
DAVID POGUE: Come on!
JOHN DONOGHUE: This is an adult brain. This is, you know, it’s the right size to fit inside your head.
DAVID POGUE: John’s been working toward a machine that can tap into our brains for more than 20 years.
JOHN DONOGHUE: The problem is really quite immense. We had to know where in the brain the signals are; but we’ve known that. If you follow back a little distance behind the middle of the brain and you run into this little bump, this is the marker for the arm, this little twist. And that little twist is the place, the gross anatomical landmark for where your arm is actually controlled.
DAVID POGUE: So every time you move your arm, first, this one little spot on the brain says “go” and sends signals to a particular set of muscles, and then the arm moves.
JOHN DONOGHUE: The next problem is, how do you get that signal? And we need to have a sensor, you need to have something that can pick those signals up. So, we’ve developed this microelectrode array which is extremely tiny.
DAVID POGUE: The size of a baby aspirin, the microelectrode array, with its 100 tiny probes, was implanted on the spot in Cathy’s brain that controls the arm.
Still, turning the signals into clear instructions for the robot wasn’t easy.
So this seems to be the arm. This is the one I saw in the video of Cathy Hutchinson controlling it with her brain?
LEIGH HOCHBERG (United States Department of Veterans Affairs): That’s right. This is one of the two arms that she was using.
DAVID POGUE: Wow. And so, how does it work, exactly?
LEIGH HOCHBERG: Well, why don’t you give it a try?
DAVID POGUE: Okay.
To demonstrate how incredibly complex the brain’s control of movement really is, neuroscientist Leigh Hochberg asked me to try to move this robot arm with a joystick.
All right. Oh! On the white rug, too. Oh, dear.
LEIGH HOCHBERG: Try again?
DAVID POGUE: It would be so much easier, if I only had a brain.
Stop, stop! It’s taking over! It’s an uprising!
LEIGH HOCHBERG: Almost.
DAVID POGUE: I can see it takes practice.
LEIGH HOCHBERG: It takes some practice.
DAVID POGUE: So, if such simple commands are difficult, imagine how hard it would be to actually read complex thoughts.
Could a machine ever do what the Amazing Kreskin used to claim to do, on his classic 1970s TV show?
KRESKIN (Mindreader, The Amazing World of Kreskin/Film Clip): Does his birthday fall on March the 6th?
WOMAN IN AUDIENCE (The Amazing World of Kreskin/Film Clip): Yes!
KRESKIN (The Amazing World of Kreskin/Film Clip): Thank you very much for standing, ma’am.
DAVID POGUE: I hear that this could be the mechanical Kreskin. And it’s not a magic trick. It’s a nine-ton M.R.I. at Carnegie Mellon University, in Pittsburgh.
Psychologist Marcel Just and computer scientist Tom Mitchell use the M.R.I. to peer directly into the brain as it works.
M.R.I. TECHNICIAN: Hi, David. How’re you doing?
DAVID POGUE: Good.
M.R.I. TECHNICIAN: In this study, you’re going to see labeled pictures of objects.
DAVID POGUE: While I ponder the objects projected onto a screen, the scanner isn’t reading brainwaves or electricity. Instead, it’s measuring the flow of oxygen-rich blood in my brain, to detect exactly which parts are active when I think about different objects.
M.R.I. TECHNICIAN: Okay, great job, David. We’ll come get you in one second.
MARCEL JUST: When you think of something, your brain activates in those places that correspond to your interactions with it.
DAVID POGUE: Like, if I think of “skyscraper,” is there an area of the brain for skyscraper pictures?
TOM M. MITCHELL (Carnegie Mellon University): If you think of a skyscraper, you actually think of many things: you might think of very tall thing; you might think of the material; you might think of going inside of it. What we’ll see in the brain is a whole collage, and, put together, it becomes the signature for “skyscraper.”
DAVID POGUE: The team has already identified the areas in the brain that activate for shelter, for food, and for holding something in your hand.
MARCEL JUST: It’s not like a dictionary definition; it’s kind of an experience definition.
DAVID POGUE: By studying my brain scans, can their mindreading computer guess what I was thinking?
So, I saw 20 pictures flash before me, and on each one I thought about it, imagined it, envisioned it. So how do we know if the computer knew what I was thinking?
TOM MITCHELL: The computer is going to take pairs of those words.
DAVID POGUE: The mindreading computer is given a pair of my brain scans: one when I was thinking of a grape, the other, of a cave. But which is which?
If the shelter area of my brain lights up, the computer guesses I was thinking “cave.” Since the other scan shows activity in the food and handling areas, it guesses that a “grape” was on my mind.
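That matching logic can be sketched as a tiny pairwise decoder: each word has a learned "signature" of brain-area activity, and given two unlabeled scans, the computer tries both pairings and keeps the one with more total evidence. This is a toy illustration of the idea, not the Carnegie Mellon software; the area names and activity values are invented.

```python
# Learned signatures: how strongly each brain area should light up
# for each word (invented values for illustration).
signatures = {
    "cave":  {"shelter": 1.0, "food": 0.0, "handling": 0.0},
    "grape": {"shelter": 0.0, "food": 1.0, "handling": 1.0},
}

def match_score(scan, word):
    # Higher when the scan's active areas line up with the word's signature.
    return sum(scan[area] * w for area, w in signatures[word].items())

def assign(scan1, scan2):
    # Try both possible pairings of scans to words; keep the better fit.
    a = match_score(scan1, "cave") + match_score(scan2, "grape")
    b = match_score(scan1, "grape") + match_score(scan2, "cave")
    return ("cave", "grape") if a >= b else ("grape", "cave")

scan_a = {"shelter": 0.9, "food": 0.1, "handling": 0.2}  # shelter area lit up
scan_b = {"shelter": 0.1, "food": 0.8, "handling": 0.7}  # food/handling lit up
print(assign(scan_a, scan_b))  # ('cave', 'grape')
```

The real system learns signatures from many scans and many brain regions, but the decision at the end is the same kind of comparison: which assignment of words to scans fits the learned patterns best.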
And was it right?
The computer is correct. Number one.
Picking between two words, the computer’s chances are 50-50. But can it keep it up?
Two for two!
Oh, it got nine correct.
And the 10th was correct, also. Nicely done, 10 out of 10. That’s unbelievable!
For all 10 pairs, the computer gets it right, and that’s pretty impressive.
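Just how impressive is 10 out of 10? Since each pair is a 50-50 guess, a computer with no real information would score a perfect run less than one time in a thousand:

```python
# Probability of guessing 10 independent coin-flip pairs all correctly.
p = 0.5 ** 10
print(p)  # 0.0009765625, i.e. about 1 in 1,024
```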
It’s a far cry from walking down the street with a device that could read the everyday thoughts of passersby, but it’s enough to have some experts on the future concerned.
SHERRY TURKLE: Whenever you’re starting to talk about the integrity of the body, the integrity of the mind, and being able to somehow violate that, in any way, it becomes scary.
DAVID POGUE: I’m sure there are many people right now, going “Oh, my god, take away their funding! I don’t want to have my mind read. I want my innermost thoughts to remain innermost.”
MARCEL JUST: What if all of our thoughts were public? Lying, for example, would go away. It’s sort of like a mental nudist colony.
TOM MITCHELL: Well, here’s one way to think of it. Like any big technology, there are all kinds of ways you can use it. And here you could use it for some pretty amazing things. There are also things that none of us would want to do. It’s a good time, now, to begin thinking of those and thinking about what kind of guidelines we want to put in place.
DAVID POGUE: Our world is rapidly changing.
Let’s say I’m in an unfamiliar neighborhood, and I get lost. Just a few years ago I would have spent hours wandering around. Today, all I have to do is pick up my smartphone. An app like New York Subway shows me if there are stations nearby and even shows me which way to walk to get there.
If I need something translated, no problem, Word Lens replaces the text in my view with my own language! And in the future, will it be possible for us to do this? Get information about anyone in an instant?
A paradigm shift is taking place right before our eyes. The real world and the virtual are merging. It’s called augmented reality. And you can experience it with the help of hundreds of apps on your smartphone. But one day soon, companies like Google, the internet search giant, think all this information will be delivered in revolutionary new ways.
BABAK PARVIZ (Google, Inc.): This is Glass, a very different type of computing and communication device.
DAVID POGUE: Think of Google Glass as a wearable smartphone…
Can I try it?
BABAK PARVIZ: Yeah, you can put it on if you like.
DAVID POGUE: …but lighter and quicker to access.
Right now Glass is a work in progress.
I can just flick my eyes into this corner, and I see a very crisp screen.
The little square you see glowing here is actually a tiny computer screen.
Google hopes that in the not too distant future it will bring us our email, show us our text messages and provide access to the internet. And the tiny camera here will be a new way to share your experiences with friends.
So, this is wild! So, I’m seeing a beautiful path through a woods. As I turn my head, I’m actually looking around the scene. Oh, I can even look down, up. Wow, those are beautiful.
But Google Glass is just the tip of the iceberg. Researchers, like Henry Fuchs at the University of North Carolina, are developing technologies that blend the virtual and physical worlds, augmenting our reality with the stuff of sci-fi movies.
HENRY FUCHS: So, this is one of the labs, David, that we have set up to work on augmented reality and telepresence.
DAVID POGUE: So, as I understand it, this is like in Star Wars, when Princess Leia gets beamed out of R2-D2’s head.
Remember that famous scene where Princess Leia records a hologram of herself and sends it to Obi-Wan Kenobi?
CARRIE FISHER (As Princess Leia in Star Wars/Film Clip): Help me, Obi-Wan Kenobi. You’re my only hope.
HENRY FUCHS: That was all special effects. What we hope to develop here is the real thing.
DAVID POGUE: And that’s as hard as it looks.
First, I put on these stylish shades.
TECHNICIAN: Is that too tight?
DAVID POGUE: Nope.
Once we’re set up, we’re ready to roll. We pretend Henry’s on vacation, in his beach house in Hawaii, while I’m stuck in my office here in New York. And suddenly…
Wow! If my goggles don’t deceive me, that’s you sitting across the table from me.
HENRY FUCHS: Wonderful! That’s just the effect that we would like.
DAVID POGUE: Wow, so that’s crazy. You look, you look the right height, size, shape and angle, as though you were actually sitting right in front of me.
I have to admit, it’s the closest I’ve ever come to having a conversation with a hologram, although the image is far from perfect.
So, this is the Model T we’re wearing right now.
HENRY FUCHS: Oh, we’re not even Model T.
DAVID POGUE: Not even Model T?
HENRY FUCHS: Oh, no, no. This is like, you know, 20 years before the Model T.
DAVID POGUE: But the technology needed to create this illusion is anything but 20th century. In order for Henry and me to see each other, he’s rigged our rooms with a bunch of 3D cameras. They are, in fact, Xbox Kinects. With the help of some sophisticated software, they transport his virtual image into my stunning handcrafted headset. And these little silver balls that make us look like aliens are part of a complex tracking system that pinpoints where we are in space.
There’s one serious problem with the video though.
HENRY FUCHS: Yes?
DAVID POGUE: It looks like you’re wearing a hideous, obnoxious Hawaiian shirt of some kind.
HENRY FUCHS: Ha ha ha.
DAVID POGUE: In the future, telepresence systems like this one could come in handy. And wearable smartphones, like Google Glass, may give us a new way to access the virtual world.
The question is: will technologies like these, that aim to immerse us even more into a digital world, improve the quality of our lives?
A lot of experts are wary. They say our immersion in technology is already a problem.
Look at kids today: they’re listening to music while texting a friend, having a conversation on Facebook and doing their homework, in other words, multitasking. Can their brains keep up with all this connectivity?
Stanford Professor Clifford Nass thinks they can’t, that doing so many things at once is making it harder for them to focus.
And he’s devised a test to prove it.
CLIFF NASS (Stanford University): What we’re going to ask you to do is to simply focus on a couple of red rectangles and to ignore blue rectangles. This sounds like a pretty easy thing to do.
DAVID POGUE: Yeah, I’m down with this.
CLIFF NASS: And we’ll see how you do.
DAVID POGUE: As I start the test, two images containing red and blue rectangles flash on the screen. If the red rectangles move from one image to the next, I press one button, but if they don’t move, I press another.
Ah, that changed.
Well, it is, at first.
Jeez, there’s like 1,500 of them.
But as more and more rectangles appear on the screen, it becomes harder and harder to focus on the red ones and ignore the blue.
You completed the filtering task study. You made some careless mistakes. You feel like an idiot.
But, before I can find out how much of an idiot I was, Nass gives the same test to 16-year-old Jordan Ford. Jordan is a proud multitasker and doesn’t think it affects his ability to focus at all.
CLIFF NASS: All right, well, let’s see how you guys did.
JORDAN FORD (Multitasking Test Subject): All right.
DAVID POGUE: Give it to me straight, Doc. How long have I got?
CLIFF NASS: Well, I’m going to keep you in some suspense. Let’s look at Jordan first. So, what we see on this graph is a pretty precipitous downward slide. The more rectangles there were, the worse you did. This is a very, very common pattern we see among teenagers who multitask frequently, because their brains are constantly looking all over the place and trying to process multiple things at once, even when they know they shouldn’t.
Now, let’s see, David, how you did, and that’s this blue line here.
DAVID POGUE: Oh.
I am impressed.
Smoked! Sorry, pal.
CLIFF NASS: Even though you’re a heavy technology guy. When you use technology, you do one thing at a time, you really focus on what you’re doing. And what we see here is a pretty predominant difference.
So what we worry about here is, Jordan, when you really need to focus attention on something, it’s going to be harder and harder.
DAVID POGUE: Parents have been saying, “Quit using X technology, you’ll rot your brain” since my grandparents’ time. You know, like the cave men were probably, “Quit making fire with sticks. You’ll rot your brain,” right? It’s a chronic thing to be suspicious of new technology, to worry about the effects. So, should we really worry yet?
CLIFF NASS: Well, it’s silly to say to a child, “Stop using the computer.”
DAVID POGUE: Right.
CLIFF NASS: You know, just as it’s silly to say, “Stop using fire.” But, you know, you shouldn’t play with fire when there’s dry brush around.
With any new technology, we should be both excited and we should worry, ’cause it often brings some negative things, as well. So the question is, how do we balance the two? That’s really the critical issue.
SHERRY TURKLE: It’s pushing us against a moral imperative, you know? Not every advance is progress. Not every new thing you can do with this incredible technology is better for us humanly.
DAVID POGUE: So where is all this technology leading us? If digital information is becoming even more available to us, and even more immersive and distracting, are we headed to a future where we don’t actually learn things anymore? Are we headed for a world, like in the movie WALL-E, where humans are such passive consumers of technology we become dumb and helpless? Is this what the future will be like? Will the Pogues of tomorrow turn into lazy couch potatoes?
At Maker Faires like this, in the heart of America, folks are determined not to let that happen. Instead of becoming passive consumers of technology, these people are learning to master it. Usually, it’s not about building something with commercial value, but using it to express themselves in the most creative ways possible: like Mickey Miller, a high school sophomore from Toledo, Ohio, who recycled scraps from his garage to create a unique form of transportation;…
So what do you call this thing?
MICKEY MILLER (Maker Faire Participant): Recyclobot.
DAVID POGUE: Recyclobot?
…and kids like these, who join robotics clubs to develop the skills they’ll need to invent a better future.
GIRL: When you work a lot with technology, you try to challenge technology, to make it better, to make yourself better.
BOY: After doing this, I felt like I could build whatever I want and decided to take the initiative and build my own computer.
DAVID POGUE: But this is more than just a fun hobby; it’s becoming a movement that may reshape our world.
RODNEY BROOKS: I think the maker movement is great. This is the new form of hobbyist, and actually they’ve got some pretty interesting tools. The makers have taken the best of microprocessors and 3D printers and built them in a way that ordinary people can control them and can do stuff with them.
DALE DOUGHERTY (MAKE Magazine): We’re showing them, at a really basic level, that they can control technology, that they can do something with it themselves, rather than just be users, that they can create something and make it do something that they want it to do.
DAVID POGUE: And that’s why people are here, taking control over technology instead of being controlled by it, creating a better future from the ground up, a future of their own making.
So maybe there’s hope.
RODNEY BROOKS: There’s a lot of hope.
DAVID POGUE: Excellent.
How hard is it to predict the future of technology? Over a century ago, Thomas Alva Edison perfected the lightbulb and predicted a bright future driven by electricity. He also predicted we’d figure out how to transform iron into gold. We’d ride around in golden taxis. Our houses and furnishings would be gold, too. (If it were too soft, we would opt for steel.) Bright future, indeed.
DAVID POGUE: If you’re worried about the role of technology in our lives in the future, you might be concerned about video games. Worldwide, people spend billions of hours per week playing them. But if you think that’s a colossal waste of time, then you haven’t met computer scientist Adrien Treuille.
He wants to make all that time and energy we spend on videogames count. He believes that in the future, our obsession with them could help solve some of the world’s biggest problems.
ADRIEN TREUILLE (Carnegie Mellon University): Think about Angry Birds. People play that game 3,000,000 hours a day. If we can produce a game that benefits society, with that level of engagement, we could change the world in a week.
DAVID POGUE: But how do you make a videogame that benefits humanity?
Ask Adrien. He’s already done it twice. His games could bring us closer to curing diseases like cancer and H.I.V. And the people playing them aren’t scientists.
The Carnegie Mellon professor harnesses the brainpower of the millions of people who play videogames to solve biological mysteries. It’s a concept called crowdsourcing, putting the crowd to use.
ADRIEN TREUILLE: It’s allowed us to organize humanity in ways that were never before possible. And I think we’ve just scratched the surface.
DAVID POGUE: Adrien has always loved games.
ADRIEN TREUILLE: A game is very much like a science, and there’s rules and there’s physics that you have to obey. But it’s also, like, totally this art.
Okay, so the rule is: “everything you say is a rule.”
DAVID POGUE: He’s been inventing them since he was little.
ADRIEN TREUILLE: When I was 12, and I had appendicitis, I started inventing card games. Since then, it’s just like, you are bored, you invent a game. That’s really fun.
DAVID POGUE: He invents games everywhere, even at the local diner.
ADRIEN TREUILLE: …being like, “The sugars are the board.” And I’m like, “You’re the Tabasco sauce, I’m the salt. Our job is to get the pepper. You can only move like this.”
DAVID POGUE: And he thinks about games 24/7, from his sleeping to his waking hours.
ADRIEN TREUILLE: I always have this notebook with me. When you’re writing, you’re, like, taking stuff out of your brain, and sometimes you need just a little bit of extra space.
DAVID POGUE: Adrien’s first science videogame began when biochemists had a problem: to fight diseases, they needed help solving protein puzzles. Computers are terrible at visual puzzles, but humans are great at them.
So where could Adrien find a massive workforce? What if he could get the millions of people playing videogames to play a different kind of game?
ADRIEN TREUILLE: And that that was the beginning of Foldit. It’s a 3D puzzle.
DAVID POGUE: Instead of birds and bombs, this game would be about protein-folding. Proteins are molecules made up of long strands of amino acids. They’re the workhorses of our cells, the machinery that keeps our body running, and they fold in thousands of different ways.
ADRIEN TREUILLE: It’s like all the pieces of the protein lock together in this sort of puzzle, like Tetris.
DAVID POGUE: How they fold will change what they do in your body.
RHIJU DAS (Biochemist): Depending on how they fold, they can either form the fibers in your hair, or the motors that run your muscles,…
DAVID POGUE: A mis-folded protein can help H.I.V. replicate and cause diseases like cancer, but a well-folded one can help cure diseases.
In the lab, scientists can make the long chains of amino acids, but when it comes to folding them up properly, they have trouble. And that’s where people playing Foldit come in.
ADRIEN TREUILLE: Fundamentally, Foldit is a game where we use people to help us understand how to build these molecules, to build next generation cures.
DAVID POGUE: Adrien labored over how to get people to play, let alone understand, a game about protein-folding. So how’d he do it? He made proteins a toy.
ADRIEN TREUILLE: So, a game has rules, but a toy is just something you want to play with. Even if you didn’t know the rules, you still want to just wiggle it around and see how it works.
DAVID POGUE: Just like a Rubik’s Cube, you can twist and turn it.
ADRIEN TREUILLE: We see if they can intuit and fold a protein into that most stable shape, just by looking at it and by thinking about it and by playing with it.
DAVID POGUE: Adrien hoped the crowd could solve protein puzzles, but would the crowd show up? On May 8, 2008, they released Foldit to the world and waited.
ADRIEN TREUILLE: The servers crashed within, like, 24 hours. The public played it, and they cared about it, and they understood it, and it was one of the greatest feelings of my life.
DAVID POGUE: Foldit made history. And it isn’t Angry Birds. With over 300,000 players, this game advances science.
ASTRO TELLER (Google Inc.): It actually adds value to the world. Adrien has completely broken this mold. There are no elves in Foldit. There is no magic, there are no unicorns, and yet people love it.
DAVID POGUE: In 2011, Foldit players solved one of these puzzles in just three weeks: the riddle of a bad protein that helps H.I.V. reproduce. Identifying that structure brings us closer to designing better treatments.
So what kind of person plays a game about protein-folding?
Meet one of the top-ranked Foldit players in the world. He isn’t a scientist; he’s a ninth-grader. Like most 15-year-olds, Michael Tate loves playing around.
MICHAEL TATE (Foldit Player): I play Foldit as often as I can, and I stay on for hours and hours.
DAVID POGUE: To boost your score, you have to follow the rules of protein folding: make the protein compact, and avoid empty spaces.
MICHAEL TATE: These red things are voids that I have to fill in, and I can add rubber bands that bring the protein closer together. And if you make the most hydrogen bonds between everything, you get the highest score.
ADRIEN TREUILLE: We thought we’d have to hide the science behind this veneer of a game. And then it was almost like we’d punk’d them, you know? But it was the opposite.
MICHAEL TATE: Getting lectures from your teachers, I mean, that’s boring, but if the students are having fun playing a game, oh, my gosh.
DAVID POGUE: There are thousands more like Michael. From architects to historians and bankers to organic farmers, they are the crowd.
Adrien has always had an interest in crowds. It began early, as a boy growing up in New York City.
ADRIEN TREUILLE: I would stand on the windowsill, when I was a little kid, and just stare out my window.
DAVID POGUE: Watching the flow of people from his window changed his view of crowds.
ADRIEN TREUILLE: Here was the beating of a heart, made of people.
DAVID POGUE: The effect was so powerful that in grad school, he studied and simulated the patterns that crowds make.
ADRIEN TREUILLE: It really did inspire me to think that large groups of people can be coordinated into, sort of, a dance that could, very, very powerfully be harnessed for good.
DAVID POGUE: Adrien did that with Foldit, but could he do it again? Biochemist Rhiju Das needed help with another puzzle: ribonucleic acid, or R.N.A.
Once again, computers weren’t able to solve the visual R.N.A. folding puzzles. Rhiju thought people would be better and faster. And he knew exactly whom to call to assemble his workforce.
Like proteins, R.N.A.s begin in a long chain.
ADRIEN TREUILLE: …and then self-assemble into these beautiful, complex shapes, like, snowflake-like patterns.
DAVID POGUE: Again, their shape determines what they do in our bodies, from forming the genetic code of some viruses, to helping to create other molecules, like proteins.
So if we can figure out how they’re structured, we can fight diseases.
RHIJU DAS: Understanding how R.N.A.s work is critical for understanding life and for defeating diseases like AIDS and influenza.
DAVID POGUE: But with R.N.A., there was only one accurate way to tell if players assembled the puzzles properly: make the molecules in a real laboratory.
ADRIEN TREUILLE: We realized, “Oh, wow. That’s the game. When you hit Submit, we’re actually going to make the R.N.A., and we’re going to send you back the results.” We were like, “This is going to be so awesome. There is nothing else in the world like this.”
DAVID POGUE: Under Adrien’s guidance, his graduate student, Jee Lee, transformed their vision into EteRNA, a game that would be “Played by Humans. Scored by Nature.” People would be scored by how well their molecules folded in real life. In 2011, they launched EteRNA.
ADRIEN TREUILLE: It just came alive, as if someone had just flipped the switch and turned on New York City. We were just, like, “I think we maybe did it.”
DAVID POGUE: In the lab, they synthesized the molecules the players designed. They looked like stable shapes in the game, but would they fold like that in real life?
ADRIEN TREUILLE: And the results come back, and it’s basically nothing. What happens when you create a game, and you tell everyone, “We’re all going to do science together,” and then nothing comes out?
DAVID POGUE: None of the R.N.A.s folded into stable shapes, in the lab. Adrien worried EteRNA was a bust, but what he didn’t count on was the community.
EteRNA let the players see data on how their molecules had actually folded in the lab. The chat forums lit up.
RHIJU DAS: They’d all just start talking about data, data, data. They’d analyze it to pieces.
DAVID POGUE: Hundreds of players were discussing their mistakes and revising their strategies.
ADRIEN TREUILLE: It was like the very beginnings of science. It was almost like when alchemy was slowly turning into chemistry.
DAVID POGUE: Six months in, they plotted the players’ progress, and Adrien saw a change.
ADRIEN TREUILLE: And it was just, like, “Zhoom. The players have learned how to fold R.N.A.” And it just, like, sent chills down our spine.
DAVID POGUE: The players’ R.N.A. molecules were folding correctly.
RHIJU DAS: And I thought “Wow, that is real, that’s real science.” And that’s when I knew that it was going to work.
DAVID POGUE: The worst player design was better than the best computer design. Humans were better and faster.
ADRIEN TREUILLE: When you hire someone, you don’t know whether they’re going to be good or bad at the job. Well, we sort of hired the world, and the world turned out to be awesome.
DAVID POGUE: Adrien has only three grad students,…
ADRIEN TREUILLE: I wonder if we’re going to get better data.
DAVID POGUE: …but he has over 40,000 EteRNA players, who have discovered new rules for how R.N.A. folds.
ADRIEN TREUILLE: We simply have much more manpower than anyone else, and science is about manpower.
ASTRO TELLER: I think that it’s possible that what we’re seeing in EteRNA is the tip of an enormously exciting iceberg. I don’t know how large the largest problems that can be solved are, if we all contribute our own little piece.
DAVID POGUE: Adrien envisions a future where anyone and everyone can contribute to solving such huge problems, even 15-year olds, like Michael Tate.
ADRIEN TREUILLE: Maybe science is going to be like a team sport. And I don’t mean, like, a 30-person team. I mean, like, a 30,000-person team. Take these crowds and multiply them by human creativity. We’re dealing with a force way more powerful than anything we had before.