When he was just 4 years old, Jelani Nelson got his first taste of what computers could do, and it came in the form of an animated plumber named Mario.
For his birthday that year, Nelson’s parents gave him a Nintendo Entertainment System, and he quickly immersed himself in the world of video games. By the time he was 10, Nelson had moved on to PC and online gaming, and it was there that he discovered — nearly by accident — the code that makes the online world work.
“I got into computers through gaming, and I had a computer at home with an internet connection,” he said. “I remember at some point I right-clicked on a webpage on my browser, and saw that it said ‘view source.’ That was how I learned HTML was a thing.”
That discovery, Nelson said, not only set him on the road to teaching himself to code — first in HTML and later in other, more complex languages — but in some ways also put him on the path that eventually led to Harvard.
These days, Nelson is the John L. Loeb Associate Professor of Engineering and Applied Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences, where his work is focused on developing new algorithms to make computer systems work more efficiently. At the end of this academic year, he will leave Harvard to join the faculty of the University of California, Berkeley.
“I’m in the theory of computation group here, and broadly what that means is modeling computation mathematically and proving theorems related to those models,” Nelson said. “So we may want to prove that a particular problem cannot be solved, or that any method that solves a problem has to use at least this much time or memory. Proving that there are methods that are memory- or time-efficient — that’s generally called algorithms.”
Streaming and Sketching
Generally speaking, Nelson’s work centers on two types of algorithms, streaming and sketching, both of which are designed to use very little memory.
A basic way to understand streaming algorithms, Nelson said, is to imagine someone reading you a list of numbers and then asking for their sum.
“If I ask you at the end to tell me the sum of all the numbers, you can do that by keeping a running sum in your head, but you don’t need to remember every number,” he said. “But then if I ask you to tell me the fifth number, you won’t remember it, because you only kept this running tally. You only used just enough memory to answer a specific type of query.
“That’s a simple example, but there are other situations that are not so simple, where it turns out there are very memory-efficient algorithms that don’t need to remember every item, but can answer nontrivial queries about the past,” he added. “For example, if I’m Amazon and I want to know what were the popular items people bought yesterday between 7 and 8 p.m., I want to be able to answer that without my data structure actually remembering the record of every sale during that time.”
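The running-sum example Nelson describes can be sketched in a few lines of Python. The code below is purely illustrative (the function and variable names are not from his work); the point is that memory use stays constant because each number is folded into a single total and then forgotten.

```python
def streaming_sum(number_stream):
    """Consume a stream of numbers one at a time, keeping only a running total.

    Memory use is constant: once an item is added to the total it is forgotten,
    so an individual item (say, the fifth number) cannot be recovered later.
    Only the sum can be reported.
    """
    total = 0
    for x in number_stream:
        total += x  # update the single value we keep in "memory"
    return total


# Example: sum a stream without ever storing the whole list
print(streaming_sum(iter([3, 1, 4, 1, 5, 9, 2, 6])))  # -> 31
```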
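The Amazon scenario in the quote above is an instance of the "frequent items" (heavy hitters) problem. One classic small-memory approach is the Misra-Gries summary, sketched below as an illustration of the general idea rather than as Nelson's own method; the purchase data and the parameter k are hypothetical. With k counters, any item appearing more than n/(k+1) times in a stream of n records is guaranteed to survive in the summary, even though the stream itself is never stored.

```python
def misra_gries(stream, k):
    """Misra-Gries frequent-items summary using at most k counters.

    Items occurring more than n/(k+1) times in a stream of n records are
    guaranteed to remain in the summary; reported counts may undercount
    the true frequencies by at most n/(k+1).
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # No room for a new item: decrement every counter and
            # drop any that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters


# Example: candidate "popular items" from a stream of purchases
purchases = ["mug", "lamp", "mug", "book", "mug", "lamp", "pen", "mug"]
print(misra_gries(iter(purchases), k=2))  # -> {'mug': 2}
```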