An idea that changed the world

Wintersession panel explains the pivotal breakthrough from a lecture 100 years ago

The Russian Revolution of 1917 became known as the “Ten Days That Shook the World,” after the title of a book by foreign correspondent John Reed, Class of 1910.

But how about the one day in Russia that shook the world, and still does? That was Jan. 23, 1913, a century ago this week. Mathematician Andrey A. Markov delivered a lecture that day to the Imperial Academy of Sciences in St. Petersburg on a computational technique now called the Markov chain.

Little noticed in its day, his idea for modeling probability is fundamental to all of present-day science, statistics, and scientific computing. Any attempt to simulate probable events based on vast amounts of data — the weather, a Google search, the behavior of liquids — relies on Markov’s idea.

His lecture went on to engender a series of concepts, called Markov chains and Markov processes, that calculate likely outcomes in complex systems. His technique is still evolving and expanding. “This is a growth industry,” said Boston-area science writer Brian Hayes. “You really can’t turn around in the sciences without running into some kind of Markov process.”

Hayes writes the “Computing Science” column for American Scientist magazine and delivered one of three lectures about Markov on Wednesday. The session at the Maxwell-Dworkin building, “100 Years of Markov Chains,” was the first of three symposia this week in ComputeFest 2013, sponsored by the Institute for Applied Computational Science at the Harvard School of Engineering and Applied Sciences (SEAS).

There are close to 200 lectures, classes, performances, and workshops during Harvard’s Wintersession this January, a quickly evolving tradition of freewheeling intellectual stimulation between semesters. But only one event celebrated the centenary of a landmark idea.

Before Markov, said Hayes, the theory of probability involved observing a series of events that were independent of one another. The classic example is flipping a coin, an activity that makes probability easy to calculate.

Markov added the idea of interdependence to probability, the notion that what happens next is linked to what is happening now. That creates what Hayes called a “chain of linked events,” and not just a series of independent acts. The world, Markov posited, is not just a series of random events. It is a complex thing, and mathematics can help reveal its hidden interconnectedness and the likelihood of what comes next. “Not everything,” Hayes told an audience of 100, “works the way coin flipping does.”
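
To make the contrast concrete, here is a minimal Python sketch of a two-state Markov chain. The states and probabilities are invented for illustration; the point is that, unlike a coin flip, the probability of the next state depends on the current one.

```python
import random

# Transition probabilities: P(next state | current state).
# Unlike a fair coin, "sunny" weather tends to persist.
TRANSITIONS = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def simulate(start, steps):
    """Walk the chain: each step depends only on the current state."""
    state, path = start, [start]
    for _ in range(steps):
        probs = TRANSITIONS[state]
        state = random.choices(list(probs), weights=list(probs.values()))[0]
        path.append(state)
    return path

print(simulate("sunny", 10))  # e.g. a long sunny run, punctuated by rain
```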

In an article on Markov that will appear in the next issue of American Scientist, Hayes contrasts the probabilistic simplicity of coin flipping with the complexity of the board game “Monopoly.” Moves rely on a roll of the dice, but where the player ends up — Park Place? Jail? — also depends on where the player begins. Suddenly, probability (where you end up) is linked to a present state (where you start). Events are linked, not independent. That’s a Markov chain.
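
That dependence is easy to see in a simplified simulation. The sketch below (which ignores Jail, Chance cards, and doubles, so it is only an approximation of the real game) builds the transition matrix for a 40-square circular board: each row gives the landing probabilities from one starting square.

```python
import numpy as np

N = 40  # squares on the board
# Probability of each two-dice total (2 through 12).
dice = {s: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == s) / 36
        for s in range(2, 13)}

# Transition matrix: P[i, j] = chance of moving from square i to square j.
P = np.zeros((N, N))
for i in range(N):
    for total, prob in dice.items():
        P[i, (i + total) % N] += prob

# Where you land depends on where you start: the defining Markov property.
print(P[0].round(3))  # landing probabilities starting from "Go"
```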

A “Monopoly” board has 40 possible “states,” the same as the number of squares. But Markov chains now are vastly larger. Google’s PageRank search algorithm, for instance, has as many states as there are pages on the Web, perhaps 40 billion.
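
PageRank can be read as computing the stationary distribution of exactly such a chain: a “random surfer” follows links, with occasional random jumps so the chain never gets stuck. A toy sketch on a four-page web (the link structure and damping factor here are standard illustrative choices, not Google’s actual data):

```python
import numpy as np

# Toy web: page i links to the pages listed (invented for illustration).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85  # number of pages, damping factor

# Column-stochastic matrix of the random-surfer Markov chain.
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1 / len(outs)

# Power iteration: apply the chain repeatedly until the ranks stabilize.
rank = np.full(n, 1 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

print(rank.round(3))  # the stationary distribution is the PageRank score
```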

Markov was abrasive, confrontational, and iconoclastic; one contemporary called him “Andrew the Furious.” He was also inclined to look at things in purely mathematical terms. But in his 1913 lecture, he introduced his technique by analyzing the frequency of vowels and consonants in a work of literature. (Markov used the first 20,000 letters of Alexander Pushkin’s 1833 verse novel “Eugene Onegin,” a work that almost every Russian knew.)

Such numerical analysis “is about the most primitive and superficial thing you can do to a poem,” said Hayes. But it proved what Markov wanted: that letters in language are interdependent, and that over time they converge into stable patterns. Stable patterns like these lie at the heart of what modern scientists want: a simulation of reality, whether at the level of a single cell or of the entire Web.
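
Markov’s experiment is easy to reproduce in miniature. The sketch below uses a short English sample rather than Pushkin’s Russian, so the numbers are only illustrative, but the method is his: classify each letter as vowel or consonant and tabulate how often each type follows the other.

```python
from collections import Counter

TEXT = "whether on the level of a cell or on the level of the entire web"
VOWELS = set("aeiou")

# Classify each letter as vowel (V) or consonant (C), as Markov did.
seq = ["V" if c in VOWELS else "C" for c in TEXT if c.isalpha()]

# Count transitions between successive letters.
pairs = Counter(zip(seq, seq[1:]))
for prev in "VC":
    total = sum(n for (a, _), n in pairs.items() if a == prev)
    for nxt in "VC":
        print(f"P({nxt} | {prev}) = {pairs[prev, nxt] / total:.2f}")
```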

Markov’s work is a core abstraction needed for modeling probability today, said Harvard’s Ryan Prescott Adams in a second lecture, and in turn that kind of modeling is “fundamentally important for reasoning about the world. The world is a big, noisy place,” he said, and “the calculus of probabilities provides us with the required tools for reasoning under uncertainty.”

So probability is a lens for inferring hard-to-see realities in complex natural systems. Adams, an assistant professor of computer science who leads the Harvard Intelligent Probabilistic Systems group, offered a few examples from his research collaborations with colleagues in other disciplines: cell deformities that may signal cancer; simulations that predict photovoltaic efficiency; and models that predict mortality among patients in intensive care units. The idea behind this Markov-style modeling, he said, is to “connect noisy and incomplete observations with the hidden quantities we wish to discover.”
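
A standard embodiment of that idea is the hidden Markov model, in which an unobserved state evolves as a Markov chain and only noisy measurements of it are visible. Here is a minimal filtering sketch, with states, probabilities, and observations invented for illustration:

```python
import numpy as np

# The hidden state evolves as a Markov chain; we see only noisy readings.
T = np.array([[0.9, 0.1],    # transition: P(next hidden state | current)
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],    # emission: P(observation | hidden state)
              [0.3, 0.7]])

def forward_filter(observations, prior=(0.5, 0.5)):
    """Update beliefs about the hidden state after each noisy observation."""
    belief = np.array(prior)
    for obs in observations:
        belief = belief @ T             # predict the next hidden state
        belief = belief * E[:, obs]     # weight by the observation likelihood
        belief = belief / belief.sum()  # renormalize to a distribution
    return belief

print(forward_filter([0, 0, 1, 1, 1]).round(3))
```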

Science owes a lot to Markov, said Pavlos Protopapas, who rounded out the event with insights from a practitioner. Protopapas is a research scientist at the Harvard-Smithsonian Center for Astrophysics. Like Adams, he teaches a course touching on Markov chains. He examined Markov influences in astronomy, biology, cosmology, and meteorology.

Rosalind Reid, executive director of the Institute for Applied Computational Science, had the last word. “Cranky as he was,” she said, “Markov would be very pleased to hear all this.”