Open a book to read. Gaze at a painting. Listen to music.

For centuries, these private acts were at the heart of scholarship in the humanities, the cluster of academic disciplines that study the human condition.

But 160 years ago, scholars began to think of literature, at least, as a cultural artifact subject to quantitative interpretation. In the 19th century, vocabulary-counting schemes were used to investigate the authorship of St. Paul’s writings and the plays of Shakespeare.

Then came computers, and with them a growing desire to apply computational power to the humanities. Starting in 1949, an Italian Jesuit priest named Roberto Busa enlisted the aid of IBM computers to produce an index of the 11 million words of medieval Latin in writings by Thomas Aquinas and others. A flurry of interest in literary concordances followed, ushering in the first age of digital humanities.

But now we are in the age of Digital Humanities 2.0, according to authorities at a recent panel of the same name, held at the Barker Center on Feb. 10 and sponsored by the Mahindra Humanities Center at Harvard.

Future digital scholars will explore the meaning of a single work, figure, or period by layering text with images, audio, film, 3-D artifacts, markups, and other multimedia resources housed — often untouched — in archives around the world.

Emerging models of such scholarship represent a “rich moment,” said panel moderator Jeffrey Schnapp, a Harvard professor of Romance languages and literatures and the founder 11 years ago of the groundbreaking Stanford Humanities Lab.

Today, Schnapp directs the metaLAB (at) Harvard, a cross-University center for investigating new forms of digital scholarship. He called the center “a cluster of experiments” and “an invitation” to Harvard’s community of scholars — all of them poised at this new digital frontier.

MetaLAB is hosted by Harvard’s Berkman Center for Internet & Society, where Schnapp is a fellow. (He is also a visiting professor of architecture at the Harvard Graduate School of Design.)

Humanities 2.0 will offer up new “plausible genres for scholarly exchange,” said Schnapp, and they will likely share four features:

  • The “animation” of archives: to process, preserve, distribute, and link archival material in a way that recognizes “visualization as a core feature of humanities scholarship,” he said. “The linguistic will take a visual turn” in this new scholarship, said panelist Peter Lunenfeld, a design and media arts professor at the University of California, Los Angeles (UCLA).
  • “Artifactual knowledge”: ways to layer 3-D artifacts and other images into more traditional narrative forms. Archives have expanded their collections, yet access to these vast cultural repositories remains limited. Digital tools can “break up this logjam,” said Schnapp.
  • “Thick mapping”: adding geospatial layers to arts and humanities scholarship. MetaLAB is incubating Zeega, an open-source tool kit, developed in part at Harvard, that enables immersive multimedia projects. And panelist Todd Presner, a comparative literature professor at UCLA, introduced his 10-year project HyperCities, a way of presenting urban histories that layers old conceptions of place with new, interactive ones. “It’s a very, very rich way of thinking about place,” he said. “This is not ‘thin mapping’ anymore.”
  • “Literary genomics”: a way of using “vastly expanded data sets” to investigate literature and other cultural treasures, said Schnapp. (As reported in the journal Science last year, a team of Harvard researchers used a 500-billion-word data set from 5.2 million Google-digitized books to analyze word occurrences between 1500 and 2008.)
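The core of such “literary genomics” work is simple in principle: count how often a word appears in each year’s slice of a corpus, normalized by that year’s total word count. The sketch below illustrates the idea on a tiny hypothetical corpus; the function name, data, and record format are invented for illustration and bear no relation to the actual Google Books pipeline.

```python
from collections import defaultdict

def yearly_frequencies(records, target):
    """Relative frequency of `target` per year across (year, text) records.

    A toy illustration of word-occurrence counting over a dated corpus,
    not the method used in the Science study.
    """
    counts = defaultdict(int)   # occurrences of target, by year
    totals = defaultdict(int)   # total words, by year (for normalization)
    target = target.lower()
    for year, text in records:
        words = text.lower().split()
        totals[year] += len(words)
        counts[year] += sum(1 for w in words if w == target)
    # relative frequency = occurrences / total words in that year
    return {year: counts[year] / totals[year] for year in totals if totals[year]}

# hypothetical corpus: (year, text) pairs
corpus = [
    (1800, "the whale and the sea"),
    (1800, "a whale of a tale"),
    (1900, "the sea and the sky"),
]
freqs = yearly_frequencies(corpus, "whale")  # e.g. 1800 -> 2 hits / 10 words
```

At Google Books scale the same per-year tallies are precomputed into n-gram tables rather than recounted from raw text, but the quantity plotted is this ratio.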

Despite these new tools, digital humanities remains “a complement to traditional practices of scholarship,” said Schnapp, not a way to displace them.

But digital humanities will require rethinking what being an “author” means, panelists said. In a world known for scholars toiling in jealous solitude, the humanities may adopt a collaborative concept already common in the sciences: multiple authorship.

How do you give credit for a digital project that uses new software, peer-reviewed literature, oral histories, and a stew of other inputs? “It raises huge questions,” said Presner, who described one project that had 18 authors, 12,000 lines of code, and other creative layers.

Back when digital humanities meant using computers to count words, there was a “pleasure in explicit regimes” of quantitative scholarship that once belonged only to the sciences, said Johanna Drucker, an information studies professor at UCLA.

But in a realm that is now so versatile and visual, the digital humanities have to find ways to express the ambiguity at the heart of so much culture.

“Things that are subtle and complex,” said Drucker, “take longer to understand.”

Following the genomic road map