
AI evolution: From tool to partner


Barbara Grosz is working on making computers co-workers, not servants

Barbara Grosz, Higgins Professor of the Natural Sciences in the Division of Engineering and Applied Sciences and dean of science at the Radcliffe Institute for Advanced Study, has been working to develop collaborative human-computer interfaces: ‘We’re aiming to have computer systems be team players, acting collaboratively to help us accomplish our goals. For almost any task on which you’d like a computer to help, you’d rather have it be your partner, not an (unthinking) servant.’ (Staff photo by Jon Chase)

Computers have revolutionized daily life over the past two decades, but to Barbara Grosz, they haven’t come near to reaching their full potential.

Today, computers are tools. Advanced tools, yes. Flexible tools, often. Super-fast tools, OK. But essentially they do the work of typewriters, calculators, and electronic date books. Although they’re capable of beeping when a deadline approaches, they are not able to negotiate a change in that deadline if a conflict occurs.

But it doesn’t have to be this way, Grosz says.

Grosz, Higgins Professor of the Natural Sciences in the Division of Engineering and Applied Sciences and dean of science at the Radcliffe Institute for Advanced Study, is one of the world’s foremost experts in getting computers to behave more intelligently.

Her work is in the area of “artificial intelligence.” Although many of the tasks she endeavors to get computers to perform are relatively mundane, doing them correctly often requires sophisticated reasoning.

“We’re aiming to have computer systems be team players, acting collaboratively to help us accomplish our goals,” Grosz said during a recent interview in her Maxwell-Dworkin office. “For almost any task on which you’d like a computer to help, you’d rather have it be your partner, not an (unthinking) servant.”

Crash, reboot, and insert file

Grosz says an everyday example is the usual response to problems in the use of a computer’s “insert file” command — perhaps to put a photo into a larger document. Today, if the computer can’t find the file you request, it sends up a dialog box: “File Not Found.” It even makes you click “OK” before you can continue with your work — that’s if it doesn’t crash your computer and force you to reboot, losing whatever you were working on. Then, if you want it to try again, you have to tell it which drive, folder, and sub-folder the file is in.

Grosz would like the computer to respond differently: “File not found, searching other directories” — and without putting a dialog box in the middle of your work, either. That way you know the file isn’t where you thought it was, but you can also rest assured that the computer is looking in other places. That new dialog box might have a “cancel” button on it, in case you didn’t want it to keep looking — if, for instance, you remember the photo was on another computer.

Wouldn’t that be helpful, like an assistant?

Though the difference between the two responses may appear simple, the change in software is profound. Instead of performing a directed task, the software now has to make a series of judgments: Can the file be found elsewhere? Should it even begin looking? Where should it look? How should it order the choices of possible locations?
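A rough sense of what such a judgment-making routine might look like can be sketched in a few lines of code. The sketch below is purely illustrative and not drawn from any of Grosz's systems; the function name, the directory arguments, and the idea of ranking fallback locations by recent use are all assumptions made for the example.

```python
from pathlib import Path

# Hypothetical sketch of a more collaborative "insert file" lookup: instead of
# stopping at "File Not Found," it keeps working toward the user's goal by
# searching other plausible locations. All names here are illustrative.

def find_file(filename: str, expected_dir: Path, fallback_dirs: list[Path]) -> Path | None:
    """Return a path to `filename`, trying the expected directory first."""
    candidate = expected_dir / filename
    if candidate.exists():
        return candidate

    # The "judgments" described above: should the search continue, and where?
    # Here fallback directories are simply ranked by how recently they changed.
    ranked = sorted((d for d in fallback_dirs if d.exists()),
                    key=lambda d: d.stat().st_mtime, reverse=True)
    for directory in ranked:
        for match in directory.rglob(filename):
            return match  # a real interface would report progress and offer "cancel"
    return None
```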

Real-world examples of such a collaborative system exist, albeit only as experimental laboratory systems. For instance, Grosz and colleague Stuart Shieber, Gordon McKay Professor of Computer Science, along with members of their research groups, have developed Writer’s Aid, a program designed to ease the writing of research papers.

One tedious chore in this process is the insertion of references, in proper format, for information cited in the paper. While the references are necessary, few would object to delegating the search for the right journal, author, paper, and page for the citation to an assistant.

What if that assistant were a computer?

To use Writer’s Aid, the author types in keywords and authors and the computer searches relevant bibliographic sources to come up with a list of likely references. The author selects the correct one for the article and Writer’s Aid inserts the appropriate citation and bibliographic entry.
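That workflow can be hinted at with a small, hypothetical sketch. None of this is the actual Writer's Aid code; the Reference fields, function names, and matching rule are invented here for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch only, not the actual Writer's Aid code. It mimics the
# workflow described above: query bibliographic sources with keywords and an
# author name, let the writer pick a match, then emit a citation entry.

@dataclass
class Reference:
    authors: str
    title: str
    journal: str
    year: int
    pages: str

def search_references(keywords: list[str], author: str,
                      sources: list[list[Reference]]) -> list[Reference]:
    """Gather likely references from every bibliographic source."""
    hits = []
    for source in sources:
        for ref in source:
            text = f"{ref.title} {ref.journal}".lower()
            if (author.lower() in ref.authors.lower()
                    and any(kw.lower() in text for kw in keywords)):
                hits.append(ref)
    return hits

def format_citation(ref: Reference) -> str:
    """Produce a simple bibliography entry for the reference the writer selects."""
    return f"{ref.authors} ({ref.year}). {ref.title}. {ref.journal}, {ref.pages}."
```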

Prioritizing and trading off

One of the problems faced in different facets of Grosz’s work is getting computers to weigh tradeoffs, a necessary capability if computers are to be team players. Weighing tradeoffs is a basic human ability, something we do often without thinking; it is very difficult for computers, however. In her research on the design of collaborative systems, Grosz is trying to get computers to weigh options for different activities in ways that mesh with users’ priorities.

The difficulty is easily illustrated with an example people face regularly: Deciding what to do when a new opportunity conflicts with an already scheduled activity, for instance, a meeting with several others. If the world were black and white, the task might be easy. If the meeting were the most important thing to all the people involved, then no one would ever cancel. Real life, however, is rarely that simple. If the meeting involved several doctors, a sudden illness or auto accident could force one to miss the meeting to treat a patient. It’s obvious — to us anyway — that human lives outweigh attending a meeting.

On the other hand, if a doctor were asked to go see a movie on the night of the meeting, he or she would probably quickly turn the movie down to attend the meeting. Business before pleasure. Obvious, right?

But what about other priorities that we manage to balance every day? A child’s performance? A class? And what about how one doctor’s decisions affect the other meeting participants and their schedules? We humans are good at weighing so many differing priorities in life, often without even knowing what we’re doing. The trick is to make computers do this kind of decision-making in ways compatible with our priorities.

“The problem is in the unwashed middle where you have to trade things off,” Grosz said. “It is fascinating to try to understand what it takes to be an individual as part of a team.”

To solve this problem, Grosz is experimenting with what she calls “social commitment policies.” Essentially, these are rules that govern the way computers make decisions about competing opportunities. The policies are imposed by the group on its members: an agent that misses a meeting might, by analogy, pay a fine or draw a less attractive assignment. In addition to these policies, Grosz is experimenting with internally generated criteria, such as the analog of a social conscience, so that, given the chance, systems may sometimes prefer doing what’s good for the group rather than what’s immediately best for them individually.
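One way to picture such a policy is as a simple cost comparison. The sketch below is a loose illustration of the idea rather than Grosz's actual formulation; the numeric values, the penalty term, and the "conscience weight" are all invented for the example.

```python
# Loose illustration of a "social commitment policy": an agent keeps a group
# commitment unless a new opportunity outweighs the commitment's value plus a
# group-imposed penalty and a "social conscience" weight on harm to the team.
# The values and weighting here are invented, not taken from Grosz's systems.

def should_defect(new_opportunity_value: float,
                  commitment_value: float,
                  group_penalty: float,
                  harm_to_group: float,
                  conscience_weight: float = 0.5) -> bool:
    """Return True if the agent abandons the scheduled group activity."""
    cost_of_defecting = (commitment_value + group_penalty
                         + conscience_weight * harm_to_group)
    return new_opportunity_value > cost_of_defecting

# A medical emergency (very high value) justifies missing the meeting; a movie does not.
print(should_defect(100.0, commitment_value=10.0, group_penalty=5.0, harm_to_group=8.0))  # True
print(should_defect(3.0, commitment_value=10.0, group_penalty=5.0, harm_to_group=8.0))    # False
```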

Language processing

Grosz’s work on collaboration builds on her early work on language processing. Throughout the 1970s and 1980s, Grosz worked on ways to get computers to carry on dialogues in human language, as opposed to a programming language. The problem, as with collaboration, is not in the black-and-white meanings of words, but in the subtle shadings of intonation and context.

She gives the innocuous sentence, “Incidentally, Jane swims every day,” as an example. You and I know, from looking at the sentence, that it was probably an aside in a conversation about something else. A computer looking at the sentence would stumble if it didn’t recognize that “incidentally” is a break in the previous conversation, not a reference to the sentence it’s in.

“It’s not incidental that Jane swims every day; the ‘incidentally’ refers to the larger discourse, not to the subject of the sentence,” Grosz said.

Grosz’s enthusiasm for her subject is still evident even after years in the field. She goes on, describing different difficulties and illuminating them with examples of the subtleties and nuances that roll off our tongues every day but trip up computers. For instance, we’re good at figuring out the probable intentions behind what someone says, or why they’re talking, not just what they’re literally saying.

“If you say to me, ‘It’s warm in here,’ what you want is for me to lower the temperature,” Grosz said, adding that a computer, on the other hand, might thank you for the offered information and just say “fact recorded.”
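The gap between recording a fact and recognizing a request can be caricatured in a few lines. The toy rules below are invented for the article's two examples and are nothing like a real dialogue system; they only illustrate the distinction Grosz is drawing.

```python
# Toy rules invented for the two examples above; nothing like a real dialogue
# system. The point is only the difference between recording what was literally
# said and acting on what the speaker probably wants.

def interpret(utterance: str) -> str:
    """Map a literal statement to a likely underlying intent."""
    text = utterance.lower()
    if "warm in here" in text:
        return "request: lower the temperature"
    if text.startswith("incidentally"):
        return "aside: relates to the larger discourse, not the current topic"
    return "fact recorded"  # the unhelpful literal response the article describes

print(interpret("It's warm in here"))                    # request: lower the temperature
print(interpret("Incidentally, Jane swims every day"))   # aside: relates to the larger discourse
```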

Grosz has been working with Shieber to develop collaborative human-computer interfaces. Shieber studies communication with humans through natural languages, with computers through programming languages, and with both through graphical languages. He says Grosz has performed pioneering work in natural-language processing as well as in her work on collaboration.

“Professor Barbara Grosz is one of the most prominent researchers in the field of artificial intelligence, internationally known for her scientific contributions to the fields of natural-language processing and multi-agent collaboration,” Shieber said. “(Her current work) is highly interdisciplinary, drawing on theories and results from economics, philosophy, and core computer science and will ultimately have an extremely significant impact on a range of distributed applications.”

While Grosz says the scientific groundwork already in place would permit changes in how software is designed, she is cautious about predicting the future.

“The history of computing is rife with the best ideas losing in the marketplace,” Grosz said.

Grosz decries the woeful state of current software design, saying that if cars were as poorly designed as much of today’s software, there would be a public uprising. But that’s what people have come to expect from computers, so they put up with having to reboot balky systems or having to search endlessly for lost files.

“Being able to model collaborative behavior and design collaborative software systems will cause a fundamental change in the systems that are available. We’re still asking people to adapt to computer systems (rather than the reverse),” Grosz said. “Writing with a computer is a lot easier than writing with a pen, but it’s still not as good as writing with a research assistant. I’d like to move toward the computer acting more like a capable assistant than a pen.”