Through networks such as edX, the education platform co-founded by Harvard University and the Massachusetts Institute of Technology (MIT), MOOCs (massive open online courses) are now mainstream, and a lot of learning has gone digital.

Whether fully online or in hybrid fashion (a residential course that uses a learning-management system for basic administrative work or for more sophisticated tasks such as assessments and discussion boards), faculty and learners are working in a new kind of classroom.

But the field is young, and the next-gen classroom often doubles as a laboratory. With millions of global learners and teachers taking advantage of a vast array of free or low-cost online courses, researchers are studying the resulting information trail, from mouse clicks to discussion posts, to understand what, how, and even whether students are learning.

Andrew Ho, a professor at the Harvard Graduate School of Education and chair of the HarvardX research committee, and Isaac Chuang, MIT professor of physics, professor of electrical engineering, and senior associate dean of digital learning, have been at the forefront of this interdisciplinary field, having co-authored several benchmark research studies on MOOC learners. For the start of the academic year, they answered questions in tandem about the opportunities and challenges in the science of learning.

GAZETTE: Online and interactive forms of learning are not new, and neither is educational research. And yet there is a sense with the rise of MOOCs that something has changed. Why is that? How does such research differ from what you would or could do in classroom settings?

HO AND CHUANG: The MOOC classroom is like no physical classroom on Earth. It has lowered boundaries for learning in space and time. As a result, we don’t just have many learners, but incredible diversity and asynchronicity. And the data we have about their behavior are richer than the data from most conventional classrooms. The diversity of learner backgrounds and motivations is a challenge for analysis, because an instructor’s and an institution’s goals may not and need not align with the learner’s goals. In our own research, rather than start by assuming that MOOC learners are a monolithic body, we start by asking: Who are each of you, and why are each of you here?

GAZETTE: Looking beyond the learners, what is different about how learning happens in a digital setting? For example, what is the state of novel (and sometimes controversial) features such as auto-grading and peer assessment?

HO AND CHUANG: Auto-grading of problems is a central feature of MITx and HarvardX courses on edX, for a wide variety of problem types, including not just multiple-choice questions but also sophisticated mathematical equations, labeling of images, and computer programming code. Beyond accurate grading, many assessment problems are also designed to encourage feedback and learning. Machine learning has been helpful here, for example in analyzing patterns in common responses to generate automatic hints and guidance customized to individual learners. Early on, MOOCs also experimented with machine learning for grading free-form text essays, but peer grading has largely supplanted it, in recognition that one of the best ways to learn is by teaching others, including your peers.
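To make the idea concrete, here is a minimal sketch of numeric auto-grading with tolerance checking plus canned hints keyed on frequently seen wrong answers. The problem values and hint text are hypothetical illustrations of the pattern described above, not actual edX grader code.

```python
import math

def grade_numeric(submitted: float, correct: float, rel_tol: float = 1e-3) -> bool:
    """Accept an answer within a relative tolerance of the correct value."""
    return math.isclose(submitted, correct, rel_tol=rel_tol)

# Hints keyed on common wrong answers (e.g., a sign error, a missing factor),
# mimicking how patterns in common responses can drive targeted feedback.
# These mappings are invented for illustration.
COMMON_MISTAKE_HINTS = {
    -9.8: "Check the sign convention for acceleration.",
    4.9: "Did you forget a factor of 2 somewhere in your derivation?",
}

def feedback(submitted: float, correct: float) -> str:
    """Return 'correct', a targeted hint, or a generic retry message."""
    if grade_numeric(submitted, correct):
        return "correct"
    return COMMON_MISTAKE_HINTS.get(submitted, "incorrect; try again")
```

In a real platform the hint table would be learned from thousands of prior responses rather than hand-written, but the feedback loop has the same shape: grade, match the error pattern, respond immediately.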

GAZETTE: Beyond the overzealous claims like “robots are taking over teaching,” what are some of the risks with online assessment and open online courses more generally? You both recently co-authored a research report about new ways that students can cheat.

HO AND CHUANG: There are only risks if there are stakes. What we’ve seen is that some students take MOOC certifications very seriously, so seriously that they are willing to cheat to obtain them. Our research on what we call CAMEO (copying answers using multiple existences online) will inform changes in certificate-granting policies (detection) as well as in the design of courses and the platform (prevention). For example, honor codes may become more prominently displayed and enforced. Or virtual proctoring of exams may become more widespread. And courses may use more randomization, presenting individual students with different problems. The challenge is to balance the cost and design of preventive measures with providing opportunities to learn. Many techniques that prevent cheating also restrict learning opportunities.
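As a rough illustration of the detection side, the sketch below flags account pairs where one account repeatedly submits a correct answer shortly before a second account submits its own correct answer to the same item, the basic CAMEO signature of a "harvester" feeding a certificate-earning account. The event format, thresholds, and scoring are simplified assumptions for this sketch, not the published detection method.

```python
from collections import defaultdict
from itertools import combinations

def suspicious_pairs(events, max_gap=300, min_items=3):
    """Flag (harvester, master) account pairs from submission logs.

    events: iterable of (user, item, timestamp_seconds, is_correct) tuples.
    A pair is flagged when, on at least min_items items, the first account's
    first correct submission precedes the second's by at most max_gap seconds.
    """
    # Record each user's first correct submission time per item.
    first_correct = {}
    for user, item, ts, correct in sorted(events, key=lambda e: e[2]):
        if correct and (user, item) not in first_correct:
            first_correct[(user, item)] = ts

    # Group submission times by item, then count close-in-time orderings.
    by_item = defaultdict(dict)
    for (user, item), ts in first_correct.items():
        by_item[item][user] = ts

    counts = defaultdict(int)
    for users in by_item.values():
        for a, b in combinations(users, 2):
            if 0 < users[b] - users[a] <= max_gap:
                counts[(a, b)] += 1  # a answered first: a is the candidate harvester
            elif 0 < users[a] - users[b] <= max_gap:
                counts[(b, a)] += 1
    return {pair for pair, n in counts.items() if n >= min_items}
```

A production screen would also weight how improbably fast the second account answers, since innocent study partners can produce occasional coincidences; the point here is only the shape of the time-ordered pairing signal.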

GAZETTE: On the positive side, with digital education platforms there are more ways for faculty and learners to chart progress or do analysis through what are called “learning dashboards.” How can such dashboards best be used?

HO AND CHUANG: Most dashboards assume that students are there and share specific intentions to learn. This is typically a very small subset of the MOOC population. The challenge is to design learning tasks that are perceived to be enjoyable and valuable, and make feedback an integrated part of that experience. Ideally, we wouldn’t have to click to get to a dashboard. We would know where we were and what we needed to do by virtue of immediate feedback through the learning process.

GAZETTE: And yet, while these new tools can be exciting, not all faculty and teachers are learning experts or data miners. What are some easy ways of bringing digital feedback into the college classroom?

HO AND CHUANG: Start by using data to answer questions that instructors and/or students care about. Are students paying attention? Which students do I need to worry about? How am I doing? What resources have helped similar students? We need to make answers to these questions accurate and visible. One of our hopes is that online learning platforms like edX may make it easier for teachers to employ well-designed content driven by such answers, in support of their own classroom teaching.