Even proponents of educational technology admit that a lot of software sold to schools isn’t very good. But they often highlight the promise of so-called “adaptive learning” software, in which complex algorithms react to how a student answers questions and tailor instruction to each student. The computer recommends different lessons to different students, based upon what they already know and what they still need to work on.
Wonderful in theory, but does it work in practice?
The Bill & Melinda Gates Foundation sought to find out, and gave money to 14 colleges and universities to test some of the most popular “adaptive learning” software in the marketplace, including products from a Pearson-Knewton joint venture, from a unit of McGraw-Hill Education called ALEKS and from the Open Learning Initiative. Most of the universities combined the software with human instruction, but a few courses were delivered entirely online. Almost 20,000 college students and 300 instructors participated in the experiment over the course of three terms between 2013 and 2015. It’s probably the largest and most rigorous study of adaptive learning to date. Then Gates hired SRI International, a nonprofit research institute, to analyze the data. (The Gates Foundation is among the funders of the Hechinger Report.)
What SRI found was sobering. In most cases, students didn’t get higher grades from using adaptive-learning software, nor were they more likely to pass than students in a traditional face-to-face class. In some courses the researchers found that students were learning more from adaptive-learning software, but even in those cases, the positive impact tended to be “modest.”
“I wouldn’t characterize our report as cynical, just cautious,” said Barbara Means, director of the Center for Technology in Learning at SRI International and one of three authors of the report.
Although the study was conducted exclusively at colleges and universities, Means said she suspects researchers would find similar results with adaptive software used at elementary, middle and high schools.
Means emphasized that it was an analysis of the technology available back in 2013 and that better products have come to market since. “It shouldn’t be regarded as though this is the last word. It’s just a very early snapshot,” Means added.
Still, two important lessons emerged from the report, which may continue to apply even as the software improves.
1. The software in and of itself isn’t a magical teacher
“Every piece of learning software I’ve ever studied gets positive effects in some places and not others,” said Means. “When you try to understand why that is, you find out that students and instructors used it in very different ways.”
The software is more potent when instructors echo its language during face-to-face instruction. It also matters when teachers look at the data the software is generating, and spend class time reinforcing ideas that the data show students found troublesome. And on a most basic level, students need incentives to use the software. Sometimes the instructor just says, “Go use it,” but doesn’t monitor whether students actually log in. Not surprisingly, usage is low or sporadic. “Sometimes instructors give students the impression that what they do in the courseware doesn’t matter,” said Means.
The research also highlighted that the technology was more effective when the professor or the university completely redesigned the course around it. One example is flipping the classroom, where lectures are delivered online and the entire classroom time is spent in smaller groups with instructors who can review difficult problems or conduct a Socratic dialogue.
Another example is using the technology to let students skip some prerequisite hurdles. Students still had to learn the material, but it could be taught online, by the adaptive-learning courseware, filling in holes while the student was taking a more advanced class. That can help students graduate within four years.
“We can’t expect all the power to be in a piece of software. Because we know it’s not,” said Means.
2. Universities aren’t monitoring whether the technology they’re using is working
In conducting the study, Means frequently found that colleges and universities weren’t prepared to measure student learning in a way that would stand up to academic scrutiny. To measure how much students are learning, you need to know what students knew before they started a course. You can’t just compare student grades in an adaptive-learning class with those in a traditional class because you might have stronger students in one of them. It was a particular problem to compare different semesters, because students who fail an introductory course in the fall often retake it in the spring, and the spring classes were filled with students who struggle more.
A lot of the data collected in the study couldn’t be analyzed because it was hard to make apples-to-apples comparisons.
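The comparison problem Means describes can be made concrete with a small simulation. This is an illustrative sketch, not data from the SRI study: two hypothetical course sections learn exactly the same amount, but because one section starts with better-prepared students, a naive comparison of final grades makes it look more effective. Only a pretest-adjusted comparison (gain scores) recovers the truth.

```python
# Illustrative sketch (simulated data, not from the SRI study): why comparing
# raw final grades across sections misleads when the sections start with
# different levels of preparation.
import random
from statistics import mean

random.seed(0)

def simulate_section(prior_mean, n=1000, true_gain=10):
    """Each student's final score is prior knowledge plus the same gain."""
    priors = [random.gauss(prior_mean, 5) for _ in range(n)]
    finals = [p + true_gain for p in priors]
    return priors, finals

# The fall section draws stronger students than the spring retake section,
# but both groups learn exactly the same amount (10 points).
fall_prior, fall_final = simulate_section(prior_mean=70)
spring_prior, spring_final = simulate_section(prior_mean=60)

# Naive comparison of final grades: fall looks about 10 points "better,"
# an artifact of who enrolled, not of how the course was taught.
naive_diff = mean(fall_final) - mean(spring_final)

# Pretest-adjusted comparison (gain scores): the spurious gap vanishes.
gain_diff = (mean(fall_final) - mean(fall_prior)) - \
            (mean(spring_final) - mean(spring_prior))

print(f"naive difference in final grades: {naive_diff:.1f}")  # roughly 10
print(f"difference in learning gains:     {gain_diff:.1f}")   # roughly 0
```

This is why, without pretest data, much of what the universities collected couldn’t support apples-to-apples conclusions.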
“I think education institutions making major changes in the instruction, such as a reliance on adaptive courseware, have a responsibility to be monitoring the effectiveness of what they’re doing,” Means said. “And then try to improve it, in a kind of continuous improvement framework that you would see in some of the leading companies in any field.
“They don’t really know if what they’re doing is a change for the better, or not,” she said. “Given the cost of higher education today, which we all know a lot about, students and the public really have a right to expect this kind of attention to the quality of the product.”
That advice isn’t entirely disinterested: SRI is also in the business of selling analytical tools to universities. Still, if universities started tracking student learning themselves, it might eliminate the need for one-off “snapshot” reports like this one, which quickly become obsolete.