Each fall, I teach a freshman history seminar called “Why College?”
Fifteen new students crowd around a table on the first day, exuding a mix of excitement and caution. They are watching me—and, of course, each other—as we all wonder what lies ahead.
I begin the class by sharing the founding mottoes of several dozen American universities. What, I ask, do these phrases tell us about the institutions that proclaimed them?
Slowly, as the veils of shyness come off, a few answers emerge. Students point out that some well-known secular schools of today had religious roots: Consider Princeton (1746: “Under God’s power she flourishes”), Brown (1764: “In God we hope”), or Colgate (1819: “For God and for truth”). Most of all, though, the students are surprised by the mottoes’ explicit emphasis on personal qualities—especially virtue, morality, and character—in the service of others. Howard University (1867) declared allegiance to “Truth and service.” Elizabethtown College (1899) said it would “Educate for service”; three years later, the University of Indianapolis opened its doors under the same motto. The students also notice that state universities emphasized their own duties to serve the citizenry. The University of Missouri (1839) proclaimed its commitment to “The welfare of the people”; North Dakota State University (1890) was founded “For the land and its people.”
That’s still the story we tell ourselves about American higher education: It serves all of us. Over the past two and a half centuries, the United States has developed the most extensive and diverse system of colleges and universities in human history. The public goods generated by that system—technical innovation, social mobility, and informed citizenship—are beyond dispute. But we never agreed to pay for it as a public good. Its costs have been borne heavily—and unevenly—by private citizens. We praise it as the basis of shared national prosperity and progress, then we turn around and present students—and their families—with the bill.
To be sure, America has also developed elaborate forms of government assistance to help people attend college. Every state created public universities; most of these schools remained free (or close to it) into the 20th century, reflecting their charge to serve all their citizens. The University of Kentucky’s president declared that his school should be “accessible to the poor youth of the land”; in the 1880s, he invited state lawmakers to visit the campus and count the number of young men “with bronzed features and hard hands.” The federal government passed important measures extending higher education to workers and farmers (the Morrill Act of 1862), military veterans (the GI Bill, 1944), and middle-class borrowers (the Higher Education Act, 1965). The language of these laws spoke to the perpetual dream of American higher education: that we could create a system “accessible to all, but especially the sons of toil,” to quote the Morrill Act.
We never did. Federal and state intervention opened the college door to millions of Americans, but it failed to inscribe higher education as a truly public good—that is, something anyone can receive, regardless of circumstance or background, precisely because it benefits the nation as a whole. That doesn’t necessarily mean it should be free of cost to students; instead, it means the costs shouldn’t keep some people out and saddle others with untenable debt. The problem was far less urgent in earlier eras—when only a small fraction of mostly well-heeled white men went to college—than it is in our present moment, when almost all of us will need some kind of postsecondary education. But by examining how Americans debated college in the past, we can learn lessons that help us imagine a different future. College isn’t—or shouldn’t be—a consumer good, purchased by people who can afford it so they can improve their own opportunities. If it’s good for everyone, we need to make it accessible to everyone as well.
At the time of the American Revolution, the new nation had nine colleges. That number tripled over the next 30 years, and then tripled again in the three decades after that. The United States boasted 811 institutions of higher education by 1880, which dwarfed the rest of the world. The United Kingdom had 10 universities, up from six in 1800; during the same period, the number in France rose from 12 to 22. In all of Europe, there were just 160 places to receive a post-secondary degree. “Colleges rise like mushrooms in our luxuriant soil,” one American college president enthused in 1827.
Part of the reason, of course, was the widespread availability of that soil: As the United States moved westward, appropriating ever more territory, institutions could readily acquire cheap land on which to build. The other reason was the lack of a central regulatory authority, in religion or education. Any group of people could hang up a shingle and call themselves a church, or a college; sometimes they were one and the same. And just as churches tried to enlist the most souls, so did colleges compete for students. That created enormous opportunity as well as huge flux: Some schools thrived, while others fell by the wayside.
But it also meant that the public good—the high-minded civic purposes in college mottoes—often got buried in the hurly-burly quest for customers. That’s the main takeaway of the historian Adam Nelson’s new book, Exchange of Ideas, on the economics of higher education in early America. (Full disclosure: Nelson is a friend, and I supplied a blurb for his book.) Consider the University of Pennsylvania—where I teach—and Dickinson College, which were both started by national founding fathers named Benjamin (Franklin and Rush, respectively). Penn tried to block the creation of Dickinson, in nearby Carlisle, lest it lure young men who might otherwise head to Philadelphia. Once Dickinson was up and running, meanwhile, Rush worked to prevent the chartering of other new schools in Pennsylvania. Dickinson advertised itself as a “healthful” rural alternative to Penn, and Rush wanted to make sure it remained the only one.
There were two other ways to increase market share: cut prices or cut corners. The first one required new sources of income, to defray tuition costs. Penn and Princeton both sent emissaries to the West Indies to solicit donations from the owners of slave plantations; to a degree they have only begun to acknowledge, the early American colleges relied on profits as well as labor from slavery. (Princeton’s first nine presidents owned Black slaves, who helped build and maintain the campus.) The colleges also got periodic infusions of cash from their respective statehouses, reflecting the widespread belief that these institutions served the broader public. In its first 150 years, Harvard accepted more than 100 appropriations from Massachusetts’s colonial and state legislature; Williams and Bowdoin (the latter was located in Massachusetts until 1820, when Maine became its own state) dipped into the same till; and Penn received $287,000 from lawmakers in Harrisburg, about $8 million in today’s dollars. As Bowdoin’s first president declared, these colleges had been “founded and endowed for the common good, not for the private advantage of those who resort to them for education.”
Another tactic for besting the competition, paradoxically, was to lower your standards. The first president of Dickinson, whom Rush recruited from Edinburgh, complained that the school admitted every applicant and conferred a degree upon him regardless of academic performance; some people received bachelor’s degrees after just one year of study. Put simply, colleges sold credentials to anyone who could pay for them. Indeed, Princeton’s president glumly observed, credentials—and their imagined cash value—were the only thing that seemed to motivate the young men in his charge. Students “consider education as nothing more than a subordinate art to getting money, and they aim at no other scholarship than what will soonest put them in way of turning a penny,” Samuel Stanhope Smith complained. “Such are the reproaches of foreigners verified; that we are a nation of little-dealers, & shifty-sharpers, without any dignity, without taste, without a sense of national honor, & intent only on profit.” To shore up the country’s dubious reputation on the other side of the Atlantic—and to discourage young men from patronizing schools there—several eminent politicians proposed establishing a national university in America’s new capital; most remarkably, George Washington willed a portion of his estate to it. But that project likewise foundered on the shoals of institutional self-interest. The established colleges lobbied against the national university, fearing that they would lose students—and tuition dollars—to it.
They also worried that any new institution would be captured by whatever national party or faction held sway. Especially after the French Revolution, the Federalists—led by John Adams and Alexander Hamilton—warned that colleges and universities were being overcome with “seditious” ideas from the Continent; meanwhile, students who backed Thomas Jefferson and his fellow Republicans claimed that Federalist educators were imposing their own conservative dogmas. Ultimately, these dynamics made it next to impossible to persuade wary state legislators to provide consistent support for higher education. You can’t make a strong case for colleges as a public good if they seem to serve one segment of the public at the expense of the rest. So even as new states in the Midwest created universities, they struggled to generate public appropriations. A territorial governor appointed by Jefferson proposed a “University of Michigania” in 1805, but the Ann Arbor campus didn’t open until 1841. Indiana’s 1816 constitution called for a state university, “wherein tuition shall be gratis, and equally open to all,” but it took more than 30 years for the state legislature to start implementing that promise.
At the federal level, meanwhile, the Civil War unleashed the first large spigots of cash for American universities. Southern lawmakers had blocked assistance to higher education on states’ rights grounds; a Virginia senator called it an “unconstitutional robbing of the Treasury.” But secession cleared these dissenters from Congress, which distributed 17 million acres of land for states to sell or rent to “promote the liberal and practical education of the industrial classes in the several pursuits and professions in life,” as the 1862 measure sponsored by Vermont Representative (later U.S. Senator) Justin Morrill proclaimed. Some states decided to use their land grant money to support existing colleges, such as Rutgers and MIT; others created entirely new institutions like Michigan State and Oregon State, which entered into feisty competition with the already-established universities in Ann Arbor and Eugene. Long celebrated as a hallmark of democratic education, the Morrill Act has come under fire in recent years from scholars who note—accurately—that at least a quarter of the federal land grants sold to support universities were seized from Native peoples or appropriated via treaties that were never approved by the federal government. But even these critics often assume that a broad swath of white people benefited from the act, as its hopeful language (“accessible to … the sons of toil”) suggested.
That turns out to be a myth. Only four states—Illinois, Wisconsin, Michigan, and California—passed permanent property taxes to fund their universities in the 19th century. So land grant schools had to rely on tuition and fees for room and board, which priced out many Americans. Tuition at mid-century was modest, ranging from $10 at the University of Wisconsin to $75 at Harvard; the big-ticket item was living expenses, which were often double or even triple the tuition charges. Nor were these fees enough to keep schools afloat. Some cash-poor land grant universities sold parts of their campuses or spent down their endowments; others tried to cut costs by requiring faculty and students to maintain and clean their buildings, while still others temporarily shut their doors. At a time when only a tiny fraction of Americans went to college, state lawmakers continued to regard it as something that advantaged select individuals. So they balked at appropriating tax dollars for the universities, which seemed to subsidize some people at the expense of others.
Meanwhile, supporters of the Morrill Act were divided about whether it would provide practical training for workers and farmers—as the act proclaimed—or catapult them into the burgeoning middle class of teachers, lawyers, and other white-collar professionals. Although he was the son of a blacksmith, Justin Morrill became a wealthy entrepreneur who invested in railroads and supported tariffs on foreign goods; his main goal was to prepare a new generation of engineers and managers who could advance American capitalism against its global competitors. Similarly, the head of Yale’s Sheffield Scientific School—Connecticut’s first land grant institution—insisted that it would fit graduates for industrial and scientific leadership rather than for “labor with the hoe or anvil.” These claims angered farming organizations like the Grange, which feared that the land grant schools would lure graduates away from the soil; they also complained about entrance exams, which required Latin and other academic subjects that rural boys rarely studied. In Connecticut, where a Sheffield professor scoffed that “Yale College does not propose to run a machine shop,” pressure from the Grange persuaded legislators to move the state’s land grant school from stuffy New Haven to a new “agricultural school” in Storrs. But the students who went there aspired to middle-class careers, just as Morrill wished. The school soon morphed from an open-admissions manual-training institution to a more selective state college centered on the liberal arts. Tuition rose in tandem with admission standards, dampening opportunities for poor and working-class candidates.
Elsewhere, land grant universities hewed more closely to their founding mission. At North Carolina College of Agriculture and Mechanic Arts—which became North Carolina State University—the state’s burgeoning Populist movement skewered the land grant school for providing “theoretical, literary, and ultra-scientific education” instead of teaching more “practical” arts. The university squared the difference by mandating three hours of classroom recitation—the standard mode of academic instruction—and three hours of manual training; it also required all students to take agriculture, horticulture, shop work, and mechanical drawing. Most of all, Populists ensured that state universities remained either free or very close to it, so that—at least in theory—anyone could attend. “Fie upon the people’s higher schools, if they are to be but rich men’s schools!” thundered the president of Kansas State Agricultural College (later Kansas State); indeed, he added, “democracy should tolerate no tollgates on the educational highway.” In 1887, Arkansas barred its state university from charging tuition to students taking vocational courses. “The son of a rich man can go to Harvard, Yale, Columbia or Princeton, and pay the $150 or $200 per year demanded by these institutions for tuition,” the Nebraska Farmers’ Alliance declared in 1891, “but the boy from the poor man’s home cannot do this … The free state university is his only hope.”
Thanks to its Populist defenders, the University of Nebraska didn’t charge tuition during these years. But fees for room, board, and books could run as high as $175, which placed the school beyond the means of many poor and working-class families. And despite the proliferation of new colleges in the 19th century, a relatively small number of Americans patronized them. In 1880, only 26 of 811 higher education institutions had more than 200 students. Lacking the donors and endowments of the more established private colleges, state universities were especially slow to get off the ground. In the 1880s, the University of Wisconsin and Thomas Jefferson’s University of Virginia were smaller than Amherst; Indiana University had fewer students than Williams, and the University of Minnesota was about the same size as Bowdoin. Students came overwhelmingly from the upper classes or from the burgeoning new middle class of managers, lawyers, and other professions. And insofar as poorer students went to college, they too aimed to join the white-collar labor force. Despite Populist paeans to agriculture, a Colorado educator explained, country boys came to college to avoid a “slave life of labor”; indeed, a Tennessee Populist politician admitted, they were “sick and disgusted with farming.” They sought jobs in business and the professions, even though most of these positions did not yet require a bachelor’s degree for entry. A college credential might help pave the way for success, but it was hardly a prerequisite for it.
All of that would change over the next century, when postsecondary education became the sine qua non of security and sustainability. In 1900, only 2 percent of high school graduates in the United States went to college; in 2019, before the COVID-19 pandemic, 66 percent did. An institution that formerly served just a sliver of white males now enlists a majority of Americans. And there are roughly 4,000 two- and four-year degree-granting institutions in the United States, including public, private nonprofit, and for-profit schools.
But even as we created an economy that required postsecondary education to get ahead—or even to get by—our polity made higher education a consumer good that only some citizens could reasonably afford. That’s the key theme of a bracing new book by the historian Elizabeth Tandy Shermer called, yes, Indentured Students. As Shermer acknowledges, government assistance has helped millions of Americans attend college. But postsecondary education remained beyond the reach of many other people, while still others have gone into crushing debt to obtain it. Colleges continue to attract as many customers as they can, just as they did during the early years of the republic. And the rest of us seek the best deal we can find, balancing the cost of a credential against its market value.
The first sharp rise in student attendance occurred in the boom years of the 1920s, when roughly one new college opened every 10 days. High schools multiplied as well, allowing the elite private schools and big state universities to raise their admission standards—and their tuition—without reducing enrollment. “The best brains in the state should have the best training available, but mediocre and stupid persons should be positively discouraged from entering college,” a UVA professor surmised, celebrating his school’s newfound selectivity. The University of Kansas hiked its fees by 25 percent in the 1920s; in Nebraska, legislators authorized the state university to charge students for the first time. In a 1927 address at Brown, John D. Rockefeller Jr. argued that the era’s famously rowdy students (think speakeasies and flappers) should party on their own dime. “Today … the majority of the students go to college for a good time, for social considerations or to fit themselves to earn money,” Rockefeller, son of the famous industrialist, argued. “The idea of service to the community is no longer the chief consideration. It would seem, therefore, that under these changed conditions the student might properly be expected to pay for the benefits he receives.” The private colleges, especially, jacked up prices to meet their skyrocketing expenses. Despite increased donations from philanthropists and alumni, student tuition covered 63 percent of Columbia’s operating costs; at MIT and Penn, it was close to 90 percent.
The Great Depression brought that system to a crash, generating the first forms of federal student aid. Between 1929 and 1934, New York University lost 10,000 of its 13,000 students in its engineering, commerce, and education programs. More than 10 percent of private colleges took IOUs in lieu of tuition; Carthage College in Wisconsin accepted coal as payment, while students at the University of North Dakota covered their fees with farm produce. Franklin D. Roosevelt’s answer to the problem was the National Youth Administration, which funneled federal money to colleges so they could give students part-time jobs. The NYA reflected FDR’s penchant for decentralized public-private solutions as well as his antipathy to “the dole”—that is, direct government payments to needy Americans. While it surely kept some students in college, the NYA also sparked yet another spike in tuition: Since students had more money in their pockets, schools could charge them more as well. The NYA also reflected the ubiquitous racism of its time. Funds were distributed via local authorities, who rarely assisted the small number of African Americans attending college. Nominally, the NYA prohibited racial discrimination. Especially in the Jim Crow South, however, white officials made sure that most work-study jobs went to whites, as well.
A similar pattern marked the GI Bill of 1944, which is justly celebrated for opening the college door to millions of American military veterans. By 1947, veterans made up almost half of American students; a decade later, when the original GI Bill expired, more than 2 million of 14 million eligible veterans had received educational benefits under the measure. Given how much we venerate the GI Bill today, it’s easy to forget the skepticism that greeted it at its birth. University of Chicago President Robert Hutchins—himself a military veteran—warned that the bill would turn college campuses into “educational hobo jungles”; at Harvard, meanwhile, President James B. Conant feared that the GI Bill failed to distinguish between “those who can profit most from advanced education and those who cannot.” They shouldn’t have worried. Eschewing the rah-rah high jinks of campus life, veterans outpaced other students by every academic metric. “All they care about is their schoolwork,” one undergraduate complained. “They’re grinds, every one of them. It’s books, books all the time.”
But GI benefits—like the New Deal ones—flowed almost entirely to white men. Just as African Americans were often blocked from receiving home mortgages under the GI Bill, so were they frequently barred from accessing higher education. Most Black people still lived in the South, where they were limited to the roughly 100 institutions that would accept them; and most of these schools were historically Black colleges and universities, which lacked the resources to accommodate a new influx of students. Meanwhile, racist officials in local Veterans Administration offices—which administered funds for the GI Bill—barred African Americans from attending universities in the North, which were facing huge challenges of their own. The tidal wave of veterans overwhelmed state universities, which built Quonset huts and other makeshift structures to house the new students and their growing families. (The GIs made lots of babies.) And the universities addressed the new challenges in the manner in which they always do: by raising tuition. That was the only way to meet the enormous new demands on them.
The next great wave of higher education growth occurred in the 1960s and early 1970s, fueled in part by a new form of student aid: federally backed grants and loans. The GI Bill had provided educational opportunities to a specific category of students; the Higher Education Act of 1965 offered aid to everyone, prompting The New York Times to call it a “natural extension of the G.I. Bill.” Some members of Congress pressed for federal scholarships that would have made at least the first two years of college free for everyone. But a loan system sounded more “American,” insofar as it placed responsibility on individuals rather than on their government. Only 22 members of the House and three in the Senate voted against the HEA, which was hailed as a model of bipartisan consensus. It was signed into law by Lyndon Johnson, who had taken out loans to attend college during the Depression—when few Americans could afford it—and wanted others to enjoy the same opportunity. Meanwhile, states increased their subsidies to higher education as well. Bolstered by generous funding from Tallahassee, the University of Florida didn’t charge tuition until 1969. Most remarkably, California’s 1960 “Master Plan” guaranteed tuition-free admission to its University of California campuses (for students graduating in the top eighth of their high school classes) and to California State institutions (for those in the top third). Anyone with a high school degree could attend one of the state’s burgeoning community colleges, which likewise didn’t charge tuition.
We all know what happened after that. Since Congress had agreed to cover defaults on loans, banks handed them out like candy; it wasn’t until the Obama administration that the federal government cut out the middleman and became America’s student lender. And when state legislatures started slashing their higher education budgets in the 1980s, the colleges had little choice but to raise their fees yet again. Whereas prior tuition hikes had been relatively modest, the new increases were draconian. Between 1987 and 2010, the average per-student state appropriation at four-year public colleges declined 31 percent; during the same span, tuition costs doubled. Adjusted for inflation, state appropriations have picked up since then. But college costs keep soaring, vastly exceeding what many Americans can afford.
Now the bill has come due, and it is our students, of course, who are paying it. More than 60 percent of students have borrowed to make ends meet. The total student debt burden has topped $1.5 trillion, which is more than our collective debt on credit cards. And nearly a quarter of student borrowers are in default, which exceeds the rate among homeowners. College has become yet another consumer good, like a house or a car. But the consumers can’t afford it without taking on enormous debt, which will hound them for the rest of their lives. Even with President Joe Biden’s recently announced income-driven repayment program, which promises to reduce borrowers’ monthly payments by more than half, millions of Americans will struggle to free themselves from the indenture of student debt. The costs of that to the country—in rates of homeownership, entrepreneurship, childbearing, and much else—are incalculable.
Lyndon Johnson was right: Postsecondary education is a necessity for most people in the United States. In a 2014 Gallup poll, 96 percent of respondents agreed that “it is important to have a degree or credential beyond high school.” But 79 percent added that they did not think postsecondary education was affordable for everyone who needed it. College has become a practical imperative but not a true public good, which we agree to subsidize so that every American can take advantage of it. It certainly benefits the broad American polity: People with university and community college degrees are healthier, more civically active, and more likely to start businesses than people without them. But we haven’t framed a good enough argument for college to be funded as a public good instead of by the frazzled citizens who take out second mortgages, piece together multiple loans, and sometimes even forsake meals to pay for it. Prior to the pandemic, nearly a third of undergraduates reported that they had experienced food insecurity during their college careers. That is—or should be—unacceptable, in a nation that claims to value education as the key to personal and collective progress.
How could we make the case for college as something every American should be able to experience without placing their personal and fiscal health in peril? Somewhat paradoxically, it would start with a frank acknowledgment that not everyone will or should go to college. If people without post-secondary degrees sense that the rest of us are condescending to them, they won’t support any new form of government aid to higher education. Remember the liberal snickering when Donald Trump declared, “I love the poorly educated,” after polls in 2016 showed he won a huge fraction of voters who have a high school degree or less? The tsunami of mockery among Democrats made it even more likely that people without postsecondary degrees would cast ballots for Trump—and oppose public assistance to colleges and universities.
Second, if we want more public dollars, we need to show the public that we care about whether—and how much—our students learn. I’m looking at you, my fellow professors. We are incentivized to produce research, which wins promotions and salary increases; teaching doesn’t. At every level of higher education, from community colleges to elite private universities, the best predictor of a professor’s salary is the fraction of their time they devote to research; meanwhile, time spent teaching is inversely proportional to pay. Is it any wonder that so many students—and their parents—look at us as selfish pedants who are feathering our own nests? True, there is a budding “student success movement” of faculty members who are trying to improve undergraduate learning, especially among minority and first-generation students. But the very need for such a campaign demonstrates the lack of overall institutional commitment to it.
Third, we need to stop the arms race that inflates the cost of college—and inhibits public support for it—across the country. In the early years of the republic, schools battled for students by reducing tuition and amenities. Now they try to outpace each other in the opposite fashion, providing ever-nicer dorms and gyms that might catch the eye (and open the wallet) of young customers. They also vie for the most high-profile leaders, which has spawned yet another form of competitive excess. My own employer recently gave departing President Amy Gutmann a cool $23 million, $20 million of it in deferred compensation accrued during her 18 years at the helm. Like fancy facilities, extravagant salaries make it harder to imagine higher education as a public good that merits public support. They recall John D. Rockefeller Jr.’s comment in the 1920s: If all of this is just a playground, designed to make rich kids happy and college presidents rich, students should pick up the tab. You can’t make an argument for more taxpayer dollars when you’re squandering the ones you have now.
A century ago, Americans resolved to make public high schools free and open to all. Our goal should be to make college free, too, or at least accessible enough that nobody needs to go into penury to pay for it. But if Washington simply gave states enough money to allow their public colleges to charge no tuition, as Bernie Sanders and Elizabeth Warren have urged, it would reward the states that starved their universities and penalize the ones that spent more to keep tuition low. A much better solution, as Kevin Carey has argued in these pages (“How to Save Higher Education,” September/October 2020), would be for the federal government to pay a flat per-student subsidy to any college that agrees to reduce class sizes, enhance instruction, and take the other steps that help people succeed. Or, as Paul Glastris has proposed (“Free College if You Serve,” September/October 2021), we might allow students to defray tuition by performing national service. (And there’s a nice historical precedent for that, too: Back in the 1880s, students could attend William & Mary for free if they pledged to teach in Virginia public schools for two years after they graduated.) But no new large-scale measure to assist students will stand a chance unless we can persuade a wary public that colleges are a public good, not something that simply helps some people at the expense of others. And that begins with doing a better job with what we already have.
It also will require us to look back to moments when the nation expanded access to higher education. Speaking in 1850, two years after his institution’s founding, the chancellor of the University of Wisconsin spelled out his vision for it. “The American mind has grasped the idea and will not let it go, that the whole property of the state, is holden subject to the sacred trust of providing for the education of every child in the state,” he declared. “Without the adoption of this system, as the most potent compensation of the aristocratic tendencies of hereditary wealth, the boasted political equality of which we dream, is but a pleasing illusion. Knowledge is the great leveler. It is the true democracy.” That’s the dream that inspired the Morrill Act, the GI Bill, and the Higher Education Act. It’s what led agrarian radicals to demand tuition-free public colleges in the 1890s, and it’s what sparks the call for the same today. We shouldn’t romanticize the campaigns of the past, which so often fell short of their mark. Yet we also need to revive the spirit of those efforts, if we are to meet the even greater challenges that lie ahead.
At the end of every semester, I ask my students to name their favorite university motto. Last year the most popular ones came from Dickinson—“Freedom is made safe through character and learning”—and from Davidson—“Let learning be cherished where liberty has arisen.” Intuitively, the students understand that our democracy depends on an informed populace. For most of our history, primary and secondary schools shouldered the burden of preparing citizens; that’s why we funded these institutions from the public purse and required everyone to attend them. In recent years, postsecondary education has become a near-universal expectation for American citizens as well. The question is whether our democracy can generate the will to help all of them obtain it.