American farmers are pleading for exemptions from President Donald Trump’s tariffs. Republican members of Congress from farm states are working to deliver the relief farmers want. But farmers do not deserve special treatment and should not get it.
Tariffs will indeed hurt farmers badly. Farm costs will rise. Farm incomes will drop. Under Trump’s tariffs, farmers will pay more for fertilizer. They will pay more for farm equipment. They will pay more for the fuel to ship their products to market. When foreign countries retaliate, raising their own tariff barriers, American farmers will lose export markets. Their domestic sales will come under pressure too, because tariffs will shrink Americans’ disposable incomes: Consumers will have to cut back everywhere, including at the grocery store.
Farmers will share this tariff predicament of higher costs and lower incomes with almost all Americans—except the very wealthiest, who are less exposed to tariffs because they consume less of their incomes and can offset the pain of tariffs with other benefits from Trump, beginning with a dramatic reduction in tax enforcement.
Farmers are different from other Americans, however, in three ways.
First, farmers voted for Trump by huge margins. In America’s 444 most farm-dependent counties, Trump won an average of 77.7 percent of the vote—nearly two points more than he scored in those same counties in 2020.
Second, farmers have already pocketed windfall profits from Trump’s previous round of tariffs.
When Trump started a trade war with China in 2018, China switched its soybean purchasing from the United States to Brazil. By 2023, Brazil was exporting twice as much as the United States. Trump compensated farmers with lavish cash payouts. The leading study of these effects suggests that soybean farmers may have received twice as much from the Trump farm bailout as they lost from the 2018 round of tariffs, because the Trump administration failed to consider that U.S. soybeans not exported to China were eventually sold elsewhere, albeit at lower prices. The richest farmers collected the greatest share of the windfall. The largest 10 percent of farms received an average of $85 an acre in payouts, according to a 2019 study by the economists Eric Belasco and Vincent Smith for the American Enterprise Institute. The median-size farm received only $56 an acre. Altogether, farmers have been amply compensated in advance for the harm about to be done to them by the man most farming communities voted for.
Third, farmers can better afford to pay the price of Trump’s tariffs than many other tariff victims.
Farmers can already obtain federal insurance against depressed prices for their products. Most farmers report low incomes from farming, but they have a high net worth. The median American farm shows net assets of about $1.5 million. Commercial-farm households show median net assets of $3.6 million. The appearance of low incomes is in any case misleading. Again, according to Smith, families that own farms earn only 20 percent of their income from farming. Even the richest farmers, those with farm assets above $6 million, still earn about half their income from other sources, including a spouse’s employment in a local business or through a rural government job such as a county extension agent. On average, farm families earn higher total incomes than nonfarm families, and their debt-to-equity ratio is typically low.
None of this is to deny that farmers will suffer from the tariffs. They will. A lot. But so will city people. As will people in industries that use steel or aluminum or copper as components. As will people in service industries and export industries, people who rely on the trade treaties trashed by Trump to protect their copyrights and patents. And anybody with money invested in stocks. And anybody who drives a car or truck.
During the 2024 election campaign, Americans were told, in effect, that no sacrifice was too great to revive the domestic U.S. toaster-manufacturing industry. If that claim is true, then farmers should be proud to pay more and receive less, making the same sacrifice as any other American.
But if a farm family voted for Trump, believing that his policies were good, it seems strange that they would then demand that they, and only they, should be spared the full consequences of those policies. Tariffs are the dish that rural America ordered for everyone. Now the dish has arrived at the table. For some reason, rural Americans do not want to partake themselves or pay their share of the bill.
That’s not how it should work. What you serve to others you should eat yourself. And if rural America cannot choke down its portion, why must other Americans stomach theirs?
The first act was the bad appointment: the somber technician; the clinical, straightforward news—not enough growth for eight weeks and, worse, no heartbeat. She was so sorry; the doctor would be in touch.
The second act was the D&C, short for “dilation and curettage”: the paperwork, the kind and efficient nurses, the IV and the sterile room—all stainless steel and bright lights, solid stirrups, and tissue-paper gowns—and the scraping from my uterus of what was supposed to have been my baby.
The third act was another D&C: the same as the previous time, but now even less dignified, somehow, because shouldn’t it be enough to miscarry once? There’s extra tissue, they said; sometimes this happens.
I had not ordered the upgraded version, the miscarriage with a side of miscarriage.
The day I learned my pregnancy had ended was March 1, 2022, but I remember it mostly as the day of President Joe Biden’s State of the Union address. As my husband, Mike, and I left the doctor’s office, the midday sun glinting off my tears, Mike said he felt like he was going to throw up.
We walked through downtown D.C. holding hands for a few blocks. Then I went back to work, and waited for Biden’s speech. While I was waiting, the doctor called, as promised. I grabbed a reporter’s notebook and ducked into an alcove so she could walk me through my options.
You can just wait to miscarry naturally, she said, but she didn’t recommend that option. She explained that it can take days, weeks, even months before your body realizes the baby has stopped growing, and most people have trouble with the waiting.
The next option was to take a pill that starts the process of passing the blood, the cells, the tissue; she could call the prescription in, and I could begin the process at home. Her office would give me medicine to manage the anxiety, medicine to manage the nausea, medicine to manage the pain. I’d call them every few hours, to update them about my progress.
The final option was a D&C, a procedure also used to end pregnancies, though I didn’t understand this at the time. I’d have to wait for an opening in the schedule, she explained, which could take a few days, but then it would be quick and relatively painless; I’d come in, undergo deep sedation, have the tissue scraped from my uterus, and go home the same day.
Her recommendation was the pill—the D&C could occasionally lead to scarring—but the choice was mine. I felt helpless, unsure of what to do other than cry. But I did what I often do when I’m upset, and began oversharing with my friends and family.
Every Wednesday night, I have a standing Zoom with five other women, all friends from college. The tradition started as a one-off whim during the pandemic but is now in its fifth year. Sometimes we skip a week, and rarely do all six of us make it, but almost every Wednesday night at 8:30, some configuration of us meets virtually to gossip and talk about our lives.
Between us, we have been pregnant 24 times and collectively given birth to 14 kids.
This statistic—six women, 24 pregnancies, 14 kids—stops me cold every time it occurs to me, but it shouldn’t. As many as one-quarter of known pregnancies end in miscarriage, according to the National Library of Medicine, and some research suggests that the actual number may be much higher, because some women miscarry without ever knowing they were pregnant. And most of us were in our 30s—if not “geriatric” or of “advanced maternal age,” in OB-GYN parlance—when we got pregnant.
And so, in the unluckiness of others, I was lucky. Because my friends had talked openly about some of their toughest moments, I felt less alone during mine.
I sometimes think of one of these friends, the one who miscarried three times in 2019, as the human embodiment of the what-to-do-when-you’re-miscarrying options that my doctor had talked me through. Except my friend wasn’t any one of those options. She was D, all of the above. And she was a real person, who had grieved each loss.
She explained that the time she had miscarried naturally had been the best by far—finding out the bad news from the cramps and the blood—but because that wasn’t an option for me, she definitely recommended a D&C. She had taken the pill for one of her miscarriages, but she’d ended up hemorrhaging, which meant she’d been rushed to the hospital for what ultimately ended in a D&C anyhow.
I called another friend, a fellow journalist who had done eight rounds of IVF to have a baby on her own after a divorce. She’d also had two miscarriages—including one during the IVF process—and found the clinical precision of a D&C comforting amid all the variables she couldn’t control. “You don’t want to be dealing with that shit at home, alone,” she said.
She was right. I have a low tolerance for pain, and I freak out at the mere mention of vomit. (Perhaps my proudest parenting achievement was when I taught my then-4-year-old to get to the bathroom entirely on her own at the first ominous mention of a middle-of-the-night stomachache.) I imagined myself in our upstairs bathroom, passing blood clots the size of golf balls, sweating and staving off dry heaves, and trying to suss out via telehealth if this was all “normal”—just an especially gory outtake from Scream—or if I was on the verge of bleeding out.
I called back and scheduled a D&C.
My grief was surprising, absurdist, nonlinear.
I walked back to my office and that night wrote about Biden’s speech. I remember looking around the newsroom and taking in my Washington Post colleagues, whose biggest concern seemed to be where our editors had ordered pizza from that night, and silently wondering, How do you not know what’s going on inside my body right now? But I also filed 35-odd inches, clean, quickly, on deadline.
By the time I finally arrived at my D&C appointment a few days later (the first available slot), I felt like a beach towel that had been wrung out—sand-raw, but also damp and rumpled. As I filled out the relevant paperwork affirming that, yes, I understood there was a rare risk of life-threatening complications—that would be a real doozy, I mentally noted—I couldn’t help but laugh grimly to myself as I called the nurse over. “You gave me the wrong forms,” I said. “These say I’m here for an abortion; I’m here for a miscarriage.”
A quick half-smile broke her face into a moon of empathy: This procedure is an abortion, she said. You experienced pregnancy loss, but you are technically having an abortion. Of course I understood this intellectually, but in the moment, it all seemed so backward. I didn’t want an abortion. I wanted a baby. But things hadn’t gone the way I wanted them to.
There were other misunderstandings before the actual procedure. I had brought a book with me, Vladimir, by Julia May Jonas, a critically acclaimed debut novel marketed as a “razor-sharp” tale of on-campus sexual politics and power in the #MeToo era. The cover featured, simply, the naked torso of a faceless man slouched rakishly, and one of the nurses made a kind, throwaway joke about my “steamy” reading.
Naked but for a gown, an IV in my arm, I somehow found myself desperately trying to explain to her that this was modern literature, not some erotic paperback, and that I was a journalist, an intellectual, or at least a pseudo-intellectual; that I was someone who had done all the right things; that I’d taken prenatal vitamins and avoided all the vices you’re supposed to avoid; and that this dead pile of cells inside of me was not my fault.
After the procedure, my entire midsection achy with cramps, I just wanted to go home and lie under the covers and weep. But also, I asked Mike, weren’t we near that good soup-dumpling place? And so, on a gray and chilly Tuesday, we slurped rich, porky broth before driving home. “At least now,” I said, “I can’t tell if my stomach hurts from the procedure, or because I ate too many soup dumplings.”
What followed was not what I expected. At first, it was the compassion of others that shattered me. I had emailed my mostly male editors to tell them why I’d be missing a day of work, and they were kind in ways I hadn’t imagined. They called, repeatedly, in the following days, just to check in. My friends, too, sent flowers and cookies and soup—so much soup. I found myself standing in my kitchen late at night, chugging soup, to make sure the matzoh balls and chicken and noodles didn’t go to waste.
Several weeks later, after the chromosomal test came back, my doctor called to tell me the results and asked if I wanted to know the sex. Some people don’t, she said. But I’m a journalist and a gossip; I always want more information. Yet when she told me I would have had a baby girl, I felt as if I’d been kicked in the stomach. Somehow, that one detail made everything feel real.
Other details, ones that I thought would stay with me forever, faded away unexpectedly. The same day my doctor told me the sex, she also told me the specific trisomy my baby had, which was incompatible with life. I Googled it, staying up late on Reddit threads, imprinting the horrible condition onto my mind. Now, three years later, I can’t remember the name.
These days, I can bring up my miscarriage in casual conversation, just another fact of my 42 years, an almost offhand detail. Oh yeah, that State of the Union was very stressful—a crazy deadline, and I happened to be miscarrying.
Mine was not a “bad” miscarriage.
Like many women, I kept on apologizing, caveating my pain. I’d only been eight weeks along, I explained. It had been touch-and-go from the beginning, I qualified. I already had two great kids—my daughter and my stepdaughter—and maybe I was being greedy, wanting too much.
After all, I knew about bad pregnancy loss. The friend who miscarried three times in a calendar year. The friend who, after an easy beginning of her pregnancy, got a devastating diagnosis at her 20-week anatomy scan, where she was told that the little boy she’d already nicknamed Baby Butterfly would have, in the absolute best-case scenario, a life consumed by pain. (She and her husband decided to seek a late-term abortion, and wrote movingly about their experience in their hometown paper, The Boston Globe.) The friend whose wife, after eight “bad” appointments, finally got pregnant, and at 24 weeks—just when they’d begun to relax—delivered a perfect stillborn girl.
Mine was none of those things. But my miscarriage was also mine, I came to realize, and no less painful than anyone else’s.
I had gotten pregnant easily with my first daughter. Mike and I bought a house, stopped using protection, got engaged, and got pregnant. It took just three months. I was four months pregnant when we got married in our Washington rowhouse—me in brown cowboy boots and a short white dress I’d bought for $80—and I started showing on our honeymoon in Japan.
Every time I had a checkup, Mike came to the appointment bearing a small gift—a onesie, a baby book, a tiny pair of socks. And despite knowing all the stories, all the risks, I waltzed into every appointment practically floating, eager to hear the heartbeat and to see the pea-size, then kiwi-size, then squash-size little being who would one day be mine. Who was already mine.
My second pregnancy was harder. I was older, so was Mike, and after a year, we finally went to a well-known Washington fertility clinic, Shady Grove Fertility. Shady Grove was so popular, in fact, that I was constantly bumping into people I knew in the waiting room. Sometimes, I felt that instead of resenting the early-morning blood draws and vaginal ultrasounds, I should be taking notes and pitching an essay about the hottest club in town.
Ultimately, the unofficial diagnosis was age. We were old, and getting pregnant is generally harder when you’re older. My doctor recommended IVF, which she said, based on our particulars, was the best way to go home with a live baby. But she also said it was our choice, and outlined the other options.
We started with intrauterine insemination, or IUI. The recommendation was IUI with Clomid, an estrogen-receptor modulator that helps stimulate ovulation and may result in your body releasing multiple eggs. Or, put slightly more crudely: If IUI is essentially a more effective version of the turkey-baster method, then IUI with Clomid is like basting several turkeys. Already exhausted from two kids, a job, and regular fertility appointments that felt like a second job, I decided I’d rather go home with no turkey than two turkeys. So we started natural-cycle IUI.
The first time we did it, I miscarried around eight weeks—the State of the Union miscarriage. The second time—a few months later, after my body had healed and I’d gotten my period again—the procedure just didn’t take. It was, in short, like having sex and simply not getting pregnant. And the third IUI cycle, I got pregnant with my youngest daughter, now almost 2.
If I’d floated through my first pregnancy, nine months full of little gifts and high hopes, this one I muscled through like a clenched fist. I was always bracing, waiting for the long pause from the sonogram tech, the hushed whispers of the doctors, the bad news that any woman knows is always lurking, ready to make you question how you could have been better, or what you did wrong.
When the truth, almost always, is: nothing. You did nothing wrong.
When I was younger, getting pregnant felt like an inevitability, something that happened to every woman, almost effortlessly, and something to guard against at all times. (Still a virgin during my sophomore year of college, I once took a pregnancy test because a boy had ejaculated on my thigh.)
As I got older, I came to view pregnancy as something to be worked at. People were always “trying” to get pregnant. As one of my closest high-school friends explained to me: “All natural pregnancies after a certain age are just the result of prescheduled, vaguely stressful, vanilla sex.” And as I got older still, and “trying” led to not getting pregnant, I came to view pregnancy as an elusive goal, just out of reach—as grueling work, and something to “science the shit out of,” as a former colleague who had also done fertility treatments once memorably put it.
But really, the act of getting pregnant is a leap of faith. If it happens; when it happens; how it happens; and now, after the overturn of Roe v. Wade, if and how you can end it—nearly everything about pregnancy entails an act of cosmic hopefulness.
I learned this truth again and again, including in my first “easy” pregnancy with my now 6-year-old. When I was about seven months pregnant with her, we showed up for a routine appointment and were told that she had “fallen off the growth chart.” The measurements of her head and limbs were especially alarming. Mike, a fellow political reporter used to devouring campaign polls, asked if her growth issue could be “within the margin of error.” (It could not, came the reply.)
Instead, the doctors told us there were four likely outcomes based on our daughter’s suddenly problematic measurements: She could have Down syndrome, she could have Zika, she could have dwarfism—or she could have none of the above. Maybe she was just going to be small, they said. Maybe we just had small heads. We’re sitting right here! I wanted to shout. Do we or do we not have small heads?! This feels knowable!
Instead, I asked: “Would it help if I changed my diet? Or exercised more?” (It would not, came the reply.)
And so, for two months, we waited. Then, finally, we found ourselves in the hospital, in an operating room awaiting an unplanned C-section after roughly 24 hours of labor. When the doctor pulled our baby from my uterus, I was still strapped to the steel table, and Mike shouted out his observations, answering my unasked questions. “Her head is the right size! Her arms are the right size! Her legs are the right size!”
The doctors must have just thought we were rapturous first-time parents. But they were wrong. We were overjoyed, yes. But we also already intuitively understood then what we would learn later, again and again: just how little control we had, how little control any parent has, over any part of what it means to bring a life into this world.
In President Donald Trump’s second administration, the key political battles so far have turned on the question of who should decide big, important things: which immigrants are deported, which funds are distributed, which bureaucrats are fired, which vaccines are approved. The new administration’s answer to nearly all of these questions is that Trump should decide. This has left many Democrats incensed. Trump is not a monarch, they charge. Our constitutional system precludes him from making these changes unilaterally. Executive authority needs to be curtailed.
But just last year, when the Supreme Court overturned a Reagan-era decision requiring federal judges to defer to bureaucratic expertise, progressives were singing a different tune. They were aghast at the prospect of legislators and judges impinging on executive-branch decisions. They wanted to protect Biden-administration prerogatives. Democrats may not want Trump’s lackeys wielding authority, but set the personnel aside and they’re still fundamentally at odds with themselves about which institutions should make important calls.
For Trump’s detractors, this is an enormous, if underappreciated, problem. Democrats are, at root, the party of government—they believe that public authority is key to improving people’s lives. Yet government, according to most Americans, simply doesn’t work today. Every big public decision seems to get mired in endless wrangling and legal complications. We’re not building infrastructure, or harnessing clean energy, or keeping up with housing demand. Given that reality, Trump’s reelection is not hard to understand. In the face of a failing system, voters were eager to empower someone to shake things up—and that’s exactly what Trump promised to do.
To beat the Trumpian zeitgeist, Democrats will need a plan for making government work. But to make that notion plausible, they will need to wrestle with the core contradiction that bedevils their governing ideology. Progressivism today wants at once to allow government to make big decisions efficiently and to ensure that no one gets snowed in the process. To square that circle, progressives decided decades ago to throw in their lot with the notion that procedure can replace discretion. Today, we are all living with the consequences of that mistake.
From progressivism’s founding in the late 19th century into the 1960s, the movement offered a simple answer to the question of who should decide. Scientifically driven expertise was, to the progressive mind, the key to good public policy. That meant authorizing expert public officials—the establishment—to stand up holistic solutions to big challenges. Let the engineers design good sewer systems. Let the social workers design proper social-safety nets. Let the medical professionals design the health-care system.
New Deal Democrats extolled the “wise men” and “city fathers” wielding broad tranches of public authority. The establishment produced New York’s Port Authority, the Tennessee Valley Authority, the Social Security Administration, the Marshall Plan, and the Federal Highway Administration. On issues of great public import, progressives agreed that big bureaucracy might not be beautiful, but it got the job done.
By Jimmy Carter’s presidency, however, progressives had lost that faith. Urban renewal robbed Americans of their confidence in municipal government. Vietnam had taken away the progressive reverence for the military. The civil-rights movement had exposed the bigotry within the governing class. And Watergate had revealed just how sick and self-serving the power elite really were.
But progressivism’s sudden turn against the establishment left reformers in a quandary. If they couldn’t trust the experts to exercise authority, who would make important decisions in their stead? Who would decide where to build the new bridge crossing the river, or which neighborhoods were going to be compelled to accommodate new housing projects? The answer seemed obvious—the people would decide.
But amid the rush to decapitate the establishment—to end the war, to snuff out racial discrimination, to speak truth to power, to “put your bodies upon the gears and upon the wheels”—the question of just how the people would decide was left hanging. The problem, to the progressive mind, was that fallible bureaucrats and elected officials had been imbued with too much discretion. To counter unconstrained discretion, progressives would seek to impose process.
Having witnessed the abuse of authority, reformers resolved to provide the victims of the system with the means of protecting themselves. The people would be given seats at the table, voices in the deliberation, and opportunities to appeal when bureaucrats tried to bowl them over. They would lean on the courts’ authority to stop the executive branch in its tracks. Discretion, after all, was primarily the province of the executive and legislative branches. Progressivism’s new project was to move power from those two branches and into the third—which, if nothing else, had the power to say no.
If progressives a generation earlier would have been shocked to witness the movement’s turn against the establishment, they would have been gobsmacked to see reformers lionize the courts. For most of the movement’s early history, judges had been cast as the villains. They’d shielded trusts and monopolists from prosecution. They’d eviscerated minimum-wage and maximum-hour statutes. They’d upended various elements of the early New Deal. Franklin Roosevelt had complained that judges wanted to take the country back to the “horse and buggy days.” Now progressives changed their tune: It was the executive and legislative branches that were suspect, and veto-wielding judges who were knights in shining armor.
Drastic as this turn may have been, it was catalyzed by a novel political reality. The judiciary, which had so frustrated Roosevelt, had taken on a very different character by the time Richard Nixon haunted the West Wing. The conservative justices who had upended elements of the New Deal had been replaced by the likes of Thurgood Marshall and William Brennan. The progressive Judges Skelly Wright and David Bazelon, meanwhile, served on the D.C. Circuit Court of Appeals. But if the old conservatives had championed laissez-faire, and the new crowd embraced what critics would call “judicial activism,” the two sets shared something profound: a deep skepticism of the elected branches. And by the late 1960s, progressives everywhere shared that same mindset.
During the three decades when first Earl Warren and then Warren Burger served as chief justice, the Supreme Court operated as a bulwark against the abuses of elected officials and bureaucrats alike. The Court told local governments that they had to desegregate their schools, that police officers could no longer coerce unsuspecting citizens into self-incrimination, that politicians could no longer prevent journalists from publishing unflattering articles, and that policy makers could no longer prevent women from making their own health-care decisions.
But the impulse to curb the establishment’s power also extended into other realms. By the 1970s, ordinary people were tired of highway engineers imperiously disrupting neighborhoods in their zeal to construct expressways. They were tired of developers razing whole neighborhoods in the name of utopian schemes for glittering high-rises. They were tired of utility executives colluding with public officials to license smog-inducing power plants.
The great social triumphs of the Warren and Burger Courts are seldom considered in tandem with contemporaneous decisions issued to protect the environment, preserve historically significant buildings, and preclude a president from going to war without proper authorization. But they all were grounded in the same underlying effort. The centralized power brokers whose discretionary power emanated from the elected branches had turned bad; progressivism would return that authority to the people using processes governed by the courts.
If the courts grew more likely to veto the decisions taken by officials, though, they seldom stepped in to make the decisions themselves. That left progressives to answer a pressing question: If government officials couldn’t be trusted, then who should decide?
In some cases, the answers seemed simple to them. When it came to abortion, for example, the pregnant woman should make the call. When a Black family was looking for a place to spend the night, they themselves should be able to choose among all the various public accommodations. But what about when the highway department proposed an expressway for suburban commuters and those living along its route worried about noise and pollution? Who would decide when the people weren’t united against the powerful but instead were arguing among themselves—when the issue wasn’t a Manichean struggle between right and wrong but a messy question of trade-offs?
Progressives sidestepped this question by immersing themselves in proceduralism. Reformers imagined that if everyone was able to have their concerns taken seriously, some correct answer would eventually emerge. Each affected party needed to be able to exercise some real power, and to be able to appeal for the protection of the court. If a process came to the wrong conclusion, those poised to suffer could demand that a judge throw the emergency brake.
Those opposing a project no longer had to debate proposals on the merits; the question was no longer about the wisdom of any official’s exercise of discretion but about whether the process had been properly followed. Had the affected communities been consulted with enough notice? Had they been given sufficient time to respond? Had all the impacts—on wildlife, on noise, on sunlight, on historic buildings—been taken into account? Opponents no longer had to argue that their concerns substantively outweighed the merits of any proposed project. Procedure was now king. And on process-oriented questions, the judiciary veto reigned supreme.
But even the best process cannot obviate the need for trade-offs. Nobody wants a sewage-treatment plant sited near their neighborhood, but cities sometimes need to filter water. No community wants to be sliced by a new transmission line, but the nation’s electrical grid needs to be enhanced. Sometimes, inconveniences can be offset by new amenities—a neighborhood forced to host a new homeless shelter might be bequeathed a new recreation center. But no new ice rink is going to compensate for the disruption wrought by the construction of a new high-speed rail line. In many cases, there’s no way around a simple reality: Someone’s ox is going to be gored.
The old establishment power brokers have never been formally stripped of their discretion. Transportation officials are still in charge of where the next expressway will be constructed. Utility executives are still charged with deciding where the next transmission line will go. Mayors, governors, and development officials still get to make the call on various proposals to build more housing. But in reality, the processes created to regularize and democratize these decisions—the boards that need to weigh in, the considerations that need to be studied, the hurdles that need to be cleared—have effectively eviscerated their actual discretion. Failures to heed some element of the process invite lawsuits. And once any disagreement lands in litigation, the real decision-making power is vested in a judge.
That, of course, was precisely the reformers’ intention. For decades, progressives viewed the shift from discretion to procedure as a change that effectively took power out of Richard Nixon’s hands and placed it in Thurgood Marshall’s. But as the courts have turned more conservative—Samuel Alito is no Bill Brennan—that faith has wavered. More important, progressivism has begun to rethink the underlying wisdom of that shift.
At the dawn of the progressive movement, reformers had wanted government to do big things. Institutions were created on the express promise that the experts would run them. Only during the 1960s and ’70s, as abuses of power were exposed, did the movement begin to venerate the judiciary. That worked well enough so long as the communities the movement cared to serve were better off without big changes—housing starts, grid enhancements, public infrastructure. But decades on, amid housing shortages, dilapidated infrastructure, and the threat of climate change, many reformers have been forced to reconsider.
In the early 20th century, progressives were frustrated that a judicial approach born from conservatism—what W. E. B. Du Bois would label “the counter-revolution of property”—prevented government from pursuing policies to help the poor and working classes. Today, by contrast, progressives may be similarly enraged by a conservative Supreme Court—but their frustration is born from a jurisprudence of their own making. The shift toward proceduralism had been an explicitly progressive endeavor. Progressives wanted to throw sand into the gears of the machine. Now, as they dream of doing big things again, they are left to shake their fists at their own legacy.
Think of the young activist groups protesting against solar farms, or liberals bucking reform efforts to build multifamily housing in the suburbs. It was the left, too, that undermined a late-Biden-era effort to speed permitting for clean-energy projects for fear that some fossil-burning facilities might also get a boost.
To be fair, these examples reflect real conflicts. Some will be incensed that anyone would stand in the way of a wind or solar farm, others outraged at the way those farms may affect the existing wildlife. Some will be irritated that longtime residents might stand in the way of new housing construction, others indignant at the prospect of gentrification. The problem here isn’t that the movement isn’t of one mind on the merits of any given idea. Rather, it’s that reformers have no real concept of a process that would fairly arbitrate between trade-offs. Any adverse outcome—and when trade-offs are required, there is definitionally an adverse outcome for someone—is taken as a signal that the underlying decision-making process is corrupt.
Who, then, speaks for the people? Congress might be the most obvious answer, legislators being directly tethered to the interests of their constituents. Ironically, however, progressives tend to find representatives and senators among the least appealing options. Members of Congress are prone to capture by powerful interests and deep-pocketed donors. Even if they resist those temptations, they tend to prioritize the concerns of their district or state to the detriment of society as a whole. If nothing else, earmarks short-circuit the deliberative process that might identify the best way forward. Decades after money was scandalously tucked away for Alaska’s “bridge to nowhere,” the project remains a cautionary tale about the abuse of legislative power.
But of course, progressives remain wary of executive authority as well. A 2008 episode of This American Life reported that Black communities across America still live under the specter of what’s known in conversation as “the Plan” to gentrify poorer neighborhoods, a folk belief born in the era of urban renewal that has proved remarkably persistent. A movement that continues to lionize speaking truth to power remains wary of bureaucracy, convinced that agencies, like legislators, are easily captured by private interests. To protect against that kind of abuse, it continues to rely on the courts.
But the courts, as arbiters of various rules, aren’t really equipped to push forward the kinds of progress that progressives mean to champion. They can certainly halt change, as progressives were so eager for them to do when the establishment was running amok. But even now, judges cannot make the discretionary calls about where the bridge should be built, or the sewage plant sited, or the transmission corridor erected.
Skeptical that anyone wielding power can be trusted to make a truly wise or disinterested decision, progressives end up judging every process by the outcome. And when the battle is less between right and wrong than between one set of trade-offs and another, cynicism quickly gives way to despondency. Rather than face the hard reality that housing projects, transportation networks, and clean-energy facilities are inevitably going to come with costs, reformers view the simple existence of those downsides as evidence that the public is getting a raw deal. Thus progressives almost inevitably make the case that government is fundamentally broken. And they’re right—government is broken, in large part because they broke it.
It’s not that Democrats are at a loss for ideas today. It’s not that they are unable to identify enemies—greedy bankers, polluting corporations, Elon Musk. Rather, the party has lost the will to govern. It cannot figure out how to set priorities, identify choices, ratify trade-offs, and then accept the costs of important decisions. For decades, progressivism has deluded itself into believing that process can obviate the need for discretion. Reformers became so obsessed with ensuring that government would not trample the powerless that they have prevented government from doing its job.
Few voters, of course, would have articulated their frustrations with progressivism that way while casting ballots for Donald Trump. But Democrats’ determination not to let government govern—the progressive movement’s cultural aversion to power—was baked into the campaign. The Biden administration invested billions in infrastructure, high-speed rail, and clean energy, but not in ways that voters could see, touch, or appreciate. Two years after the government appropriated $5 billion for EV chargers, only a few dozen had been built. The country is in a housing crisis, but the government didn’t clear the way for new home construction. Voters overwhelmingly reported that America was on the wrong track, and yet Democratic leaders weren’t righting the train.
A growing movement of so-called supply-side progressives is now trying to address this problem. Those joining in what The Atlantic’s Derek Thompson has framed as a quest for “abundance” are searching for ways to form what the New York Times columnist Ezra Klein has termed a “progressivism that builds.” But even within the growing community of scholars and activists convinced that government’s inability to drive progress is a substantive and political problem, answers to the core question of who should decide remain elusive. Everyone seems to want more stuff and less rigmarole. But in the same way that previous waves of reform focused on chopping away things proponents didn’t like, today’s reformers risk sidestepping the crucial question of who should ratify any given trade-off.
In the realm of housing, the abundance crowd has begun to sketch an appealing answer: Let the free market choose. The problem, they suggest, is fairly simple. People in the suburbs don’t want multifamily buildings constructed nearby, lest the riffraff move in next door. People in grittier neighborhoods fear that new construction will bring higher rents. On both ends, would-be neighbors conspire to preclude any influx of newcomers by constraining what property owners can do.
As The Atlantic’s Yoni Appelbaum argues in his new book, Stuck: How the Privileged and Propertied Broke the Engine of American Opportunity, we should lift those restrictions, letting the people who purchase land build what they want, whether it’s a duplex on a single-family plot, a granny flat above their garage, or a multifamily building near a bus stop. The tension between those who own a plot of land and those who live nearby should be resolved in favor of the figure who controls the property. The supply-side solution, in the realm of housing at least, is to push decision-making power down.
But there is, of course, another potential approach. Rather than limit the opportunities for neighbors to gum up the works, progressivism could reach back into its ideological roots and empower someone with the discretion to clear the way. Instead of deconstructing the barriers that preclude more housing, reformers could affirmatively assign someone to cut through the cacophony of objections and devise a solution that serves the public interest.
To address housing shortages, for example, an official could weigh the pros and cons of any given proposal—considering both the promise of the project and the objections of those nearby—and use their best judgment to decide whether the idea should move forward. The person who owned the plot would not have an absolute right to build, and those worried about neighborhood impacts would not have the power to thwart any change. Authority, in other words, could be pushed up.
Returning the exercise of discretion to public officials cuts against progressivism’s half-century-long campaign to curtail the establishment. Better, many will argue, to establish a process of some sort to regularize consideration. But even if we can make progress in addressing the housing shortage by pushing power down, that can’t be the way forward on transportation or clean energy, because the landowners themselves object to the needed developments. And that’s exactly the point. None of the hard decisions America faces—weighing preservation against new housing, or clean energy against environmental protection, or neighborhood stability against high-speed rail—can be resolved without significant trade-offs. The notion that nothing works can be cured only by a process that makes decisions expeditiously.
That’s not to suggest that all of the guardrails should be removed. Reformers can require decision makers to mitigate harm—to do the least possible damage to the forest, or preserve as many homes as possible in the train’s right-of-way, or demand that the new multifamily buildings erected in a vibrant community hew to a familiar aesthetic. But, in the end, someone needs to be put in charge.
That will be a frightening prospect for many reformers. But if progressives mean to dull the sting of MAGAism—that is, if they intend to make progressivism popular again—there is no alternative. Trump’s return to power wasn’t inevitable—and it certainly wasn’t born of any single ideological failing on the left. But progressivism’s decades-long effort to replace discretion with process has proved a disaster.
The movement wasn’t wrong during the 1970s in wanting to correct for abuses wrought by the establishment, but the tonic has by now proved to be more toxic than the disease. For progressivism to reestablish itself, it will need to make government work. And for government to work—for people to believe that democracy really serves their interests—the individuals who nominally wield public authority need to be given the discretion, at long last, to use it for the greater good.
Starting this week, I once again have the privilege of teaching law students about the First Amendment, a subject in which Americans rightly take great pride. But this term, the job will feel very different. I am in the United States on a green card, and recent events suggest that I should be careful in what I say—perhaps even about free speech.
As I prepare my lecture notes, the Trump administration is working to deport immigrants, including green-card holders, for what appears to be nothing more than the expression of political views with which the government disagrees. These actions are chilling. They also make it difficult to work out how to teach cases that boldly proclaim this country is committed to a vision of free speech that, right now, feels very far away.
In recent weeks, the Trump administration has been—is there any other way to describe it?—rounding up dissidents. Government agents whisked one student off the street into an unmarked car, apparently for the thought crime of co-authoring an op-ed about Israel and Gaza in a student newspaper—an article that likely reached only a modest audience until the government made it grounds for detention. The administration arrested Mahmoud Khalil, a recent Columbia University graduate, on his way home from dinner, in front of his pregnant wife, not for any crime but for his participation in pro-Palestinian protests. Headline after headline has spoken of similar cases around the country, and the administration shows no sign of letting up. To more easily chase down people with ideas it dislikes, the government is asking universities for the names and nationalities of people who took part in largely peaceful protests and engaged in protected speech.
Exactly what kind of expression gets you in trouble is not clear—no doubt that’s partly the point. As Donald Trump’s offensive rapidly escalates, no one can be sure they are safe. Is publicly criticizing the administration’s speech suppression risky? I genuinely don’t know.
None of these actions is consistent with the vision of America depicted in bedrock First Amendment cases. The ruling in New York Times v. Sullivan describes a “profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open,” a quote the Court likes to invoke often. The claim has bolstered the U.S. Constitution’s reputation as the most speech-protective in the world, which inspires patriotism in lawyers and non-lawyers alike. But this country’s free-speech tradition depends on far more than words in a centuries-old legal document—and as we are learning, those words might mean very little in the absence of a broad societal commitment to upholding them.
My native country, Australia, does not have an explicit constitutional protection for free speech. Its defamation and hate-speech laws would likely appall First Amendment true believers. Given this acculturation, I arrived in America perhaps reflexively skeptical of a doctrine that protects some genuinely harmful expression. But over the years, I have been convinced that empowering the government to punish speech also empowers it to target its critics.
One does not need to look overseas for a different approach to free speech, though. We need only look back in time. Indeed, one thing that always surprises students in First Amendment classes is how recent and fragile the famously speech-protective modern doctrine is. Many assume that free speech is an indelible part of American culture that stretches back to the founding era. Far from it. This country’s free-speech tradition is much younger, and its victories hard-fought.
In the early 20th century, the first Red Scare led state and federal governments to enact, and courts to uphold, highly restrictive speech laws. Dissidents were locked up for circulating pamphlets and articles opposing the American war effort in World War I; a Socialist Party presidential candidate, Eugene Debs, was jailed for a speech that criticized the draft.
Rulings upholding repressive laws stood for decades but ultimately came to be seen as an embarrassment and a stain on this country’s history. When I was studying the First Amendment after arriving in the United States, my professor, Richard Fallon, told us that “if there is any travesty in the history of the First Amendment that ought not to be repeated, it is what happened to Eugene Debs.”
First Amendment courses invariably then turn to the heroic dissents and concurrences of Justices Oliver Wendell Holmes and Louis Brandeis, which are now cited and celebrated for expressing the true meaning of the First Amendment. These opinions do not minimize the harm that speech can cause but argue that democracy and the pursuit of truth require bravery. “Those who won our independence by revolution were not cowards. They did not fear political change,” Brandeis wrote. The Constitution therefore demands that “if there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”
In subsequent decades, these canonical opinions ushered in a free-speech tradition that interpreted the First Amendment expansively and in rousing terms. Majority opinions would come to describe a Constitution whose “fixed star” is that “no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion”; whose “proudest boast” is that “we protect the freedom to express ‘the thought that we hate’”; whose critics more often fault it for protecting too much speech, not too little.
The First Amendment tradition that has evolved over the past century is not unfailingly robust. For example, a 2010 Supreme Court decision upheld a law banning certain forms of speech that are classified as “material support” to foreign terrorist groups—in that case, the speech included training designated groups on how to pursue their aims peacefully. But even in that case, which upheld a stunningly broad speech restriction, the Court also insisted that nothing in its opinion undermined the general principle that advocacy of unlawful action is protected so long as it is not done in coordination with terrorist groups.
The Trump administration argues that, however strong speech protections are for citizens, noncitizens “don’t have a right to be in the United States to begin with,” as Secretary of State Marco Rubio recently put it. Rather, we are here at the pleasure of the government and can be exiled if we say the wrong thing. The first word in “permanent resident” seems not to mean what I thought.
This argument is impossible to reconcile with the principle that the Court said last year rests “at the heart of the First Amendment”: that such “viewpoint discrimination is uniquely harmful to a free and democratic society.” (The Court held that attempts by a New York State official to pressure companies to stop doing business with the National Rifle Association because she did not like the group’s gun-rights advocacy would violate the First Amendment.) The administration’s argument also ignores citizens’ right to hear what noncitizens have to say; in 1965, the Court held that the government cannot restrict Americans’ right to read even Chinese propaganda. The administration’s position makes you wonder what noncitizens have to say that is so scary that it cannot be rebutted with more speech. And perhaps most important of all, to accept this argument is to assume—without any good reason that I can see—that this punishment of dissent will stop with foreigners.
My students will surely ask whether the events in recent headlines are consistent with the First Amendment principles that I am teaching them. And I find myself in the surreal position of wondering whether my answer could endanger my own place in this country, for which I am very grateful. Noncitizens are making calculations like these every day now. Recently, the NPR journalist Michel Martin interviewed Deputy Secretary of Homeland Security Troy Edgar about the deportation of Khalil, the former Columbia student. “This is a person that came in under a visa,” Edgar said. “And again, the secretary of state at any point can take a look and evaluate that visa and decide if they want to revoke it.” Edgar repeatedly refused to answer Martin’s simple question: “Is any criticism of the United States government a deportable offense?”
So of course your noncitizen neighbors are hesitating before they speak their minds, wondering whether doing so is worth the risk of deportation, detention, or problems at the border. And the chilling effect does not stop with noncitizens: As many powerful institutions in American society seem to be acceding to Trump’s demands, of course those less powerful will stay silent. Prominently punish enough critics, and there will cease to be critics to punish. This is exactly the world that the First Amendment promised to prevent.
One of the sad ironies of the current moment is that the Trump administration also claims that it is waging a war against censorship. It is trying to argue that deporting people for expressing views the administration dislikes “is not about free speech.” But as I work out what to say in the classroom this week, I know that isn’t true.
It is a blessing for this troubled country that the semiquincentennial of its struggle for independence is upon it. Indeed, some notable anniversaries have already slipped by: In September 1774, delegates from Suffolk County, Massachusetts, approved a set of resolves rejecting Parliament’s authority, which were then endorsed by the First Continental Congress. In November of that year, the Provincial Congress of Massachusetts authorized the enlistment of 12,000 troops. Others lie just ahead: In a month, Americans will observe the 250th anniversary of the battles of Lexington and Concord.
The semiquincentennial offers not just a diversion from current politics or an opportunity to reassert American unity at a time of disharmony, but also a moment to reflect on the character of the men and women who made the United States out of a collection of fractious colonies. That thought occurred to me recently as I attended my final meeting of the Board of Trustees of Fort Ticonderoga, of which I have been part for nearly a decade.
Fort Ti, for those who do not know it, sits on the spit of land between Lake George and Lake Champlain in upstate New York. The small fort is a gem, surrounded by mountains, lovingly restored and preserved as a private institution. Its leadership has grown the museum, which now holds the finest collection of 18th-century militaria in the United States, if not the world. Tens of thousands visit every year.
Built by the French in 1755 as a base of operations against the British colonies, Fort Ticonderoga witnessed sieges, skirmishes, raids, and ambushes, first in the Seven Years’ War and then in the American war for independence. Since then, presidents have visited repeatedly. Writers too: Nathaniel Hawthorne wrote a famous essay about his visits there with a recently graduated, brilliant young engineer, who may have been none other than Robert E. Lee: “The young West Pointer, with his lectures on ravelins, counterscarps, angles, and covered ways, made it an affair of brick and mortar and hewn stone, arranged on certain regular principles, having a good deal to do with mathematics but nothing at all with poetry.”
My favorite artifact in the museum is a modest thing—a knapsack that belonged to a soldier named Benjamin Warner. He attached a note to it:
This Napsack I caryd Through the War of the Revolution to achieve the American Independence. I Transmit it to my olest sone Benjamin Warner Jr. with directions to keep it and transmit it to his oldest sone and so on to the latest posterity and whilst one shred of it shall remain never surrender you libertys to a foren envador or an aspiring demegog. Benjamin Warner Ticonderoga March 27, 1837.
Warner’s orthography may have been uncertain, but his values were not, and I often think of that warning—about foreign invaders, yes, but also aspiring demagogues.
Plenty of people kept their heads down during the Revolution. John Adams famously said that he thought a third of Americans at the time were in favor, a third opposed, and a third neutral. Those percentages may be off: That middle group—hoping, like most people, simply to get on—may have been larger. And then there were those who had second thoughts—Benedict Arnold most notably, but many others as well, from statesmen such as Joseph Galloway to more ordinary souls caught in the middle.
But the tone was set by those like John Morton, a signer of the Declaration who accepted that “this is putting the Halter about our Necks, & we may as well die by the Sword as be hang’d like Rebels.” In particular, the gentry leadership of the Revolution knew, from the record of how Britain had dealt with rebels in Ireland and Scotland, that they could face loss of their home, their freedom, and possibly their life. When Thomas Jefferson ended the Declaration with the words “we mutually pledge to each other our Lives, our Fortunes and our sacred Honor,” he was not kidding.
Benjamin Warner was not of the gentry, though; he was a mere farmer. He led a long life, from 1757 to 1846. His tombstone, in a cemetery in Crown Point, New York, has a simple epitaph: “A revolutionary soldier & a friend to the Slave.” One may only suppose what that last phrase meant, given that New York was on the Underground Railroad.
Warner was one of those soldiers who served repeatedly from 1775 to 1780, joining one regiment and then another, marching to Quebec, fighting in the Battle of Long Island and in New Jersey. In between campaigns, presumably, he took care of the farm. Beyond that, and his knapsack, we do not know much, other than that he saw his duty, did it, went home, and did it again. There does not seem much flash about him, but he knew what he was fighting for, and what he would willingly fight against.
He has something to teach us. Americans see before them the unedifying spectacle of their representatives being too fearful to convene town halls where they might either be criticized or, worse, be compelled to defend a president who they know is damaging the country every day. We have senators who knowingly confirmed untrustworthy and unqualified individuals to the most important national-security jobs in the country because they feared the wrath of President Donald Trump’s base. We see intellectuals talking about fleeing the country or actually doing so not because they have been persecuted in any way, but because of a foreboding atmosphere. We have formerly great law firms such as Paul Weiss groveling to an administration that has threatened them, and offering up tens of millions of dollars of free services in support of its beliefs rather than standing in defense of the right of unpopular people to be represented in a court of law.
There is a name for this: cowardice. It is not an uncommon failing, to be sure, but so far, at any rate, it seems unaccompanied by shame, although regret may eventually come. Cowardice is, in any event, a quality that one suspects the figures who won us independence would have despised in their descendants, who have had a comparatively easy lot in life. Perhaps the series of 250th anniversaries will cause some of us, at least, to get beyond the historical clichés and think of the farewells to families, the dysentery and smallpox, the brutal killing and maiming on 18th-century battlefields, and the bloody footprints in the snow.
Above all, we should take away from the commemorations before us a celebration less of heroism than of unassuming courage. Now, and for some years to come, we will need a lot less Paul Weiss, and a lot more Benjamin Warner.
In Los Angeles, where I live, you don’t expect to be heckled while driving an electric car to the grocery store. But on a recent afternoon, a couple of men on bikes saw the Tesla logo on the front of my car and shouted, “Fuck you, Tesla guy” as I rolled by with the windows down.
I bought my Tesla Model 3 in 2019, after my wife and I moved from New York to L.A. and needed a car. Not willing to burn gasoline, we got the most practical EV we could afford. Six years later, that car carries a different connotation.
In the aftermath of Elon Musk’s MAGA embrace and his scorched-earth tactics running DOGE, Tesla has become a primary target for protests. On Saturday, demonstrators marched outside all 277 Tesla showrooms and service centers in the United States; Teslas across the country have been vandalized and even destroyed in recent weeks. Even in Los Angeles, where Teslas are as familiar as Fords and not primarily viewed as right-wing totems, this wasn’t the first time I’d been shouted at since the election. Tesla owners who don’t support Musk are playing defense. Some have begun to slap on bumper stickers such as I bought this before Elon went crazy. Others are simply done. Sheryl Crow sold her Tesla and donated the money to NPR; Senator Mark Kelly of Arizona also got rid of his, saying he couldn’t stand to own a car that is a “rolling billboard for a man dismantling our government and hurting people.” Teslas that are eight years old or newer now account for 1.4 percent of all car trade-ins, up from 0.4 percent a year ago, and EV brands such as Lucid and Polestar offer tantalizing deals to Tesla drivers looking for an out. “You’re definitely seeing a lot of people say, ‘You know what? I don’t want to be associated with the trash that’s going on right now around Elon Musk,’” Robby DeGraff, an analyst at AutoPacific, told me. “‘I’m just going to get rid of my car.’”
The Great Tesla Sell-Off is producing a glut of politically tainted pre-owned cars. Used Teslas are now shockingly cheap: One 2021 Model 3 for sale near me, which would have cost more than $50,000 new, is going for less than $20,000. Even if you resent Musk, you should consider buying one. In this moment of DOGE madness, it’s difficult to see a Tesla as anything other than an avatar of Musk. But Elon doesn’t make any money off the Model Y you get secondhand. Strip away the symbolism, and an old Tesla is just a good, affordable car.
No, you shouldn’t buy a new Tesla if you’re enraged at Musk. The boycotters are correct that rejecting these vehicles directly hurts him. It drives down sales numbers, which dents Tesla’s bottom line and saps the company’s stock price. Tesla’s remarkable valuation, buoyed as much by a cult of personality as by the company’s sales figures, has made Musk the world’s richest person: The company was worth three times as much as Toyota in 2024, despite selling only about one-sixth as many vehicles. Even so, collapsing sales, not only in the U.S. but also across Europe and Australia, have wiped billions of dollars off Tesla’s stock.
Buying a pre-owned Tesla might feel just as unseemly. But it’s not. Start with the sustainability question. Anti-Musk liberals would surely agree that more Americans should go electric to cut carbon emissions. Only about one in 10 new cars sold in America is an EV, so it’s likely that a used-Tesla buyer will be replacing an old gas-burner. The switch might be permanent: More than 90 percent of EV owners say they won’t go back to combustion. For the most part, Tesla refugees aren’t retreating to the polluting purr of the V-6; they’re switching to electric cars from other brands, such as Chevy, Lucid, and Kia.
Used Teslas also help solve the main problem with getting Americans to go electric: price. Even with government tax credits, EVs tend to carry a price premium over comparable gas cars or hybrids. Pre-owned EVs, though, are shockingly affordable. Electric cars in general depreciate faster than gas-powered cars for a number of reasons, including fading battery life and used-car buyers’ unfamiliarity with the technology, Brian Moody, a senior staff editor at Kelley Blue Book, told me. All of this is bad for sellers but good for buyers.
Even before the Great Tesla Sell-Off, the bulk of used EVs were Teslas. The math doesn’t lie: Just five years ago, Tesla sold nearly 80 percent of the electric cars in America. Now that virtually every major car company offers EVs, Tesla’s dominance is waning, but most of those non-Tesla EVs have yet to reach their second owners. Used Teslas were already pretty affordable, but now they are getting even cheaper. Moody said the average transaction price for a used Tesla dipped from nearly $32,000 in November to about $30,400 in early March as more flooded the market. Tesla’s resale value is reportedly falling three times as fast as the rest of the used-car market.
Perhaps most important of all: Unless you purchase a used Tesla directly from the company, Musk isn’t getting your money. It’s possible to buy a pre-owned Tesla and avoid his other revenue streams too. Just like every carmaker, Tesla maintains a network of service centers to repair its vehicles, and because so few car mechanics specialize in electric vehicles, paying Tesla to do the work is much easier (and can feel safer) than taking a gamble on your neighborhood repair guy. However, a used car is likely to be past its four-year basic warranty, so you could take your old Model Y to an independent shop without voiding any coverage.
Then there is the question of charging. Tesla’s Supercharger network is admittedly excellent and convenient. But that network no longer serves Tesla owners alone: Over the past couple of years, the rest of the industry has adopted Tesla’s plug standard, and many other brands’ EVs can now visit the company’s fast-charging stations. Musk-hating Rivian owners might still find themselves paying him for kilowatt-hours in a pinch. Still, avoiding Tesla’s chargers is easier than you might think. EV newbies tend to fret about charging, given that plugging in a car is still not as simple or quick as pulling into the nearest Shell. But the anxiety tends to be exaggerated. Something like 80 percent of EV owners primarily charge at home, which provides enough electricity for daily driving. On road trips, drivers can plan ahead to make a point of visiting charging stations that aren’t owned by Tesla.
Of course, a used Tesla may not spare you from getting shouted at while you’re getting groceries. Buying any kind of Tesla in 2025 can practically feel like an invitation to get graffitied, or at least like a tacit endorsement of the brand. But set aside the optics—no simple task—and a pre-owned Tesla is just as climate-virtuous as all the Chevy Bolts and Ford Mustang Mach-Es that aren’t carrying around any MAGA baggage. Refusing an old Model 3 doesn’t hurt Elon or help the planet. But it does stop you from getting a good deal. If you’re still feeling trepidation, consider an apology bumper sticker: I bought it from someone who bought it before Elon was crazy.
“Freedom is a fragile thing, and it’s never more than one generation away from extinction,” Ronald Reagan said in 1967, in his inaugural address as governor of California. Kevin D. Roberts, the president of the Heritage Foundation, approvingly quotes the speech in his foreword to Project 2025, the conservative think tank’s blueprint for the Trump administration. Roberts writes that the plan has four goals for protecting its vision of freedom: restoring the family “as the centerpiece of American life”; dismantling the federal bureaucracy; defending U.S. “sovereignty, borders, and bounty”; and securing “our God-given individual rights to live freely.”
Project 2025 has proved to be a good road map for understanding the first months of Donald Trump’s second term, but most of the focus has been on efforts to dismantle the federal government as we know it. The effort to restore traditional families has been less prominent so far, but it could reshape the everyday lives of all Americans in fundamental ways. Its place atop the list of priorities is no accident—it reflects the most deeply held views of many of the contributors—though the destruction of the administrative state might end up imperiling the Trump team’s ability to actually carry out the changes the authors want.
A focus on heterosexual, married, procreating couples is everywhere in Project 2025. “Families comprised of a married mother, father, and their children are the foundation of a well-ordered nation and healthy society,” writes Roger Severino, the author of a chapter on the Department of Health and Human Services and a former HHS and Justice Department staffer. (The document is structured as a series of chapters on specific departments or agencies, each written by one or a few authors.) He argues that the federal government should bolster organizations that “maintain a biblically based, social-science-reinforced definition of marriage and family,” saying that other forms are less stable. The goal is not only moral; he and other authors see this as a path to financial stability and perhaps even greater prosperity for families.
Project 2025’s authors identify a range of ways to achieve the goal across the executive branch. Changes to rules for 401(k)s and other savings programs would be more generous to married couples. HHS would enlist churches and other faith-based organizations to “provide marriage and parenting guidance for low-income fathers” that would “affirm and teach” based on “a biological and sociological understanding of what it means to be a father—not a gender-neutral parent—from social science, psychology, personal testimonies, etc.” Through educational programs, tax incentives, and other methods, the child-support system “should strengthen marriage as the norm, restore broken homes, and encourage unmarried couples to commit to marriage.” Temporary Assistance for Needy Families, the lead federal welfare program, would track statistics about “marriage, healthy family formation, and delaying sex to prevent pregnancy.”
In this vision, men are breadwinners and women are mothers. “Without women, there are no children, and society cannot continue,” Max Primorac writes in his chapter on USAID, where he served in the first Trump administration. (Primorac calls for ridding the agency of “woke” politics and using it as an instrument of U.S. policy, but not the complete shutdown Trump has attempted.) Jonathan Berry writes that the Department of Labor, where he previously worked, would “commit to honest study of the challenges for women in the world of professional work” and seek to “understand the true causes of earnings gaps between men and women.” (This sounds a lot like research predetermined to reach an outcome backing the traditional family.) The Labor Department would produce monthly data on “the state of the American family and its economic welfare,” and the Education Department would provide student data sorted by family structure. Severino suggests that the government either pay parents (most likely mothers) to offset the cost of caring for children, or pay for in-home care from family members; he opposes universal day care, which many on the right see as encouraging women to work rather than stay home with kids.
All of this fits a very conservative worldview, but in some places common ground emerges that might cut across typical partisan lines. For example, in a convergence of the crunchy left and natalist right, Severino wants doulas to be available to all expectant mothers. Contra Severino, Berry suggests that the Labor Department create incentives for on-site child care at work. He also wants Congress to require employers to let workers accumulate paid time off when working overtime, in place of time-and-a-half pay, and to encourage rest time for workers by mandating time-and-a-half compensation on a Sabbath. (The suggested default would be Sunday, but the rule would allow for alternatives such as a Jewish Sabbath, running from Friday sundown to Saturday sundown.)
Turning these ideas into reality would require substantial engagement from the federal bureaucracy. Yet Trump and Elon Musk have spent the first months of the presidency haphazardly demolishing large swaths of the workforce at just the departments that would be necessary to make these things happen. Trump is attempting to dissolve the Education Department altogether; HHS has offered a buyout to every employee.
The parts of this family-oriented agenda that the Trump administration has already moved to enact are chiefly those that enforce a strictly binary concept of gender, aiming to drive trans and nonbinary people underground; open them up to discrimination at work, at school, and in the rest of their lives; and erase their very existence from the language of the federal government.
“In the past, the word ‘gender’ was a polite alternative to the word ‘sex’ or term ‘biological sex,’” Primorac writes. “The Left has commandeered the term ‘gender,’ which used to mean either ‘male’ or ‘female,’ to include a spectrum of others who are seeking to alter biological and societal sexual norms.”
On his first day in office, Trump signed an executive order that purports to define sex as binary. “It is the policy of the United States to recognize two sexes, male and female. These sexes are not changeable and are grounded in fundamental and incontrovertible reality,” the order states. “Efforts to eradicate the biological reality of sex fundamentally attack women by depriving them of their dignity, safety, and well-being.” The order also dissolved the White House Gender Policy Council, created by former President Joe Biden.
Trump also signed an executive order banning transgender women from women’s sports. The Defense Department says it will not accept transgender recruits for the armed forces, and will begin kicking out transgender service members currently in the military. The Equal Employment Opportunity Commission has moved to drop a discrimination case focused on gender identity, and the Education Department says it will enforce Title IX only to consider “biological sex.”
Right-wing leaders have made attacks on trans people and nontraditional expressions of gender a cornerstone of their politics over the past few years. They have spread disinformation about trans people and panicked over the prospect of children adopting different gender identities or names at school. What is the reason for so much fear? Transgender people make up less than 2 percent of the population, and their presence in society does no evident harm to others. Project 2025’s pro-family orientation helps explain why the right considers them such a threat. A worldview that sees gender roles as strictly delineated and immutable cannot acknowledge the existence of trans people or anything else that contemplates an alternative to a total separation between what it means to be male and what it means to be female.
Trump has not yet made stricter abortion policies a focus in his new term. Though he has boasted about appointing Supreme Court justices who overturned Roe v. Wade, he seems wary of pushing further, for fear of political backlash. Project 2025 has no such qualms. Severino recommends withdrawing FDA approval for abortion drugs, banning their prescription via telehealth, and using the Comstock Act of 1873 to prohibit their mailing. He also recommends a strong federal program of abortion surveillance at the state level. Project 2025 also calls for the return of abstinence-only education and the criminalization of pornography.
With a little imagination, we can glimpse the America that Project 2025 proposes. It is an avowedly Christian nation, but following a very specific, narrow strain of Christianity. In many ways, it resembles the 1950s. While fathers work, mothers stay at home with larger families. At school, students learn old-fashioned values and lessons. Abortion is illegal, vaccines are voluntary, and the state is minimally involved in health care. The government is slow to police racial discrimination in all but its most blatant expressions. Trans and LGBTQ people exist—they always have—but are encouraged to remain closeted. It is a vision that suggests Reagan was right: Freedom really is a fragile thing.
This article has been adapted from David A. Graham’s new book, The Project.
Nothing amplifies a popular trend more than a prominent critic making a noisy case against it. In her 2021 polemic, “The Case Against the Trauma Plot,” the literary critic Parul Sehgal argued that trauma had become a central feature of contemporary literature. In too many recent novels, she observed, characters looked to the buried pain of the past as an explanation for the present; this type of story, she said, “flattens, distorts, reduces character to symptom.” The essay sparked an ongoing debate in the literary community: Has trauma indeed become the dominant plot, and is fiction worse off for it? Or is processing a difficult past on the page still valuable, both for the writer and the reader?
Jamie Hood defiantly sets out to reclaim the trauma plot by doubling down on it, beginning with the title of her debut memoir, Trauma Plot: A Life. Implicit in her project is an acknowledgment that human beings will always have deeply upsetting experiences, and they will always write about them. The only question is how. Hood tries to answer that question through an utterly original recounting of her own past.
Hood is a shrewd critic, and she is informed by the work of authors including Virginia Woolf, Anne Sexton, and Sylvia Plath, whose writings allow her to challenge the idea that there’s something uniquely contemporary about trauma plots (or indeed about the criticism of them). Hood is troubled by Sehgal’s framing of the phenomenon, which seems to “exile” writers “from self-knowledge.” She identifies what she sees as an underlying assumption guiding arguments against this kind of writing—that those who write books about their pain are not producing art: “Like there’s no reason to write about trauma except to make a buck. Like if you talk about having lived through something awful, that’s all you’ve ever talked about or ever will. Like you have no agency inside a story you yourself choose to tell.”
In Trauma Plot, Hood investigates her past and present with startling honesty and curiosity. “I began writing this book in 2016, a year after five men gang raped me and around the time the Access Hollywood tapes were leaked to the public,” she writes. The #MeToo movement demonstrated to Hood that experiences of sexual assault were not “exceptional,” and that they could be spoken about and shared. The accounts she read broke the silence that frequently surrounds rape, and refused to fall into existing narratives of shame or victimhood.
“For most of my life no one I knew talked about rape,” Hood writes, “so there were many years when I thought it happened to every heroine of every Lifetime movie and to me.” These kinds of films, she writes, smooth over the experience of rape into straightforward cause and effect: Sexual assault leads to grieving and then healing, which a brave heroine can achieve by looking for some kind of lesson. Hood searches for new ways to tell her story, forms that depart from familiar scripts.
One place where Hood finds such inspiration is in the myth of the Athenian princess Philomela, from Ovid’s Metamorphoses. While on her way to visit her sister, Philomela is deceived and raped by Tereus, her brother-in-law. When she threatens to reveal what he did, Tereus cuts out her tongue, leaving her mute. But Philomela learns to weave, creating a tapestry that tells the story of her assault. Hood imagines herself as Philomela, finding alternative means of expressing the truth: “I had a need of my own to reckon with the way rape resists testimony or explodes the containers of its own telling, without in turn surrendering to the convention that trauma is, as it were, altogether intelligible. With tongue or without, the story will out.”
Like Philomela, Hood experiments with structure to speak the unspeakable and show the splintering effects of sexual assault (in her case, what she describes as several separate incidents of rape). Trauma Plot is divided into four parts, each of which is written from a different perspective: “She,” “I,” “You,” “We.” In each section, Hood doesn’t just try to understand her own experiences; she wrestles with the limits of language when trying to represent deep personal pain.
Part I, “She,” is an homage to Woolf’s Mrs. Dalloway, following a character Hood calls Jamie H. (in the third person) on the day she and her roommate are planning to throw a party. In Boston in October 2012, Jamie wakes up, commutes to Waltham, teaches classes, meets with an adviser, and, yes, buys the flowers herself, all while she is haunted by what she calls the “Specter”—a leering, disembodied smile that has followed her since that summer. She searches for traces of a “fracturing” June night in her diary; though she knows what happened to her, she is “unable to look on it directly, for it signaled a kind of cognitive eclipse.” Near the end of the section, Jamie finally confronts the event, revealing that a man she calls the Diplomat raped her. Here, the perspective snaps from “she” to “I”; Jamie reflects that she “must yield the mantle of the third person” in order to “face the Diplomat stripped of distance.”
This “I” is the dominant voice of Part II, which begins, “Two months before the bombing of the Boston Marathon I was raped again.” Here, Hood employs the kind of first-person testimony common to the trauma plot. But she tells her story at a slant, intertwining her second rape and its aftermath with an account of the Boston Marathon bombing, a triple murder in Waltham, and her decision to leave Boston for New York City. In February 2013, Hood describes being drugged and assaulted by “the Man in the Gray Room,” who then offers to drive her home the next morning. The entire chapter seems to snag on that detail, as Hood imagines it undermining her account: “That accepting this from him would undercut the veracity of my victimization didn’t matter, because nothing did. The decision was automatic, and marvelously practical. I’d no clue where I was, no money, and could barely walk. I knew already I wouldn’t report, so there’d be no rape kit, no interrogation or lawyers, no judge, no testimony, no jury.”
Yet Hood knows that her book is a testimony. She writes about her memories carefully, always aware that her reader is forming judgments about her credibility. To such people, Hood shows viscerally how the idea of the perfect victim, beyond reproach or doubt, is a fantasy. After all, she still has to live: “In the movies, they make it seem like your whole life stops when you get raped, but I kept arriving at the awful truth that nothing about it would stop, and I still had to wake up each day and do the same stupid, boring shit I did every other day and would have to go on doing until the end.” As she juggles teaching, writing a dissertation, grading papers, and working extra hours as a transcriptionist, Hood dissociates, turning to alcohol, drugs, and calorie restriction.
In the introduction to her book, Hood writes, “I am, I confess, not a theorist of rape, only an archivist of my own.” In the final two sections, she fully embraces this role: Part III, “You,” set in New York in August 2013, revisits her journals from that period, separating Hood the diarist (earnest, hurting, recently arrived in a new city) from Hood the biographer (critical, distant, seasoned). Here she wryly quotes passages from her old writings, puncturing her past fantasies: “It seems to me,” she writes to her earlier self, “you’ve no notion of what you yourself desire.”
Part IV, “We,” is set in the present, as Hood begins therapy. Perhaps the most gripping part of “We” is when Hood assembles a chronology of her “life and trauma” for her therapist, Helen. “What if I pretended that the plot was linear, and of a piece?” she writes. Hood never shows this chronology to Helen, but lays it out—with some redactions—for the reader. This document is powerful to read toward the end of the memoir: It makes clear just how much Trauma Plot resists linear storytelling in order to reflect the disordered, fragmented experience of sexual assault.
Hood doesn’t indicate whether she feels like she has healed from her past, or what it would look like if she had. But Trauma Plot unambiguously demonstrates her growth as a writer. Like Philomela, Hood alchemizes her suffering into something new. In her first book, the essay-poetry hybrid How to Be a Good Girl: A Miscellany, she mentioned the memoir she was trying to write. “I bite the inside of my cheek & say (again) this is not my rape / book this is not my rape book this is not my rape book / every book is my rape book,” she wrote; “my rape book is 300 pages long & / i will never finish writing it.” I was moved to reach the end of Trauma Plot and realize that Hood has finished her “rape book” (or one of them) and written it according to her own rules.
In a session with Helen, Hood talks about her sense of wasted time. “I’m nearly forty and I’ve only just started living,” she reflects. “I want to be OK with this. But it’s hard not to dream of other lives, and maybe that dream is the current that carries me to writing.” This is the parallel plot of the book—Hood’s artistic development alongside her pain. She ends with a burst of hope that calls to mind Molly Bloom’s ecstatic soliloquy at the end of James Joyce’s Ulysses: “I have hope again! I do! I don’t know my desire, yes, and yet I’m filled with it. And I think, yes, of all I still have to write. Everything left to do.” Hood’s engagement with her own trauma plot doesn’t flatten or distort her story; instead, it expands her craft, her ambition, her desire, and her life.
How should we understand miracles? Many people in the near and distant past have believed in them; many still do. I believe in miracles too, in my way, reconciling rationalism and inklings of a preternatural reality by means of “radical amazement.” That’s a core concept of the great modern Jewish philosopher Abraham Joshua Heschel. Miracles, insofar as Heschel would agree with my calling them that—it’s not one of his words—do not defy the natural order. God dwells in earthly things. Me, I find God in what passes for the mundane: my family, Schubert sonatas, the mystery of innate temperament. A corollary miracle is that we have been blessed with a capacity for awe, which allows us “to perceive in the world intimations of the divine, to sense in small things the beginning of infinite significance,” Heschel writes.
Every so often, though, I wonder whether radical amazement demands enough of us. Heschel would never have gone as far as Thomas Jefferson, who simply took a penknife to his New Testament and sliced out all the miracles, because they offended his Enlightenment-era conviction that faith should not contradict reason. His Jesus was a man of moral principles stripped of higher powers. But a faith poor in miracles is an untested faith. At the core of Judaism and Christianity lie divine interventions that rip a hole in the known universe and change the course of history. Jesus would not have become Christ the Savior had he not risen from his tomb. Nor would Jews be Jews had Moses not brought down God’s Torah from Mount Sinai.
Those who wish to engage with religious scriptures are not relieved of the obligation to wrestle with how miracles should be understood. Do we take them literally or symbolically? Are they straightforward reports of events that occurred in the world, perhaps ones that are no longer possible, because God no longer acts in it? Or are they encoded accounts of things that happened on some other, less palpable level, but were no less real for that?
In her book Miracles and Wonder: The Historical Mystery of Jesus, Elaine Pagels asks different questions about New Testament miracles. She is less interested in whether Jesus performed them than in what accounts for their power. Her larger quest is to understand the enduring appeal of Jesus to so many people “as a living presence, even as someone they know intimately.” Pagels, now 82, is a historian of early Christianity who also writes about her own efforts to find an experience of Christianity, a sense of intermittent grace, consonant with her experience of extreme loss: Her first son died at 6 of a rare disease; her husband died in a hiking accident shortly thereafter. She has spent a lifetime thinking about the multiple dimensions of the gospel truth.
Pagels’s The Gnostic Gospels (1979) is a liberal theologian’s cult classic—it has gone through more than 30 printings. Though not her first work of scholarship, it marked the beginning of a long career as a gifted explainer of abstruse ideas. Her overarching ambition has been to restore a lost heritage of theological diversity to the wider world. The Gnostic Gospels reintroduced forgotten writings of repudiated Jesus sects, produced over the course of the first and second centuries, before a welter of competing perceptions of Jesus’s story was reduced to a single dogma, codified in the Apostles’ Creed, and before the New Testament was a fixed canon. Sounding faintly Buddhist to the modern ear, those writings interpreted miracles as symbolic descriptions of real spiritual revelations and transformations, available only to those with access to secret knowledge (gnosis). “Do not suppose that resurrection is an apparition,” one gnostic teacher wrote in his Treatise on Resurrection. “It is something real. Instead, one ought to maintain that the world is an apparition.”
The subtitle of Miracles and Wonder is slightly misleading: The Historical Mystery of Jesus seems to imply that Pagels will revisit the old debate over whether Jesus existed. That he did is settled doctrine, at least among historians. Rather, she takes us back to what biblical scholars call the Sitz im Leben, the “scene of composition,” in an effort to reconstruct where miracle narratives came from and how they evolved. Using the tools of the historian as well as the literary critic, she tries to unearth the writers’ concerns and influences, and she considers miracles from a bluntly instrumentalist perspective: What problems did they solve; what new vistas did solving them open; what religious function did they serve?
Among their other uses, miracles helped the evangelists overcome challenges to the authority of the Christ story. For all his enigmatic teachings and at times mystifying behavior, Jesus the man is not that hard to explain: He was one among many Jewish preachers and healers prophesying apocalypse in a land ravaged by Roman conquest and failed uprisings. But Jesus the man-god was more difficult for outsiders—Roman leaders, Greco-Roman philosophers, other Jews—to accept. They asked a lot of hostile questions. Why worship a Messiah whose mission had apparently failed? Didn’t his ignominious end—crucifixion was Rome’s punishment for renegades and slaves—contradict his claim to be divine? The Romans were incredulous that anyone would glorify a Jew. To the Jewish elite, he was a rube from the countryside.
Mark, the first known writer of a Christian gospel, could have produced a traditional hagiography. Instead, wishing to publicize Jesus’s singular power—to spread the “good news”—he appears to have invented the gospel genre, recasting the Greek biographical novella as a work of evangelical witness; the subsequent chroniclers followed his lead. Writing around the time of the destruction of the Second Temple, in 70 C.E., he gave Jesus’s story cosmic dimensions. Now it was the tale of “God’s spirit contending against Satan, in a world filled with demons,” in Pagels’s words. Mark may have been recording oral stories developed by Jesus’s followers to convey perceptions of real experiences, but Mark, and they, would also have wanted to defend their certainties against the skeptics.
Pagels isn’t trying to shock the faithful. Reading sacred texts as the products of history, rather than the word of God, has been standard practice in biblical scholarship for more than a century. Her book demonstrates that the wissenschaftliche, or “scientific,” approach (the pioneering Bible scholars were German) doesn’t have to be reductive; indeed, critical scrutiny may make new sense of difficult texts and yield new revelations. As Pagels portrays them, the evangelists were men of creative genius, using their defense of Jesus as an occasion to draft the outlines of a new world religion. “What I find most astonishing about the gospel stories,” she writes, “is that Jesus’s followers managed to take what their critics saw as the most damning evidence against their Messiah—his crucifixion—and transform it into evidence of his divine mission.”
In some cases, recontextualizing the old stories gives them an unexpected poignancy. A good example is her analysis of the virgin birth. It yields a less sanctified Mary, but by highlighting darker currents in the text perhaps obscured by tradition, Pagels imbues the young mother with a haunting sadness. We think of the virgin birth as a basic element of Christian faith, yet only two of the four canonical Gospels refer to it: Matthew and Luke. Mark doesn’t mention Jesus’s birth and says little about his family background. When we first encounter Jesus, he’s a full-grown Messiah being baptized in the wilderness. John’s Gospel has a bit more on Jesus’s family, but no birth scene. When we first see Jesus in the Gospel of John, he is already both the Son of God and a man—that is to say, not an infant.
Matthew and Luke, by contrast, not only depict Jesus’s birth but herald it at length. They supply genealogies that stretch back to King David, the founder of Israel’s dynasty, giving Jesus a lineage commensurate with his stature. Matthew stresses royalty, prefacing the birth with heavenly portents; afterward, Magi bear royal gifts to a future king. Luke’s version is more rustic but heightens the dramatic tension between Jesus’s humble background and his divinity. Joseph and Mary are turned away from an inn. Mary gives birth in a barn, and shepherds come to worship the child. Both Gospels feature an Annunciation, in which an angel appears and announces that Mary, a virgin who is engaged to Joseph, is to have a son by God. In Matthew, the angel comes to Joseph, who has already discovered that Mary is with child, and advises him to marry her—he was planning to send her away before she disgraced them both. Luke’s angel goes directly to Mary.
Why did Matthew and Luke add all this material? Among the many possible answers, Pagels focuses on the likelihood that after Jesus’s death, talk began to circulate that he was the illegitimate son of an unwed mother. The second-century Greek philosopher Celsus used the charge to discredit the Gospels. In an anti-Christian polemic citing Jewish sources, he writes, “Is it not true … that you fabricated the story of your birth from a virgin to quiet rumors about the true and unsavory circumstances of your origins?”
That Mark himself seems to have called Jesus’s paternity into question complicates matters. When his Jesus comes home to Nazareth to preach at the local synagogue, his former neighbors mock him for his wild ideas. “Where did this man get all this?” they sneer. “What miracles has he been doing? Isn’t this the carpenter, the son of Mary, the brother of James, Joses, Judas, and Simon?” (The italics on “the son of Mary” are Pagels’s.) Mark’s readers, who knew how Jewish patronymics worked, would have understood what the villagers were throwing in Jesus’s face. They would not have said “son of Mary” if they’d known the name of Jesus’s father—even if his father was dead.
Matthew and Luke excise that “son of Mary” and make Jesus not just legitimate but doubly legitimate. His mother acquires both a husband, Joseph, and a father, God, for her child. Her marriage and Jesus’s divine paternity purge the implied stain of wantonness. And yet disturbing hints of sexuality still run beneath the surface of the evangelists’ Gospels. In Luke’s Annunciation, after the angel Gabriel delivers his message, Mary asks, “How can this be, since I am a virgin?” Gabriel replies, “The Holy Spirit will come upon you, and the Power of the Most High will overshadow you.”
Pagels doesn’t cite this exchange or address the disconcerting aggressiveness of “come upon you” and “overshadow you,” but she does look closely at Mary’s response: “I am the Lord’s slave; so be it.” This is Pagels’s translation; the word she gives as slave, doule, is in this context more often translated as “servant” or “handmaid.” Soon after, Luke has Mary, thrilled about the pregnancy, burst into song. But her first response, Pagels says, sounds more resigned than joyous: “An enslaved woman was required to obey a master’s will, even when that meant bearing his child, as it often did.” At a minimum, “a girl with no sexual experience might be startled and dismayed to hear that she is about to become pregnant, given the potential embarrassment and shame she might suffer.”
Pagels goes so far as to conjecture how Mary got pregnant, a thesis very much based on circumstantial evidence. Around the time of Jesus’s birth, tens of thousands of Roman soldiers marched into Judea to suppress an insurrection, a brutal campaign recorded by the Jewish historian Josephus. As they fanned out through the countryside to hunt down rebels, they kidnapped and raped any women they could find. Pagels asks, “Was Mary, as a young girl from a humble rural family,” one of those women? “We have no way of knowing,” she adds, though she is struck by one coincidence. Unfriendly rabbinic sources from the first few centuries after Jesus’s death cited slanderous gossip claiming that Mary was promiscuous and had a lover who was a soldier named Panthera, and that he was Jesus’s father. Modern scholars have found the gravestone of a soldier with that name, said to have served in Judea until 9 C.E.; Pagels wonders whether he could have been one of those rapists. Thinking of Mary as a victim of sexual assault is horrifying; it feels sacrilegious. But that she gave birth to her son in an age of cataclysmic violence does make his ultimate triumph seem even more miraculous.
An appreciation of context also yields a new reading of the Passion of Christ. This account of Christ’s trial and torture in the days leading up to the crucifixion, which shows the Jews baying for his death, has been thought by some to have contributed to centuries of anti-Semitism. In Pagels’s version, the evangelists are motivated less by sheer hatred of Jews than by the need to solve some difficult theological and political problems. What leads them to demonize the Jewish priests and elders, even as they turn Pontius Pilate, Judea’s Roman governor, into an honorable man who perceives Jesus’s innocence and is loath to sentence him?
That the leader of a notoriously cruel occupying power would have shown such compassion for a militant rebel strains credulity and defies the historical record. Pilate was infamous for his “greed, violence, robbery, assault, frequent executions without trial, and endless savage ferocity,” according to the first-century Jewish philosopher Philo, among many others. “I find no simple answer” to the conundrum of the revisionist Pilate, Pagels writes. But she has her theories. For one thing, by acknowledging Jesus’s innocence, the Pilate of the Gospels safeguards Jesus from the charge that he died a criminal.
A good Pilate is implausible though not impossible—that is to say, not miraculous—but he plays a crucial role in the larger miracle of the crucifixion, the transfiguration of a degrading death into the salvation of all mankind. Another reason for the evangelists to absolve Pilate of blame, according to Pagels, would have been to protect themselves. The Roman authorities persecuted Christians harshly, subjecting them to torture and deaths even more gruesome than crucifixion. To vilify a high Roman official was to invite retribution. As the Christian movement grew more Gentile, the Gospel writers made Pilate more sympathetic and the Jews less so. The writers could not have foreseen that their scapegoating of the Jews would have such lethal consequences, and for so long.
I should stress that the Christian miracle narratives have multiple sources. Most important, they interpret other texts. Sure that Jesus was the Messiah, his followers scoured the Jewish Bible for prophecies that foretold his coming. The virgin birth elaborates on a verse from Isaiah that could be construed as predicting it: A virgin “shall conceive, and bear a son.” (“Virgin” is a famous mistranslation. The Hebrew word is almah, or “young woman.” But Matthew would probably have been reading the Hebrew Bible in Greek, where the word appears as parthenos, “virgin.”) Drawing on existing holy writ was in no way scandalous. Even as Christians moved away from Judaism, the evangelists continued to work within a Jewish scriptural tradition that expected later writers to build on earlier ones. The presence of the old texts in the new ones served as validation. In Matthew and Luke’s view—and in the view of Christians throughout the ages—Isaiah proved them right.
What do biblical miracles do for believers today? In Pagels’s final chapter, she visits Christian communities around the world, many of them poor and subject to political oppression, to explore some of the ways in which the story of Jesus continues to offer comfort and inspiration. In the Philippines, for example, she finds the Bicolanos, Catholics living in remote villages, who worship a syncretistic Jesus inflected with Filipino tradition; they are particularly focused on Easter week, because to them, Jesus represents the promise of a glorious afterlife.
Miracle stories also have applications outside a strictly religious context. They are indispensable fictions, tales to live by. They re-enchant the world. Or so I feel. I read the Bible, Christian as well as Jewish, not for spiritual nourishment—or not for what is generally considered spiritual nourishment—but to be reminded that the universe once held more surprises than it does now and that hoping when all seems hopeless is not unreasonable, at least from the vantage point of eternity. Miracles are useful insofar as we take their poetry seriously. We are talking about encounters with the Almighty. Human language falters in the face of the indescribable, which reaches us only through the figures of speech we are able to understand.
This article appears in the May 2025 print edition with the headline “What to Make of Miracles.”
Leonard Bernstein’s way with orchestras that wouldn’t give him what he wanted was usually imploring, even beseeching. He was disappointed—the musicians were not so much failing him, the conductor, as failing the composer, failing the music. But on one occasion, his disappointment turned to anger. In 1972, he was working with the Vienna Philharmonic on Gustav Mahler’s Fifth Symphony. Mahler had been the head of the Vienna Court Opera and had conducted the Philharmonic from 1897 to 1907.
This was their own music—and they were holding back. Bernstein was rehearsing the stormy first movement of the Fifth Symphony. In his own score of the work, now lodged at the New York Philharmonic Archives, he had written, before the opening movement, “Rage—hostility—sublimation by Mahler and heaven.” And then, “Angry bitter sorrow mixed with sad comforting lullabies—rocking a corpse.” But he was getting neither rage nor consolation from the Vienna Philharmonic.
Sighing and shrugging, irritably flipping pages of the score back and forth, he finally burst out (in German): “You can play the notes, I know that. It’s Mahler that’s missing!” The orchestra had arrived at the anguished climax toward the end of the movement, and the strings—by habit sweet and lustrous—were not playing with the harsh intensity that he wanted. “I’m aware that it’s only a rehearsal. But what are we rehearsing?” It was an implied threat to walk out.
Bernstein later reported hearing grumbling from the ranks. “Scheisse Musik.” Shit music. Scheisse Musik was Jewish music. Mahler was a Bohemian Jew. “They thought it was long and blustery and needlessly complicated and heart-on-sleeve and overemotional,” Bernstein said later in a documentary interview about his relations with the orchestra. The Philharmonic, after banning Mahler during the Nazi period, had played his great, tangled, tormented later symphonies only a few times. The orchestra didn’t know the music; the musicians didn’t love it.
The moment is startling because this was hardly Bernstein’s initial encounter with the esteemed Vienna Philharmonic. He had first conducted the orchestra in 1966, and with enormous success (bouquets were flung; champagne was poured), so his wrath carried a hint of betrayal, as if to say, “We are squandering a lot of hard work.” Turning Mahler into a universal classic—not just a long-winded composer of emotionally extreme symphonies—was part of Bernstein’s mission, part of his understanding of the 20th century, and essential to his identity as an American Jew. In their prejudice against Mahler, which was both racial and musical, the Germans and Austrians at the core of classical tradition had torn out of themselves a vital source of self-knowledge as well as musical glory. Destroying Mahler made it easier for them to become Nazis. Bernstein was determined to restore what they had rejected.
He was proud of America’s musical achievements—proud of the work of the composers Charles Ives and Aaron Copland, and perhaps even prouder of the enduring native talent for popular Broadway entertainment, which, in 1972, was largely a Jewish creation. He had ennobled that tradition himself with the galvanizing West Side Story and the brilliant potpourri that is Candide, an homage to Voltaire’s satire and to European operatic styles, shaped into the greatest American operetta. Ever eager to break down the barriers between classical and popular music, he put elements of jazz into his work. In the ’20s, Europeans had certainly become conscious of American jazz, and Bernstein wanted to enlarge that recognition; he wanted to join America to world culture, even world history.
It turned out that he needed the Vienna Philharmonic, and it needed him too. In fact, after the war, the orchestra needed him desperately. That angry rehearsal was a cultural watershed. Bernstein demanded that Vienna, and Europe in general, acknowledge what both America and Mahler meant to the 20th century—the century that the Europeans had played such a dreadful part in and that the Americans had helped liberate from infamy. An American Jew had become the necessary instrument in the New World’s reforming embrace of the disgraced Old.
The child of Ukrainian immigrants, Bernstein grew up in suburban Boston, an irrepressibly musical little boy who loved listening to the radio and beat out rhythms on the windowsills at home. He didn’t have a piano until he was 10. His father, Sam, notoriously refused to pay for piano lessons, but when he finally relented, Lenny accelerated to full speed, working with the best piano teachers in the Boston area, including the well-known German pianist Heinrich Gebhard. In the summers, he stayed at the family cottage in Sharon, Massachusetts. As a teenager there, Lenny mounted a production of Carmen in which he played the temptress, wearing a red wig and a black mantilla, and a tumultuous Mikado in which he sang the part of Nanki-Poo.
Let me make a comparison with a renowned European musician. In 1908, Herbert von Karajan was born in Salzburg, Mozart’s birthplace. There were at least two pianos at home, and Karajan played through Haydn and Beethoven symphonies with his family. On special evenings, string and woodwind players among the family’s Salzburg friends would assemble at the house for chamber music. When he was 6, Karajan took classes at the Mozarteum, the school that preserved the Austro-German musical legacy. He spent his summers with his family on a stunning mountain lake, the Grundlsee, 60 kilometers east of Salzburg.
The contrast makes an American happy: on the one side, tradition, serious public performance, luxury; on the other, émigré teachers, amateur musicales and family shenanigans, casual summers in the modest countryside. Yet what Boston and its environs had to offer in the 1930s, however scrappy, was enough to bring out Lenny’s talent. Karajan was a prodigy; Bernstein was a genius.
On November 14, 1943, the 25-year-old American conducted the New York Philharmonic without rehearsal; the concert was nationally broadcast on CBS Radio, and Bernstein was famous by the next day. In the following years, he conducted all over the country while working on his own classical compositions, including his Symphony No. 1 (Jeremiah), based on biblical texts. By the time he was 40, in 1958, he had created the Broadway successes On the Town and Wonderful Town, in addition to Candide and West Side Story, as well as some of his enduring classical scores. In that same year, he took over as music director of the New York Philharmonic.
Initially, there was a lot of excitement in the press—the first American at the helm of one of the great orchestras! But the tone soon became hostile, even acrimonious. Audiences loved Bernstein, but his full-bodied manner on the podium—arms, head, hips, shoulders, eyebrows, groin in motion—caused embarrassment and even anger. The critic and composer Virgil Thomson, writing in the New York Herald Tribune, complained of “corybantic choreography” and “the miming of facial expression of uncontrolled emotional states.” In the arts, embarrassment may be the superego of emotion: This liberated Jewish body dismayed not only Thomson but the fastidious descendants of German Jews in New York, especially Harold C. Schonberg, the chief music critic of The New York Times starting in 1960, who gave Bernstein terrible reviews for years. In the eyes of Schonberg and others, Bernstein was hammy, exaggeratedly expressive, undignified: He was Broadway; he was show business; he lacked seriousness. The ecstasies of classical music are supposed to be, well, clean. But here was this lusciously handsome young man, a little overripe, leading orchestras in Haydn and Beethoven.
I went to a lot of Philharmonic concerts in Bernstein’s early days as music director, and I heard some things that were under-rehearsed and overdriven, a bit coarse, without the discipline and mastery that were so extraordinary in his later years. But the playing was always vital, the programs exciting. And one concert, given on April 2, 1961, changed my life. It was a performance of Mahler’s Symphony No. 3, a monster with six movements, 95 minutes of outrageously stentorian swagger and odd, folkish nostalgia, capped by a lengthy adagio marked Langsam. Ruhevoll. Empfunden (“Slowly. Tranquil. Deeply Felt”). Bernstein took the adagio at a very slow tempo indeed, considerably slower than did many subsequent conductors, who, I daresay, would have had trouble holding it together at that speed. But the tempo wasn’t remarkable in itself. What was remarkable was the sustained tension and momentum of the movement and the sense of improvisation within it—the slight hesitations; the phrases explored, caressed; and also the singing tone of the entire orchestra at those impossibly slow speeds, all of it leading to the staggering climax at the end.
Bernstein at 25, rehearsing his Jeremiah symphony with the New York Philharmonic in 1944 (New York Daily News Archive / Getty)
The audience erupted into applause, and I remember thinking (I was 17), Anyone who doesn’t know that this man is a great musician can’t hear a thing—or something like that. (The Mahler Third was recorded within the next few days at the Manhattan Center, on West 34th Street. What I heard then, you can hear now on a Sony CD and a variety of streaming services.) After the concert, I went home shaken. That last movement opened gates of sensation and feeling that I had never experienced before, at least not outside of dreams. I was a very repressed and frightened teenager, and the music granted permission, a kind of encouragement to come out of myself and meet the world. The word awakening sounds banal, but I don’t know how else to describe what happened. Bernstein had that effect on many people. But it took his first engagement with the Vienna Philharmonic to awaken New York’s critics, which became one of the great ironies of American musical taste.
During that first stint in Vienna, in 1966, Bernstein conducted Verdi’s Falstaff (at the Staatsoper, the orchestra’s sister organization, formerly known as the Vienna Court Opera). With the Philharmonic, he also did some Mozart, along with Mahler’s Das Lied von der Erde, the orchestral work for two voices that was in fact part of its repertoire. (It was the convoluted and violent later symphonies—masterpieces, all—that the orchestra resisted.) The ovations in 1966 went on forever, in a startling kind of release that even Bernstein, who certainly enjoyed acclaim, thought was a bit curious. “I’m a sort of Jewish hero who has replaced Karajan,” he wrote to his wife, Felicia. And a couple of weeks later, making a report to his parents:
You never know if the public that is screaming bravo for you might contain someone who 25 years ago might have shot me dead. But it’s better to forgive, and if possible, forget … What they call the “Bernstein wave” that has swept Vienna has produced some strange results; all of a sudden it’s fashionable to be Jewish.
The reference to Karajan was far from casual. The prodigious child of Salzburg had become a dominant figure in European classical music. In 1954, when he and Bernstein were both working at La Scala, they talked late into the night. Lenny wrote to Felicia: “I became real good friends with von Karajan, whom you would (and will) adore. My first Nazi.” (Karajan had joined the party in 1935 and remained in it until the end of the war.)
Bernstein’s way of appropriating ex-Nazis has elements of both seduction and triumph. When he went to Vienna in 1966, he had to deal with the repulsive truth that a man named Helmut Wobisch, a former trumpet player in the Philharmonic, was now the orchestra’s manager. Wobisch had worked for the SS during the war and was likely involved in expelling Jewish members from the orchestra. Bernstein referred to him in public as “my dearest Nazi,” and there are photos of Wobisch happily greeting the maestro at the Vienna airport.
Bernstein made grim jokes, but he wanted to woo these men away from their past, their guilt; he would win them over, asserting not only Jewish talent but Jewish forgiveness. He and Karajan developed a friendly rivalry. On different occasions, when they were working in Vienna and Salzburg at the same time, they took turns upstaging each other in public. One was a perfectionist who gave performances of stunning power that sometimes became smoothed out and even bland through repetition; the other was full of surprises—always discovering things, a sensibility always in the making. For years, they represented two versions of musical culture: the authoritarian essence of the Old World and the democratic essence of the Jewish-immigrant New World.
That an American conductor of any kind was enjoying acclaim in Europe was itself cause for wonder. From Bernstein’s point of view, the odds had always been stacked against him. Some years after that 1966 triumph, he wrote the distinguished Austrian conductor Karl Böhm:
You were born in the lap of Mozart, Wagner and Strauss, with full title to their domain; whereas I was born in the lap of Gershwin and Copland, and my title in the kingdom of European music was, so to speak, that of an adopted son.
But by 1972, the positions of son and elders were reversed, and Bernstein’s tone as he fought the Vienna Philharmonic in that rehearsal of the Mahler Fifth was anything but abashed. Bernstein did not, of course, walk out of the turbulent session. He stayed, and he drove the Vienna Philharmonic hard. An American Jew would make them play this music.
In a 1984 video lecture called “The Little Drummer Boy,” Bernstein insisted that Mahler’s genius depended on combining two laughably incompatible musical strains—the strengths of the Austro-German symphonic line and the awkward and homely sounds of shtetl life recalled from the composer’s youth in Bohemia. The exultant and tragic horn calls in the symphonies and in Das Lied von der Erde—were these not the potent echoes of the shofar summoning the congregation on High Holidays? The banal village tunes that Mahler altered into sinister mock vulgarities—did these not recall the raffish klezmer bands, the wandering musicians who played at shtetl weddings?
The ambiguity, the exaltation and sarcastic self-parody, the gloom alternating with a yearning for simplicity and even for redemption—all of that reflected the split consciousness of Jews who could never belong and turned their revenge upon themselves. In a remark that Bernstein often quoted, Mahler said, “I am thrice homeless. As a native of Bohemia in Austria, as an Austrian among Germans, and as a Jew throughout all the world. Everywhere an intruder, never welcomed.”
Gustav Mahler during his directorship of the Vienna Court Opera (Bettmann / Getty)
Mahler was demanding and short-tempered, and shame—the shame of being a Jew—may have been an element in his volatile disposition; Bernstein felt that it was. Leading orchestras in London, Tel Aviv, and Berlin, as well as in Vienna and New York, he performed the symphonies and song cycles with a violence and tenderness that ended any further talk of shame. By advocating for Mahler so powerfully, Bernstein helped bring the Jewish contribution to Austro-German culture back into the lives of Europeans—and perhaps also a range of emotions, including access to the bitter ironies of self-knowledge that had been eliminated from consciousness during the Nazi period. Mahler died in 1911, but Bernstein believed that Mahler knew; he understood in advance what the 20th century would bring of violence and harrowing guilt. “Marches like a heart attack,” Bernstein wrote in his score of Mahler’s apocalyptic Sixth Symphony. The tangled assertion and self-annihilation, the vaunted hopes and apocalyptic grief—that was our modern truth. It was all there in the music.
In 2018, the Jewish Museum Vienna mounted an exhibit called “Leonard Bernstein: A New Yorker in Vienna.” The accompanying catalog featured the words “Bernstein in Vienna became the medium through which a prosperous democratic German-speaking cultural community could display its newly found post-war liberal tastes.” Yes, exactly. The ovations for Bernstein went on forever in part because Vienna was celebrating its release from infamy. Perhaps only an American Jew—open, friendly, but a representative of a conquering power—could have produced the effect that Bernstein did.
After his initial Vienna triumph in 1966, Bernstein returned to New York, where the embarrassment faded and the condescending reviews petered out. Vienna had taught New York how to listen. The Europeans were enchanted by the expressive fluency that the New York critics had considered vulgar. Everyone but the prigs realized that Bernstein’s gestural bounty was both utterly sincere and very successful at getting what he wanted. He wasn’t out of control; he was asserting control. Karajan, by contrast, worked through the details in rehearsal and then, in performance, stood there with his eyes closed, beating time, thrusting out his aggressive chin and mastering the orchestra with his stick and his left hand. He was fascinating but almost frightening to watch.
Karajan radiated power when he conducted; Bernstein radiated love. Smiling, imploring, flirting, and commanding, he cued every section and almost every solo, and often subdivided the beat for greater articulation. If you were watching him, either in the hall or on television, he pulled you into the structural and dramatic logic of a piece. He was not only narrative in flight; he was an emotional guide to the perplexed. For all his egotism, there was something selfless in his work.
In 1988, when Bernstein and Karajan were both close to death, they had a final talk in Vienna. Karajan, after neglecting Mahler’s music for decades, had taken up the composer in his 60s and eventually produced two glorious recorded performances of the Ninth Symphony. The Austrian could no longer afford to ignore Mahler; he had become too central to concert life, to 20th-century consciousness, and Bernstein had helped produce that shift. They spoke of touring together with the Vienna Philharmonic. I am moved by the thought of the two old men, rivalries and differences forgotten, murmuring to each other in a hotel room and conspiring to make music. Mahler had brought them together.
A few days after struggling with the Vienna Philharmonic over Mahler’s Fifth in 1972, Bernstein performed and filmed the work with the orchestra in Vienna. The musicians are no longer holding back; it’s a very exciting performance (viewable on YouTube and streaming services). Fifteen years later, in 1987, Bernstein and the Vienna Philharmonic returned to the Fifth, taking it on tour. The performance in Frankfurt on September 8, 1987, was recorded live by Deutsche Grammophon and released the following year, and is also available to stream. It is widely considered the greatest recording of the symphony.
But it is not the greatest recording of the symphony. Two days later, on September 10, at the mammoth Royal Albert Hall in London, Bernstein and the orchestra played the work yet again. The BBC recorded the performance for radio broadcast, and though the recording (audio only) has never been commercially released, it has been posted on YouTube.
The symphony, in any performance, is a compound of despair, tenderness, and triumph. But in many performances, much of its detail can seem puzzling or pointless—vigorous or languorous notes spinning between the overwhelming climaxes. Bernstein clarifies and highlights everything, sometimes by slowing the music down so that one can hear and emotionally register such things as the utter forlornness of the funeral march in the first movement, the countermelodies in the strings that are close to heartbreak, the long silences and near silences in which the music struggles into being—struggles against the temptation of nothingness, which for Mahler was very real.
The symphony now makes complete sense as an argument about the unstable nature of life. Toward the end of it, after a passage slowing the music almost to a halt, Mahler marks an abrupt tempo change: accelerando. In Bernstein’s personal score, he writes at this point, “GO.” Just … go. In London, the concluding pages—with the entire orchestra hurtling in a frenzy to the close—release an ovation in the hall that has the same intensity of joy as the music itself.
Mahler’s music is the dramatized projection of a Middle European, Jewish-outsider sensibility into the world. Bernstein carried the thrice-homeless Mahler home, yes, home to the world, where he now lives forever. The conductor may have been frustrated in some of his ambitions (he was never the classical composer he wanted to be), but he blended in his soul what he knew of Jewish sacred texts, Jewish family life and family feeling—blended all of that with the ready forms of the Broadway musical, the classical symphonic tradition, Christian choral music. He took advantage of new ways of reaching audiences—particularly television—without cheapening anything he had to say. He died too young at 72, dissatisfied, full of ideas and projects, a man still being formed; yet throughout his half-century career, he brought the richness of American Jewish sensibility into the minds and emotions of millions of people.