The following contains spoilers for the films The Substance, The Last Showgirl, and Maria.
In the 1990s, Demi Moore became the kind of movie star whose off-screen activities made more headlines than her acting did: She formed one half of a celebrity power couple with the actor Bruce Willis, posed nude while pregnant on the cover of Vanity Fair, and prompted a bidding war between the producers of Striptease and G.I. Jane, resulting in her being crowned the highest-paid actress in Hollywood. Her fame, when contrasted with some of her forgettable films—The Butcher’s Wife, The Scarlet Letter—turned her into an easy punch line. As the New Yorker critic Anthony Lane sneered at the start of his review of the latter: “What is the point of Demi Moore?”
Look at Moore now. Since the writer-director Coralie Fargeat’s The Substance premiered at the Cannes Film Festival last May, Moore, who stars in the movie, has solidified her position as a serious awards contender for the first time in her career. The actor plays Elisabeth Sparkle, an aging celebrity who takes the titular elixir to produce a younger version of herself. What follows is an excessive and unsubtle display of body horror: After Elisabeth’s nubile clone, Sue (Margaret Qualley), bursts out of her spine, she quickly becomes a starlet who antagonizes Elisabeth. Moore is tremendous, imbuing Elisabeth with a haunting vulnerability as she injects herself again and again with a body- and soul-destroying concoction. On Sunday, the 62-year-old won a Golden Globe—her first—for her performance; she delivered the night’s best acceptance speech, eloquently reflecting on how her career has evolved. “Thirty years ago, I had a producer tell me that I was a ‘popcorn actress’ … that I could do movies that were successful, that made a lot of money, but that I couldn’t be acknowledged [for them]—and I bought in,” she said, choking up. “That corroded me over time to the point where I thought a few years ago that maybe this was it, maybe I was complete, maybe I’ve done what I was supposed to do.” Now Moore is experiencing the classic comeback narrative: the Hollywood veteran reminding audiences that they’ve underrated her talent all along.
She’s one of several actors doing so this awards season, and with roles that explore how rapidly the entertainment industry can turn women into has-beens. In the Gia Coppola–directed The Last Showgirl, Pamela Anderson, 57, plays Shelly, a Las Vegas dancer left to confront her feeling of expendability when the revue she’s been in for decades is set to close. Throughout the intimate film, Shelly insists on her value, echoing Anderson’s own trajectory as someone whose work was never taken seriously. Meanwhile, Pablo Larraín’s gorgeously rendered biopic Maria stars the 49-year-old Angelina Jolie as the opera singer Maria Callas in her final days, struggling to repair her voice and maintain her composure. Jolie, like Callas, has endured an especially tricky relationship with the A-list; she’s been a tabloid mainstay in spite of her artistic ventures.
Elisabeth, Shelly, Maria—all are women who can’t resist the spotlight despite its cruelty. The films about them interrogate the true price of their fame, exploring how their chosen field turns youth into an addiction. Films such as All About Eve, Death Becomes Her, and Sunset Boulevard have long proved the endurance of these themes. The Substance, The Last Showgirl, and Maria go further, however, exemplifying how this lifelong pursuit of beauty is also an act of constant self-deception. Fear, not vanity, animates each woman; losing their celebrity means losing their sense of worth. “It’s not about what’s being done to us,” Moore said of The Substance in an interview. “It’s what we do to ourselves.”
The actors who portray these characters have all coincidentally, and conversely, returned to the spotlight by embracing their age. Each has achieved a so-called career renaissance as a result. But such appreciation can be a double-edged sword: Anointing older female performers as “comebacks” concedes to, and maybe even reinforces, the rigid expectations Hollywood has placed on them. Of these three films, The Substance most clearly establishes that tension as something more than just tragic. The effort to retain an ingénue-like appeal, Fargeat’s fable posits, is both irresistible and preposterous.
The Substance almost immediately pushes the idea that the endless quest for beauty produces its own kind of overpowering high: After she emerges from Elisabeth’s back, Sue—housing Elisabeth’s consciousness—begins to examine her body in the mirror. She relishes her appearance, gazing at her face and running her hands over her smooth features; Elisabeth, meanwhile, clings to life, sprawled on the floor with her hair fanned out and her spine split open. Sue then auditions for the television executive who had just fired her older self. Never mind that the network callously discarded Elisabeth once she turned 50: Given the opportunity to be gorgeous and “perfect” once more, Sue heads straight for the gig that she knows cares about little beyond her looks.
Then again, this is the only life Sue knows. Her identity is rooted in Elisabeth’s experiences; Elisabeth believes that her value is her supposed flawlessness—a punishing worldview that neither she nor Sue can escape. The film’s most penetrating terror, then, is rooted not in the way Fargeat makes every mutilation squelchily gross, but in how Elisabeth and Sue sabotage themselves as a result of their insecurities. The pair are supposed to switch consciousnesses every seven days for the drug to work, but when Sue spends more time awake than she should, Elisabeth ages. The sight of her wrinkled skin repels her, and she responds with searing self-hatred, chastising herself by binge-eating. One especially chilling sequence doesn’t involve body horror at all: It just shows Elisabeth readying herself for a date, only to give up as soon as she catches the smallest glimpse of her reflection in a door handle.
The women in The Last Showgirl and Maria similarly cannot move past their fixation on the fame they enjoyed when they were younger. Shelly, the Las Vegas dancer, reaches out to her estranged daughter, only for the relationship to fall apart as Shelly insists on the importance of the revue. Jolie’s ailing Maria finds comfort in a dangerous sedative called Mandrax, which causes hallucinations of a journalist pressing her to discuss her legacy. The more these women attempt to figure out who they are beyond their profession, the more they fall back into old habits.
All three films also suggest that their protagonists find their twisted actions thrilling. Maria hides her pills from her household staff with the glee of a child stashing her Halloween candy. Shelly, unlike Elisabeth, makes it to a date with the revue’s stage manager, Eddie (Dave Bautista). She glams herself up in a slinky silver dress and a full face of makeup; as she sits down, she compliments Eddie, and then pauses. “Do I look nice?” she prompts him, grinning widely when he responds affirmatively. And when Elisabeth goes to pick up more boxes of the substance, she acts as if she’s carrying out a pulse-pounding robbery, darting into alleyways and glancing suspiciously at passersby. Keeping up appearances, in other words, delivers an adrenaline rush that justifies the never-ending chase for perfection and acclaim. “Being an artist is solitary, but if you’re passionate about it,” Shelly insists, “it’s worth it.”
Still, as much as these characters may perpetuate their own pain, the movies aren’t seeking to condemn their choices. Instead, they scrutinize the consequences of a lifetime spent facing society’s insurmountable and fickle pressures. These women don’t seem to consider those who have wronged them to be their antagonists: Eddie is a sympathetic character despite having to close Shelly’s revue, Maria’s critics rarely faze her, and Sue continues to chase the approval of the network executive who fired Elisabeth. Rather, the women’s age and perceived attractiveness pose ever-present threats to their livelihood. The Substance captures this best; the camera leers at Sue and Elisabeth both, closing in on their hyper-sexualized bodies. The costumes are replete with garish hues. The production design transforms Los Angeles into a phantasmagoric nightmare from which Elisabeth cannot be roused—as herself or as Sue. Her only solution is to allow her burdens to consume her. Turning external pressures into brutal obsessions is a metamorphosis as visceral as that of a younger self bursting forth from your back.
In its high-concept outrageousness, The Substance lands on a catharsis that’s missing from The Last Showgirl and Maria. The latter two films end with a mournful—and frustratingly hollow—air of resignation: Shelly is seen performing in one of her last shows after enduring a humiliating audition for a new program, and Maria dies at home after a final hallucination, of an orchestra accompanying her while she sings an aria. The Substance’s conclusion is anything but elegiac, however. Sue, after killing Elisabeth during a violent showdown, takes the substance herself, even though the drug is supposed to work only on its original subject. Out of her spine emerges a creature with too many appendages, body parts in the wrong places, and Elisabeth’s face protruding from her back. Yet she—dubbed “Monstro Elisasue”—does what Sue did when she was “born.” She admires herself in the mirror. She primps and preens. As she gets dressed, she even pokes an earring into a strip of flesh.
Yet as soon as Monstro Elisasue steps onstage, she repulses her audience. They gawk, and then they scream, and then, drenched in the blood that starts spewing from her body, they run. It’s an utterly ludicrous ending—and a liberating one. Only Elisabeth’s face remains as Monstro Elisasue stumbles out onto the streets of Los Angeles and melts into a bloody mess. She leaves with the last laugh, cackling as she pauses over her star on the Walk of Fame. And Moore, in those frames, is transcendent, her expression ecstatic and maniacal and unhinged. What is the point of Demi Moore? Perhaps it’s to reveal how sophomoric such questions were in the first place.
It took my father nearly 70 years to become a social butterfly. After decades of tinkering with Photoshop on a decrepit Macintosh, he upgraded to an iPad and began uploading collages of photos from his nighttime walks around London, first to Flickr and then to Instagram. The likes came rolling in. A photographer from Venezuela applauded his composition. A violinist in Italy struck up a conversation about creativity.
And then, as quickly as he had made his new friends, he lost them. One night in 2020, he had a seizure. Then he began forgetting things that he’d just been told and sleeping most of the day. When he picked up his iPad again, it was incomprehensible to him. A year or so later, he put an electric kettle on the gas stove. Not long after, he was diagnosed with Alzheimer’s.
An estimated 7 million Americans age 65 and older are currently living with Alzheimer’s; by 2050, that number is expected to rise to nearly 13 million. Millions more have another form of dementia or cognitive decline. These diseases can make simple tasks confusing, language hard to understand, and memory fleeting, none of which is conducive to social connection. And because apps and websites constantly update, they pose a particular challenge for patients who cannot learn or remember, which means that people like my father, who rely heavily on social media to stay in touch, may face an even higher barrier to communication.
When my father turned on his iPad again about a year after his seizure, he couldn’t find the Photoshop app because the logo had changed. Instagram, which now had Reels and a shopping tab, was unnavigable. Some of his followers from Instagram and Flickr had moved on to a new app—TikTok—that he had no hope of operating. Whenever we speak, he asks me where his former life has disappeared to: “Where are all my photos?” “Why did you delete your profile?” “I wrote a reply to a message; where has it gone?” Of all the losses caused by Alzheimer’s, the one that seems to have brought him the most angst is that of the digital world he had once mastered, and the abilities to create and connect that it had afforded him.
In online support forums, caretakers of Alzheimer’s and dementia patients describe how their loved ones struggle to navigate the platforms they were once familiar with. One member of the r/dementia Subreddit, who requested not to be identified out of respect for her father’s privacy, told me that, about a decade ago, her father had been an avid emailer and used a site called Friends Reunited to recall the past and reconnect with old acquaintances. Then he received his dementia diagnosis after back-to-back strokes; his PC now sits unused. Amy Evans, a 62-year-old in Sacramento, told me that her father, who passed away in May at the age of 92, started behaving erratically online at the onset of Alzheimer’s. He posted on Facebook that he was looking for a sex partner. Then he began responding to scam emails and ordering, among other things, Xanax from India. Evans eventually installed child-protection software on his computer and gave him a GrandPad to connect with family and friends. But he kept forgetting how to use it. Nasrin Chowdhury, a former public-school teacher’s aide who lives in New York City, once used Facebook to communicate daily with family and friends, but now, after a stroke and subsequent Alzheimer’s diagnosis at 55, she will sit for hours tapping the screen with her finger—even if nothing is there, her daughter Eshita Nusrat told me. “I’ll come home from work, and she’ll say she texted me and I never replied, but then I’ll look at her phone and she tried to type it out in YouTube and post it as a video,” Chowdhury’s other daughter, Salowa Jessica, said. Now Chowdhury takes calls with the aid of her family, but she told me that, because she can’t use social media, she feels she has no control of her own life.
Many patients with dementia and related cognitive disorders lose the ability to communicate, regardless of whether they use technology to do it. It’s a vicious cycle, Joel Salinas, a clinical assistant professor of neurology at NYU Grossman School of Medicine, told me, because social disconnect can, in turn, hasten the cognitive degeneration caused by Alzheimer’s and dementia. Social media, by its very nature, is an especially acute challenge for people with dementia. The online world is a largely visual medium with a complex array of workflows, and dementia commonly causes visual processing to be interrupted or delayed. And unlike face-to-face conversation, landlines, or even flip phones, social media is always evolving. Every few months on a given platform, buttons might be changed, icons reconfigured, or new features released. Tech companies say that such changes make the user experience more seamless, but those with short-term memory loss can find it downright impossible.
On the whole, social-media companies have not yet found good solutions for users with dementia, JoAnne Juett, Meta’s enterprise product manager for accessibility, told me. “I would say that we’re tackling more the loss of vision, the loss of hearing, mobility issues,” she said. Design changes that address such disabilities might help many dementia patients who, thanks to their advanced age, have limited mobility. But to accommodate the unique needs of an aging or cognitively disabled user, Juett believes that AI might be crucial. “If, let’s say, Windows 7 is gone, AI could identify my patterns of use, and adapt Windows 11 for me,” she said. Juett also told me her 97-year-old mother now uses Siri to make calls. It allows her to maintain social ties even when she can’t keep track of where the Phone app lives on her iPhone’s screen.
The idea of a voice assistant that could reconnect my father to his online world is enticing. I wish he had a tool that would allow him to connect in the ways that once gave him joy. Such solutions will become only more necessary: Americans are, on average, getting both older and more reliant on technology to communicate. The oldest Americans, who are most likely to experience cognitive decline, came to social media later in life—and still, nearly half of the population over 65 uses it. Social media is an inextricable part of how younger generations connect. If the particular loneliness of forgetting how to use social media is already becoming apparent, what will happen when an entire generation of power users comes of age?
A slab of uplifted rock larger than Italy sits in the center of the American Southwest. It is called the Colorado Plateau, and it is a beautiful place, higher ground in every sense. What little rain falls onto the plateau has helped to inscribe spectacular canyons into its surface. Ice Age mammoth hunters were likely the first human beings to wander among its layered cliff faces and mesas, where the exposed sedimentary rock comes in every color between peach and vermillion. Native Americans liked what they saw, or so it seems: The plateau has been inhabited ever since, usually by many tribes. They buried their dead in its soil and built homes that blend in with the landscape. In the very heart of the plateau, the Ancestral Pueblo people wedged brick dwellings directly into the banded cliffs.
Some of the best-preserved Ancestral Pueblo ruins are located near two 9,000-foot buttes in southeastern Utah, 75 miles from where the state’s borders form a pair of crosshairs with those of Colorado, New Mexico, and Arizona. The Ancestral Pueblo were not the only Native Americans in the area. Other tribes lived nearby, or often passed through, and many of them describe the buttes as “Bears Ears” in their own languages. Thousands of archaeological sites are scattered across the area, but they have not always been properly cared for. Uranium miners laid siege to the landscape during the early atomic age, and in the decades since, many dwellings and graves have been looted.
In 2015, five federally recognized tribes—the Navajo Nation, the Zuni, the Hopi, the Ute Mountain Ute, and the Ute—joined together to request that President Barack Obama make Bears Ears a national monument. The Bears Ears Inter-Tribal Coalition, as they called themselves, wanted to protect as many cultural sites as possible from further desecration. They asked for nearly 2 million acres centered on the buttes. In 2016, Obama created a monument of roughly two-thirds that size.
The borders of that monument have been shifting ever since. In late 2017, President Donald Trump erased all but roughly 15 percent of the protected land, in the name of reversing federal overreach and restoring local control; and in the years that followed, mining companies staked more than 80 new hard-rock claims within its former borders. The majority were for uranium and vanadium, minerals that are in demand again, now that a new nuclear arms race is on, and tech companies are looking for fresh ways to power the AI revolution.
In 2021, President Joe Biden put the monument’s borders back to where they’d started—and the miners’ claims were put on hold. Now Trump is reportedly planning to shrink Bears Ears once again, possibly during his first week in office.
With every new election, more than 1 million acres have flickered in and out of federal protection. People on both sides of the fight over Bears Ears feel jerked around. In southeastern Utah, the whipsaw of American politics is playing out on the ground, frustrating everyone, and with no end in sight.
Vaughn Hadenfeldt has worked as a backcountry guide in Bears Ears since the 1970s. He specializes in archaeological expeditions. Back when he started, the area was besieged by smash-and-grab looters. They used backhoes to dig up thousand-year-old graves in broad daylight, he told me. Some of these graves are known to contain ceramics covered in geometrical patterns, turquoise jewelry, and macaw-feather sashes sourced from the tropics. Thieves made off with goods like these without even bothering to refill the holes. Later on, after Bears Ears had become a popular Utah stopover for tourists passing through to Monument Valley, the looters had to be more discreet. They started coming in the winter months, Hadenfeldt told me, and refilling the ancient graves that they pillaged. “The majority of the people follow the rules, but it takes so few people who don’t to create lifelong impacts on this type of landscape,” he said.
Hadenfeldt lives in Bluff, Utah, a small town to the southeast of Bears Ears. Its population of 260 includes members of the Navajo Nation, artists, writers, archaeologists, and people who make their living in the gentler outdoor recreation activities. (Think backpacking and rock climbing, not ATVs.) The town’s mayor, Ann Leppanen, told me that, on the whole, her constituents strongly oppose any attempt to shrink the monument. More tourists are coming, and now they aren’t just passing through on the way to Monument Valley. They’re spending a night or two, enjoying oat-milk lattes and the like before heading off to Bears Ears.
But Bluff is a blue pinprick in bright-red southern Utah, where this one town’s affection for the monument is not so widely shared. Bayley Hedglin, the mayor of Monticello, a larger town some 50 miles north, described Bluff to me as a second-home community, a place for “people from outside the area”—code for Californians—or retirees. For her and her constituents, the monument and other public lands that surround Monticello are like a boa constrictor, suffocating their town by forcing it into a tourism economy of low-paying, seasonal jobs. The extra hikers who have descended on the area often need rescuing, she said, straining local emergency-services budgets.
I asked Hedglin which industries she would prefer. “Extraction,” she said. Her father and grandfather were both uranium miners. “San Juan County was built on mining, and at one time, we were very wealthy,” she said. She understood that the monument was created at the behest of a marginalized community, but pointed out that the residents of Monticello, where the median household income is less than $64,000, are marginalized in their own right. I asked what percentage of them support the national monument. “You could probably find 10,” she said. “10 percent?” I asked. “No, 10 people,” she replied.
The election-to-election uncertainty is itself a burden, Hedglin said. “It makes it hard to plan for the future. Even if Trump shrinks the monument again, we can’t make the development plans that we need in Monticello, because we know that there will be another election coming.” Britt Hornsby, a staunchly pro-monument city-council member in Bluff, seemed just as disheartened by what he called the federal government’s “ping-pong approach” to Bears Ears. “We’ve had some folks in town looking to start a guiding business,” he said, “but they have been unable to get special recreation permits with all the back-and-forth.”
The only conventional uranium-processing mill still active in the United States sits just outside the borders of another nearby town, Blanding. Phil Lyman, who, until recently, represented Blanding and much of the surrounding area in Utah’s House of Representatives, has lived there all of his life. Lyman personifies resistance to the monument. He told me that archaeological sites were never looted en masse, as Hadenfeldt had said. This account of the landscape was simply “a lie.” (In 2009, federal agents raided homes in Blanding and elsewhere, recovering some 40,000 potentially stolen artifacts.) While Lyman was serving as the local county commissioner in 2014, two years before Bears Ears was created, he led an illegal ATV ride into a canyon that the Bureau of Land Management had closed in order to protect Ancestral Pueblo cliff dwellings. Some associates of the anti-government militant Ammon Bundy rode along with him. A few were armed.
To avoid violence, assembled federal agents did not make immediate arrests, but Lyman was later convicted, and served 10 days in jail. The stunt earned him a pardon from Trump and a more prominent political profile in Utah. When Biden re-expanded the monument in 2021, Lyman was furious. While he offered general support for the state of Utah’s legal efforts to reverse Biden’s order, he also said that his paramount concern was not these “lesser legal arguments” but “the federal occupation of Utah” itself. Like many people in rural Utah, Lyman sees the monument as yet another government land grab, in a state where more than 60 percent of the land is public. The feds had colluded with environmentalists to designate the monument to shut down industries, in a manner befitting Communists, he told me.
Davina Smith, who sits on the board of the Bears Ears Inter-Tribal Coalition as representative for the Navajo Nation, grew up just a mile outside of Bears Ears. She now lives in Blanding, not far from Lyman. Her father, like Mayor Hedglin’s, was a uranium miner. But Native Americans haven’t always been treated like they belong here, she told me. “People in Utah say that they want local control, but when we tried to deal with the state, we were not viewed as locals.” Indeed, for more than 30 years, San Juan County’s government was specifically designed to keep input from the Navajo to a minimum. Only in 2017 did a federal court strike down a racial-gerrymandering scheme that had kept Navajo voting power confined to one district.
Smith, too, has been tormented by what she called the “never-ending cycle of uncertainty” over the monument. The tribes have just spent three years negotiating a new land-management plan with the Biden administration, and it may be all for naught. “Each new administration comes in with different plans and shifting priorities, and nothing ever feels like it’s moving toward a permanent solution,” Smith said.
The judicial branch of the federal government will have some decisions of its own to make about the monument, and may inject still more reversals. In 2017, the Bears Ears Inter-Tribal Coalition and other groups sued the government over Trump’s original downsizing order, arguing that the president’s power to create national monuments under the Antiquities Act is a ratchet—a power to create, not shrink or destroy. No federal judge had ruled on that legal question by the time of Biden’s re-expansion, and the lawsuit was stayed. If Trump now shrinks the monument again, the lawsuit will likely be reactivated, and new ones likely filed. A subsequent ruling in Trump’s favor would have far-reaching implications if it were upheld by the Supreme Court. It would defang the Antiquities Act, a statute that was written to protect Native American heritage, empowering any president to shrink any of America’s national monuments on a whim. (The Biden administration launched an historic run of monument creation. Project 2025, a policy blueprint co-written by Trump’s former head of BLM, calls for a shrinking spree.) The borders of each one could begin to pulsate with every subsequent presidential handover.
An act of Congress might be the only way to permanently resolve the Bears Ears issue. Even with Republican lawmakers in control, such an outcome may be preferable to the endless flip-flops of executive power, Hillary Hoffmann, a co-director of the Bears Ears Inter-Tribal Coalition, told me. “The tribes have built bipartisan relationships with members of Congress.” They might not get as much land for the monument as they did under Obama or Biden, she said, but perhaps a grand bargain could be struck. A smaller allotment of protected land could be exchanged for the stability that would allow local communities—including monument supporters and opponents alike—to plan for their future.
In the meantime, people in southeastern Utah are waiting to see what Trump actually does. When I asked Smith how the tribes are preparing for the new administration, she was coy. She didn’t want to telegraph the coalition’s next moves. “We are definitely planning,” she told me. “This isn’t our first time.” Everyone in the fight over Bears Ears has to find some way to cope with the uncertainty; for Smith, it’s taking the long view. She invoked the deeper history of the Colorado Plateau. She called back to the Long Walk of the Navajo, a series of 53 forced marches that the U.S. Army used to remove thousands of tribe members from their land in New Mexico and Arizona in the 1860s. “When the cavalry came to round up my people, some of them sought refuge in Bears Ears,” she said. “To this day, I can go there and remember what my ancestors did. I can remember that we come from a great line of resilience.”
A short drive from my home in North Carolina is a small Mexican restaurant, with several tables and four stools at a bar facing the kitchen. On a sweltering afternoon last summer, I walked in with my wife and daughter. The place was empty. But looking closer, I realized that business was booming. The bar was covered with to-go food: nine large brown bags.
As we ate our meal, I watched half a dozen people enter the restaurant without sitting down to eat. Each one pushed open the door, walked to the counter, picked up a bag from the bar, and left. In the delicate choreography between kitchen and customer, not a word was exchanged. The space once reserved for that most garrulous social encounter, the bar hangout, had been reconfigured into a silent depot for customers to grab food to eat at home.
Until the pandemic, the bar was bustling and popular with regulars. “It’s just a few seats, but it was a pretty happening place,” Rae Mosher, the restaurant’s general manager, told me. “I can’t tell you how sad I’ve been about it,” she went on. “I know it hinders communications between customers and staff to have to-go bags taking up the whole bar. But there’s nowhere else for the food to go.” She put up a sign: BAR SEATING CLOSED.
The sign on the bar is a sign of the times for the restaurant business. In the past few decades, the sector has shifted from tables to takeaway, a process that accelerated through the pandemic and continued even as the health emergency abated. In 2023, 74 percent of all restaurant traffic came from “off premises” customers—that is, from takeout and delivery—up from 61 percent before COVID, according to the National Restaurant Association.
The flip side of less dining out is more eating alone. The share of U.S. adults having dinner or drinks with friends on any given night has declined by more than 30 percent in the past 20 years. “There’s an isolationist dynamic that’s taking place in the restaurant business,” the Washington, D.C., restaurateur Steve Salis told me. “I think people feel uncomfortable in the world today. They’ve decided that their home is their sanctuary. It’s not easy to get them to leave.” Even when Americans eat at restaurants, they are much more likely to do so by themselves. According to data gathered by the online reservations platform OpenTable, solo dining has increased by 29 percent in just the past two years. The No. 1 reason is the need for more “me time.”
The evolution of restaurants is retracing the trajectory of another American industry: Hollywood. In the 1930s, video entertainment existed only in theaters, and the typical American went to the movies several times a month. Film was a necessarily collective experience, something enjoyed with friends and in the company of strangers. But technology has turned film into a home delivery system. Today, the typical American adult buys about three movie tickets a year—and watches almost 19 hours of television, the equivalent of roughly eight movies, on a weekly basis. In entertainment, as in dining, modernity has transformed a ritual of togetherness into an experience of homebound reclusion and even solitude.
The privatization of American leisure is one part of a much bigger story. Americans are spending less time with other people than in any other period for which we have trustworthy data, going back to 1965. Between that year and the end of the 20th century, in-person socializing slowly declined. From 2003 to 2023, it plunged by more than 20 percent, according to the American Time Use Survey, an annual study conducted by the Bureau of Labor Statistics. Among unmarried men and people younger than 25, the decline was more than 35 percent. Alone time predictably spiked during the pandemic. But the trend had started long before most people had ever heard of a novel coronavirus and continued after the pandemic was declared over. According to Enghin Atalay, an economist at the Federal Reserve Bank of Philadelphia, Americans spent even more time alone in 2023 than they did in 2021. (He categorized a person as “alone,” as I will throughout this article, if they are “the only person in the room, even if they are on the phone” or in front of a computer.)
Eroding companionship can be seen in numerous odd and depressing facts of American life today. Men who watch television now spend seven hours in front of the TV for every hour they spend hanging out with somebody outside their home. The typical female pet owner spends more time actively engaged with her pet than she spends in face-to-face contact with friends of her own species. Since the early 2000s, the amount of time that Americans say they spend helping or caring for people outside their nuclear family has declined by more than a third.
Self-imposed solitude might just be the most important social fact of the 21st century in America. Perhaps unsurprisingly, many observers have reduced this phenomenon to the topic of loneliness. In 2023, Vivek Murthy, Joe Biden’s surgeon general, published an 81-page warning about America’s “epidemic of loneliness,” claiming that its negative health effects were on par with those of tobacco use and obesity. A growing number of public-health officials seem to regard loneliness as the developed world’s next critical public-health issue. The United Kingdom now has a minister for loneliness. So does Japan.
But solitude and loneliness are not one and the same. “It is actually a very healthy emotional response to feel some loneliness,” the NYU sociologist Eric Klinenberg told me. “That cue is the thing that pushes you off the couch and into face-to-face interaction.” The real problem here, the nature of America’s social crisis, is that most Americans don’t seem to be reacting to the biological cue to spend more time with other people. Their solitude levels are surging while many measures of loneliness are actually flat or dropping. A 2021 study of the widely used UCLA Loneliness Scale concluded that “the frequently used term ‘loneliness epidemic’ seems exaggerated.” Although young people are lonelier than they once were, there is little evidence that loneliness is rising more broadly today. A 2023 Gallup survey found that the share of Americans who said they experienced loneliness “a lot of the day yesterday” declined by roughly one-third from 2021 to 2023, even as alone time, by Atalay’s calculation, rose slightly.
Day to day, hour to hour, we are choosing this way of life—its comforts, its ready entertainments. But convenience can be a curse. Our habits are creating what Atalay has called a “century of solitude.” This is the anti-social century.
Over the past few months, I’ve spoken with psychologists, political scientists, sociologists, and technologists about America’s anti-social streak. Although the particulars of these conversations differed, a theme emerged: The individual preference for solitude, scaled up across society and exercised repeatedly over time, is rewiring America’s civic and psychic identity. And the consequences are far-reaching—for our happiness, our communities, our politics, and even our understanding of reality.
The End of the Social Century
The first half of the 20th century was extraordinarily social. From 1900 to 1960, church membership surged, as did labor-union participation. Marriage rates reached a record high after World War II, and the birth rate enjoyed a famous “boom.” Associations of all sorts thrived, including book clubs and volunteer groups. The New Deal made America’s branch-library system the envy of the world; communities and developers across the country built theaters, music venues, playgrounds, and all kinds of gathering places.
But in the 1970s, the U.S. entered an era of withdrawal, as the political scientist Robert D. Putnam famously documented in his 2000 book, Bowling Alone. Some institutions of togetherness, such as marriage, eroded slowly. Others fell away swiftly. From 1985 to 1994, active involvement in community organizations fell by nearly half. The decline was astonishingly broad, affecting just about every social activity and every demographic group that Putnam tracked.
What happened in the 1970s? Klinenberg, the sociologist, notes a shift in political priorities: The government dramatically slowed its construction of public spaces. “Places that used to anchor community life, like libraries and school gyms and union halls, have become less accessible or shuttered altogether,” he told me. Putnam points, among other things, to new moral values, such as the embrace of unbridled individualism. But he found that two of the most important factors were by then ubiquitous technologies: the automobile and the television set.
Starting in the second half of the century, Americans used their cars to move farther and farther away from one another, enabling the growth of the suburbs and, with it, a retreat into private backyard patios, private pools, a more private life. Once Americans got out of the car, they planted themselves in front of the television. From 1965 to 1995, the typical adult gained six hours a week in leisure time. They could have devoted that time—300 hours a year!—to community service, or pickup basketball, or reading, or knitting, or all four. Instead, they funneled almost all of this extra time into watching more TV.
Television transformed Americans’ interior decorating, our relationships, and our communities. In 1970, just 6 percent of sixth graders had a TV set in their bedroom; in 1999, that proportion had grown to 77 percent. Time diaries in the 1990s showed that husbands and wives spent almost four times as many hours watching TV together as they spent talking to each other in a given week. People who said TV was their “primary form of entertainment” were less likely to engage in practically every social activity that Putnam counted: volunteering, churchgoing, attending dinner parties, picnicking, giving blood, even sending greeting cards. Like a murder in Clue, the death of social connections in America had any number of suspects. But in the end, I believe the likeliest culprit is obvious. It was Mr. Farnsworth, in the living room, with the tube.
Phonebound
If two of the 20th century’s iconic technologies, the automobile and the television, initiated the rise of American aloneness, the 21st century’s most notorious piece of hardware has continued to fuel, and has indeed accelerated, our national anti-social streak. Countless books, articles, and cable-news segments have warned Americans that smartphones can negatively affect mental health and may be especially harmful to adolescents. But the fretful coverage is, if anything, restrained given how greatly these devices have changed our conscious experience. The typical person is awake for about 900 minutes a day. American kids and teenagers spend, on average, about 270 minutes on weekdays and 380 minutes on weekends gazing into their screens, according to the Digital Parenthood Initiative. By this account, screens occupy more than 30 percent of their waking life.
Some of this screen time is social, after a fashion. But sharing videos or texting friends is a pale imitation of face-to-face interaction. More worrisome than what young people do on their phone is what they aren’t doing. Young people are less likely than in previous decades to get their driver’s license, or to go on a date, or to have more than one close friend, or even to hang out with their friends at all. The share of boys and girls who say they meet up with friends almost daily outside school hours has declined by nearly 50 percent since the early 1990s, with the sharpest downturn occurring in the 2010s.
The decline of hanging out can’t be shrugged off as a benign generational change, something akin to a preference for bell-bottoms over skinny jeans. Human childhood—including adolescence—is a uniquely sensitive period in the whole of the animal kingdom, the psychologist Jonathan Haidt writes in The Anxious Generation. Although the human brain grows to 90 percent of its full size by age 5, its neural circuitry takes a long time to mature. Our lengthy childhood might be evolution’s way of scheduling an extended apprenticeship in social learning through play. The best kind of play is physical, outdoors, with other kids, and unsupervised, allowing children to press the limits of their abilities while figuring out how to manage conflict and tolerate pain. But now young people’s attention is funneled into devices that take them out of their body, denying them the physical-world education they need.
Teen anxiety and depression are at near-record highs: The latest government survey of high schoolers, conducted in 2023, found that more than half of teen girls said they felt “persistently sad or hopeless.” These data are alarming, but shouldn’t be surprising. Young rats and monkeys deprived of play come away socially and emotionally impaired. It would be odd if we, the self-named “social animal,” were different.
Socially underdeveloped childhood leads, almost inexorably, to socially stunted adulthood. A popular trend on TikTok involves 20‑somethings celebrating in creative ways when a friend cancels plans, often because they’re too tired or anxious to leave the house. These clips can be goofy and even quite funny. Surely, sympathy is due; we all know the feeling of relief when we claw back free time in an overscheduled week. But the sheer number of videos is a bit unsettling. If anybody should feel lonely and desperate for physical-world contact, you’d think it would be 20-somethings, who are still recovering from years of pandemic cabin fever. But many nights, it seems, members of America’s most isolated generation aren’t trying to leave the house at all. They’re turning on their cameras to advertise to the world the joy of not hanging out.
If young adults feel overwhelmed by the emotional costs of physical-world togetherness—and prone to keeping even close friends at a physical distance—that suggests that phones aren’t just rewiring adolescence; they’re upending the psychology of friendship as well.
In the 1960s, Irwin Altman, a psychologist at the Naval Medical Research Institute, in Bethesda, Maryland, co-developed a friendship formula characterized by increasing intimacy. In the early stages of friendship, people engage in small talk by sharing trivial details. As they develop trust, their conversations deepen to include more private information until disclosure becomes habitual and easy. Altman later added an important wrinkle: Friends require boundaries as much as they require closeness. Time alone to recharge is essential for maintaining healthy relationships.
Phones mean that solitude is more crowded than it used to be, and crowds are more solitary. “Bright lines once separated being alone and being in a crowd,” Nicholas Carr, the author of the new book Superbloom: How Technologies of Connection Tear Us Apart, told me. “Boundaries helped us. You could be present with your friends and reflective in your downtime.” Now our social time is haunted by the possibility that something more interesting is happening somewhere else, and our downtime is contaminated by the streams and posts and texts of dozens of friends, colleagues, frenemies, strangers.
If Carr is right, modern technology’s always-open window to the outside world makes recharging much harder, leaving many people chronically depleted, a walking battery that is always stuck in the red zone. In a healthy world, people who spend lots of time alone would feel that ancient biological cue: I’m alone and sad; I should make some plans. But we live in a sideways world, where easy home entertainment, oversharing online, and stunted social skills spark a strangely popular response: I’m alone, anxious, and exhausted; thank God my plans were canceled.
Homebound
Last year, the Princeton University sociologist Patrick Sharkey was working on a book about how places shape American lives and economic fortunes. He had a feeling that the rise of remote work might have accelerated a longer-term trend: a shift in the amount of time that people spend inside their home. He ran the numbers and discovered “an astounding change” in our daily habits, much more extreme than he would have guessed. In 2022—notably, after the pandemic had abated—adults spent an additional 99 minutes at home on any given day compared with 2003.
This finding formed the basis of a 2024 paper, “Homebound,” in which Sharkey calculated that, compared with 2003, Americans are more likely to take meetings from home, to shop from home, to be entertained at home, to eat at home, and even to worship at home. Practically the entire economy has reoriented itself to allow Americans to stay within their four walls. This phenomenon cannot be reduced to remote work. It is something far more totalizing—something more like “remote life.”
One might ask: Why wouldn’t Americans with means want to spend more time at home? In the past few decades, the typical American home has become bigger, more comfortable, and more entertaining. From 1973 to 2023, the size of the average new single-family house increased by 50 percent, and the share of new single-family houses that have air-conditioning doubled, to 98 percent. Streaming services, video-game consoles, and flatscreen TVs make the living room more diverting than any 20th-century theater or arcade. Yet conveniences can indeed be a curse. By Sharkey’s calculations, activities at home were associated with a “strong reduction” in self-reported happiness.
A homebound life doesn’t have to be a solitary life. In the 1970s, the typical household entertained more than once a month. But from the late 1970s to the late 1990s, the frequency of hosting friends for parties, games, dinners, and so on declined by 45 percent, according to data that Robert Putnam gathered. In the 20 years after Bowling Alone was published, the average amount of time that Americans spent hosting or attending social events declined another 32 percent.
As our homes have become less social, residential architecture has become more anti-social. Clifton Harness is a co-founder of TestFit, a firm that makes software to design layouts for new housing developments. He told me that the cardinal rule of contemporary apartment design is that every room is built to accommodate maximal screen time. “In design meetings with developers and architects, you have to assure everybody that there will be space for a wall-mounted flatscreen television in every room,” he said. “It used to be ‘Let’s make sure our rooms have great light.’ But now, when the question is ‘How do we give the most comfort to the most people?,’ the answer is to feed their screen addiction.” Bobby Fijan, a real-estate developer, said last year that “for the most part, apartments are built for Netflix and chill.” From studying floor plans, he noticed that bedrooms, walk-in closets, and other private spaces are growing. “I think we’re building for aloneness,” Fijan told me.
“Secular Monks”
In 2020, the philosopher and writer Andrew Taggart observed in an essay published in the religious journal First Things that a new flavor of masculinity seemed to be emerging: strong, obsessed with personal optimization, and proudly alone. Men and women alike have been delaying family formation; the median age at first marriage for men recently surpassed 30 for the first time in history. Taggart wrote that the men he knew seemed to be forgoing marriage and fatherhood with gusto. Instead of focusing their 30s and 40s on wedding bands and diapers, they were committed to working on their body, their bank account, and their meditation-sharpened minds. Taggart called these men “secular monks” for their combination of old-fashioned austerity and modern solipsism. “Practitioners submit themselves to ever more rigorous, monitored forms of ascetic self-control,” he wrote, “among them, cold showers, intermittent fasting, data-driven health optimization, and meditation boot camps.”
When I read Taggart’s essay last year, I felt a shock of recognition. In the previous months, I’d been captivated by a particular genre of social media: the viral “morning routine” video. If the protagonist is a man, he is typically handsome and rich. We see him wake up. We see him meditate. We see him write in his journal. We see him exercise, take supplements, take a cold plunge. What is most striking about these videos, however, is the element they typically lack: other people. In these little movies of a life well spent, the protagonists generally wake up alone and stay that way. We usually see no friends, no spouse, no children. These videos are advertisements for a luxurious form of modern monasticism that treats the presence of other people as, at best, an unwelcome distraction and, at worst, an unhealthy indulgence that is ideally avoided—like porn, perhaps, or Pop-Tarts.
Drawing major conclusions about modern masculinity from a handful of TikToks would be unwise. But the solitary man is not just a social-media phenomenon. Men spend more time alone than women, and young men are increasing their alone time faster than any other group, according to the American Time Use Survey.
Where is this alone time coming from? Liana C. Sayer, a sociologist at the University of Maryland, shared with me her analysis of how leisure time in the 21st century has changed for men and women. Sayer divided leisure into two broad categories: “engaged leisure,” which includes socializing, going to concerts, and playing sports; and “sedentary leisure,” which includes watching TV and playing video games. Compared with engaged leisure, which is more likely to be done with other people, sedentary leisure is more commonly done alone.
The most dramatic tendency that Sayer uncovered is that single men without kids—who have the most leisure time—are overwhelmingly likely to spend these hours by themselves. And the time they spend in solo sedentary leisure has increased, since 2003, more than that of any other group Sayer tracked. This is unfortunate because, as Sayer wrote, “well-being is higher among adults who spend larger shares of leisure with others.” Sedentary leisure, by contrast, was “associated with negative physical and mental health.”
Richard V. Reeves, the president of the American Institute for Boys and Men, told me that for men, as for women, something hard to define is lost when we pursue a life of isolationist comforts. He calls it “neededness”—the way we make ourselves essential to our families and community. “I think at some level, we all need to feel like we’re a jigsaw piece that’s going to fit into a jigsaw somewhere,” he said. This neededness can come in several forms: social, economic, or communitarian. Our children and partners can depend on us for care or income. Our colleagues can rely on us to finish a project, or to commiserate about an annoying boss. Our religious congregations and weekend poker parties can count on us to fill a pew or bring the dip.
But building these bridges to community takes energy, and today’s young men do not seem to be constructing these relationships in the same way that they used to. In place of neededness, despair is creeping in. Men who are un- or underemployed are especially vulnerable. Feeling unneeded “is actually, in some cases, literally fatal,” Reeves said. “If you look at the words that men use to describe themselves before they take their own lives, they are worthless and useless.” Since 2001, hundreds of thousands of men have died of drug overdoses, mostly from opioids and synthetics such as fentanyl. “If the level of drug-poisoning deaths had remained flat since 2001, we’d have had 400,000 fewer men die,” Reeves said. These drugs, he emphasized, are defined by their solitary nature: Opioids are not party drugs, but rather the opposite.
This Is Your Politics on Solitude
All of this time alone, at home, on the phone, is not just affecting us as individuals. It’s making society weaker, meaner, and more delusional. Marc J. Dunkelman, an author and a research fellow at Brown University, says that to see how chosen solitude is warping society at large, we must first acknowledge something a little counterintuitive: Today, many of our bonds are actually getting stronger.
Parents are spending more time with their children than they did several decades ago, and many couples and families maintain an unbroken flow of communication. “My wife and I have texted 10 times since we said goodbye today,” Dunkelman told me when I reached him at noon on a weekday. “When my 10-year-old daughter buys a Butterfinger at CVS, I get a phone notification about it.”
At the same time, messaging apps, TikTok streams, and subreddits keep us plugged into the thoughts and opinions of the global crowd that shares our interests. “When I watch a Cincinnati Bengals football game, I’m on a group text with beat reporters to whom I can ask questions, and they’ll respond,” Dunkelman said. “I can follow the live thoughts of football analysts on X.com, so that I’m practically watching the game over their shoulder. I live in Rhode Island, and those are connections that could have never existed 30 years ago.”
Home-based, phone-based culture has arguably solidified our closest and most distant connections, the inner ring of family and best friends (bound by blood and intimacy) and the outer ring of tribe (linked by shared affinities). But it’s wreaking havoc on the middle ring of “familiar but not intimate” relationships with the people who live around us, which Dunkelman calls the village. “These are your neighbors, the people in your town,” he said. We used to know them well; now we don’t.
The middle ring is key to social cohesion, Dunkelman said. Families teach us love, and tribes teach us loyalty. The village teaches us tolerance. Imagine that a local parent disagrees with you about affirmative action at a PTA meeting. Online, you might write him off as a political opponent who deserves your scorn. But in a school gym full of neighbors, you bite your tongue. As the year rolls on, you discover that your daughters are in the same dance class. At pickup, you swap stories about caring for aging relatives. Although your differences don’t disappear, they’re folded into a peaceful coexistence. And when the two of you sign up for a committee to draft a diversity statement for the school, you find that you can accommodate each other’s opposing views. “It’s politically moderating to meet thoughtful people in the real world who disagree with you,” Dunkelman said. But if PTA meetings are still frequently held in person, many other opportunities to meet and understand one’s neighbors are becoming a thing of the past. “An important implication of the death of the middle ring is that if you have no appreciation for why the other side has their narrative, you’ll want your own side to fight them without compromise.”
The village is our best arena for practicing productive disagreement and compromise—in other words, democracy. So it’s no surprise that the erosion of the village has coincided with the emergence of a grotesque style of politics, in which every election feels like an existential quest to vanquish an intramural enemy. For the past five decades, the American National Election Studies surveys have asked Democrats and Republicans to rate the opposing party on a “Feeling Thermometer” that ranges from zero (very cold/unfavorable) to 100 (very warm/favorable). In 2000, just 8 percent of partisans gave the other party a zero. By 2020, that figure had shot up to 40 percent. In a 2021 poll by Generation Lab/Axios, nearly a third of college students who identify as Republican said they wouldn’t even go on a date with a Democrat, and more than two-thirds of Democratic students said the same of members of the GOP.
Donald Trump’s victory in the 2024 presidential election had many causes, including inflation and frustration with Joe Biden’s leadership. But one source of Trump’s success may be that he is an avatar of the all-tribe, no-village style of performative confrontation. He stokes out-group animosity, and speaks to voters who are furiously intolerant of political difference. To cite just a few examples from the campaign, Trump called Democrats “enemies of the democracy” and the news media “enemies of the people,” and promised to “root out” the “radical-left thugs that live like vermin within the confines of our country, that lie and steal and cheat on elections.”
Social disconnection also helps explain progressives’ stubborn inability to understand Trump’s appeal. In the fall, one popular Democratic lawn sign read Harris Walz: Obviously. That sentiment, rejected by a majority of voters, indicates a failure to engage with the world as it really is. Dunkelman emailed me after the election to lament Democratic cluelessness. “How did those of us who live in elite circles not see how Trump was gaining popularity even among our literal neighbors?” he wrote. Too many progressives were mainlining left-wing media in the privacy of their homes, unaware that families down the street were drifting right. Even in the highly progressive borough of Brooklyn, New York, three in 10 voters chose Trump. If progressives still consider MAGA an alien movement, it is in part because they have made themselves strangers in their own land.
Practicing politics alone, on the internet, rather than in community isn’t only making us more likely to demonize and alienate our opponents, though that would be bad enough. It may also be encouraging deep nihilism. In 2018, a group of researchers led by Michael Bang Petersen, a Danish political scientist, began asking Americans to evaluate false rumors about Democratic and Republican politicians, including Trump and Hillary Clinton. “We were expecting a clear pattern of polarization,” Petersen told me, with people on the left sharing conspiracies about the right and vice versa. But some participants seemed drawn to any conspiracy theory so long as it was intended to destroy the established order. Members of this cohort commonly harbored racial or economic grievances. Perhaps more important, Petersen said, they tended to feel socially isolated. These aggravated loners agreed with many dark pronouncements, such as “I need chaos around me” and “When I think about our political and social institutions, I cannot help thinking ‘just let them all burn.’ ” Petersen and his colleagues coined a term to describe this cohort’s motivation: the need for chaos.
Although chaotically inclined individuals score highly in a popular measure for loneliness, they don’t seem to seek the obvious remedy. “What they’re reaching out to get isn’t friendship at all but rather recognition and status,” Petersen said. For many socially isolated men in particular, for whom reality consists primarily of glowing screens in empty rooms, a vote for destruction is a politics of last resort—a way to leave one’s mark on a world where collective progress, or collective support of any kind, feels impossible.
The Introversion Delusion
Let us be fair to solitude, for a moment. As the father of a young child, I know well that a quiet night alone can be a balm. I have spent evenings alone at a bar, watching a baseball game, that felt ecstatically close to heaven. People cope with stress and grief and mundane disappointment in complex ways, and sometimes isolation is the best way to restore inner equilibrium.
But the dosage matters. A night alone away from a crying baby is one thing. A decade or more of chronic social disconnection is something else entirely. And people who spend more time alone, year after year, become meaningfully less happy. In his 2023 paper on the rise of 21st-century solitude, Atalay, at the Philadelphia Fed, calculated that by one measure, sociability means considerably more for happiness than money does: A five-percentage-point increase in alone time was associated with about the same decline in life satisfaction as was a 10 percent lower household income.
Nonetheless, many people keep choosing to spend free time alone, in their homes, away from other people. Perhaps, one might think, they are making the right choice; after all, they must know themselves best. But a consistent finding of modern psychology is that people often don’t know what they want, or what will make them happy. The saying that “predictions are hard, especially about the future” applies with special weight to predictions about our own lives. Time and again, what we expect to bring us peace—a bigger house, a luxury car, a job with twice the pay but half the leisure—only creates more anxiety. And at the top of this pile of things we mistakenly believe we want, there is aloneness.
Several years ago, Nick Epley, a psychologist at the University of Chicago’s Booth School of Business, asked commuter-train passengers to make a prediction: How would they feel if asked to spend the ride talking with a stranger? Most participants predicted that quiet solitude would make for a better commute than having a long chat with someone they didn’t know. Then Epley’s team created an experiment in which some people were asked to keep to themselves, while others were instructed to talk with a stranger (“The longer the conversation, the better,” participants were told). Afterward, people filled out a questionnaire. How did they feel? Despite the broad assumption that the best commute is a silent one, the people instructed to talk with strangers actually reported feeling significantly more positive than those who’d kept to themselves. “A fundamental paradox at the core of human life is that we are highly social and made better in every way by being around people,” Epley said. “And yet over and over, we have opportunities to connect that we don’t take, or even actively reject, and it is a terrible mistake.”
Researchers have repeatedly validated Epley’s discovery. In 2020, the psychologists Seth Margolis and Sonja Lyubomirsky, at UC Riverside, asked people to behave like an extrovert for one week and like an introvert for another. Subjects received several reminders to act “assertive” and “spontaneous” or “quiet” and “reserved” depending on the week’s theme. Participants said they felt more positive emotions at the end of the extroversion week and more negative emotions at the end of the introversion week. Our modern economy, with its home-delivery conveniences, manipulates people into behaving like agoraphobes. But it turns out that we can be manipulated in the opposite direction. And we might be happier for it.
Our “mistaken” preference for solitude could emerge from a misplaced anxiety that other people aren’t that interested in talking with us, or that they would find our company bothersome. “But in reality,” Epley told me, “social interaction is not very uncertain, because of the principle of reciprocity. If you say hello to someone, they’ll typically say hello back to you. If you give somebody a compliment, they’ll typically say thank you.” Many people, it seems, are not social enough for their own good. They too often seek comfort in solitude, when they would actually find joy in connection.
Despite a consumer economy that seems optimized for introverted behavior, we would have happier days, years, and lives if we resisted the undertow of the convenience curse—if we talked with more strangers, belonged to more groups, and left the house for more activities.
The AI Century
The anti-social century has been bad enough: more anxiety and depression; more “need for chaos” in our politics. But I’m sorry to say that our collective detachment could still get worse. Or, to be more precise, weirder.
In May of last year, three employees of OpenAI, the artificial-intelligence company, sat onstage to introduce ChatGPT’s new real-time conversational-speech feature. A research scientist named Mark Chen held up a phone and, smiling, started speaking to it.
“Hey, ChatGPT, I’m Mark. How are you?” Mark said.
“Hello, Mark!” a cheery female voice responded.
“Hey, so I’m onstage right now,” Mark said. “I’m doing a live demo, and frankly I’m feeling a little bit nervous. Can you help me calm my nerves a little bit?”
“Oh, you’re doing a live demo right now?” the voice replied, projecting astonishment with eerie verisimilitude. “That’s awesome! Just take a deep breath and remember: You’re the expert here.”
Mark asked for feedback on his breathing, before panting loudly, like someone who’d just finished a marathon.
“Whoa, slow!” the voice responded. “Mark, you’re not a vacuum cleaner!” Out of frame, the audience laughed. Mark tried breathing audibly again, this time more slowly and deliberately.
“That’s it,” the AI responded. “How do you feel?”
“I feel a lot better,” Mark said. “Thank you so much.”
AI’s ability to speak naturally might seem like an incremental update, as subtle as a camera-lens refinement on a new iPhone. But according to Nick Epley, fluent speech represents a radical advancement in the technology’s ability to encroach on human relationships.
“Once an AI can speak to you, it’ll feel extremely real,” he said, because people process the spoken word more intimately and emotionally than they process text. For a study published in 2020, Epley and Amit Kumar, a psychologist at the University of Texas at Austin, randomly assigned participants to contact an old friend via phone or email. Most people said they preferred to send a written message. But those instructed to talk on the phone reported feeling “a significantly stronger bond” with their friend, and a stronger sense that they’d “really connected,” than those who used email.
Speech is rich with what are known as “paralinguistic cues,” such as emphasis and intonation, which can build sympathy and trust in the minds of listeners. In another study, Epley and the behavioral scientist Juliana Schroeder found that employers and potential recruiters rated candidates as “more competent, thoughtful, and intelligent” when they heard a why-I’m-right-for-this-job pitch than when they read one.
Even now, before AI has mastered fluent speech, millions of people are already forming intimate relationships with machines, according to Jason Fagone, a journalist who is writing a book about the emergence of AI companions. Character.ai, the most popular platform for AI companions, has tens of millions of monthly users, who spend an average of 93 minutes a day chatting with their AI friend. “No one is getting duped into thinking they’re actually talking to humans,” Fagone told me. “People are freely choosing to enter relationships with artificial partners, and they’re getting deeply attached anyway, because of the emotional capabilities of these systems.” One subject in his book is a young man who, after his fiancée’s death, engineers an AI chatbot to resemble his deceased partner. Another is a bisexual mother who supplements her marriage to a man with an AI that identifies as a woman.
If you find the notion of emotional intercourse with an immaterial entity creepy, consider the many friends and family members who exist in your life mainly as words on a screen. Digital communication has already prepared us for AI companionship, Fagone said, by transforming many of our physical-world relationships into a sequence of text chimes and blue bubbles. “I think part of why AI-companion apps have proven so seductive so quickly is that most of our relationships already happen exclusively through the phone,” he said.
Epley sees the exponential growth of AI companions as a real possibility. “You can set them up to never criticize you, never cheat on you, never have a bad day and insult you, and to always be interested in you.” Unlike the most patient spouses, they could tell us that we’re always right. Unlike the world’s best friend, they could instantly respond to our needs without the all-too-human distraction of having to lead their own life.
“The horrifying part, of course, is that learning how to interact with real human beings who can disagree with you and disappoint you” is essential to living in the world, Epley said. I think he’s right. But Epley was born in the 1970s. I was born in the 1980s. People born in the 2010s, or the 2020s, might not agree with us about the irreplaceability of “real human” friends. These generations may discover that what they want most from their relationships is not a set of people, who might challenge them, but rather a set of feelings—sympathy, humor, validation—that can be more reliably drawn out from silicon than from carbon-based life forms. Long before technologists build a superintelligent machine that can do the work of so many Einsteins, they may build an emotionally sophisticated one that can do the work of so many friends.
The Next 15 Minutes
The anti-social century is as much a result of what’s happened to the exterior world of concrete and steel as of the advances inside our phones. The decline of government investment in what Eric Klinenberg calls “social infrastructure”—public spaces that shape our relationship to the world—may have begun in the latter part of the 20th century, but it has continued in the 21st. That has arguably affected nearly everyone, but less advantaged Americans most of all.
“I can’t tell you how many times I’ve gone to poor neighborhoods in big cities, and the community leaders tell me the real crisis for poor teenagers is that there’s just not much for them to do anymore, and nowhere to go,” Klinenberg told me. “I’d like to see the government build social infrastructure for teenagers with the creativity and generosity with which video-game companies build the toys that keep them inside. I’m thinking of athletic fields, and public swimming pools, and libraries with beautiful social areas for young people to hang out together.”
Improved public social infrastructure would not solve all the problems of the anti-social century. But degraded public spaces—and degraded public life—are in some ways the other side of all our investments in video games and phones and bigger, better private space. Just as we needed time to see the invisible emissions of the Industrial Revolution, we are only now coming to grips with the negative externalities of a phonebound and homebound world. The media theorist Marshall McLuhan once said of technology that every augmentation is also an amputation. We chose our digitally enhanced world. We did not realize the significance of what was being amputated.
But we can choose differently. In his 2015 novel, Seveneves, Neal Stephenson coined the term Amistics to describe the practice of carefully selecting which technologies to accept. The word is a reference to the Amish, who generally shun many modern innovations, including cars and television. Although they are sometimes considered strictly anti-modern, many Amish communities have refrigerators and washing machines, and some use solar power. Instead of dismissing all technology, the Amish adopt only those innovations that support their religious and communal values. In his 1998 dissertation on one Amish community, Tay Keong Tan, then a Ph.D. candidate at Harvard, quoted a community member as saying that they didn’t want to adopt TV or radio, because those products “would destroy our visiting practices. We would stay at home with the television or radio rather than meet with other people.”
If the Amish approach to technology is radical in its application, it recognizes something plain and true: Although technology does not have values of its own, its adoption can create values, even in the absence of a coordinated effort. For decades, we’ve adopted whatever technologies removed friction or increased dopamine, embracing what makes life feel easy and good in the moment. But dopamine is a chemical, not a virtue. And what’s easy is not always what’s best for us. We should ask ourselves: What would it mean to select technology based on long-term health rather than instant gratification? And if technology is hurting our community, what can we do to heal it?
A seemingly straightforward prescription is that teenagers should choose to spend less time on their phone, and their parents should choose to invite more friends over for dinner. But in a way, these are collective-action problems. A teenager is more likely to get out of the house if his classmates have already made a habit of hanging out. That teen’s parents are more likely to host if their neighbors have also made a habit of weekly gatherings. There is a word for such deeply etched communal habits: rituals. And one reason, perhaps, that the decline of socializing has synchronized with the decline of religion is that nothing has proved as adept at inscribing ritual into our calendars as faith.
“I have a view that is uncommon among social scientists, which is that moral revolutions are real and they change our culture,” Robert Putnam told me. In the early 20th century, a group of liberal Christians, including the pastor Walter Rauschenbusch, urged other Christians to expand their faith from a narrow concern for personal salvation to a public concern for justice. Their movement, which became known as the Social Gospel, was instrumental in passing major political reforms, such as the abolition of child labor. It also encouraged a more communitarian approach to American life, which manifested in an array of entirely secular congregations that met in union halls and community centers and dining rooms. All of this came out of a particular alchemy of writing and thinking and organizing. No one can say precisely how to change a nation’s moral-emotional atmosphere, but what’s certain is that atmospheres do change. Our smallest actions create norms. Our norms create values. Our values drive behavior. And our behaviors cascade.
The anti-social century is the result of one such cascade, of chosen solitude, accelerated by digital-world progress and physical-world regress. But if one cascade brought us into an anti-social century, another can bring about a social century. New norms are possible; they’re being created all the time. Independent bookstores are booming—the American Booksellers Association has reported more than 50 percent growth since 2009—and in cities such as New York City and Washington, D.C., many of them have become miniature theaters, with regular standing-room-only crowds gathered for author readings. More districts and states are banning smartphones in schools, a national experiment that could, optimistically, improve children’s focus and their physical-world relationships. In the past few years, board-game cafés have flowered across the country, and their business is expected to nearly double by 2030. These cafés buck an 80-year trend. Instead of turning a previously social form of entertainment into a private one, they turn a living-room pastime into a destination activity. As sweeping as the social revolution I’ve described might seem, it’s built from the ground up by institutions and decisions that are profoundly within our control: as humble as a café, as small as a new phone locker at school.
When Epley and his lab asked Chicagoans to overcome their preference for solitude and talk with strangers on a train, the experiment probably didn’t change anyone’s life. All it did was marginally improve the experience of one 15-minute block of time. But life is just a long set of 15-minute blocks, one after another. The way we spend our minutes is the way we spend our decades. “No amount of research that I’ve done has changed my life more than this,” Epley told me. “It’s not that I’m never lonely. It’s that my moment-to-moment experience of life is better, because I’ve learned to take the dead space of life and make friends in it.”
This article appears in the February 2025 print edition with the headline “The Anti-Social Century.”
Five years ago, at the 2020 Golden Globes, the comedian Ricky Gervais issued a scathing critique of celebrity activism. During his opening monologue as the ceremony’s host, Gervais took attendees to task for their apparent hypocrisy: “You say you’re woke, but the companies you work for—I mean, unbelievable,” he said, pointing out how Apple TV+ shows are “made by a company that runs sweatshops in China.” Gervais continued: “So if you do win an award tonight, don’t use it as a platform to make a political speech, right. You’re in no position to lecture the public about anything.”
At the time, the comedian’s admonition was notable for its acidity during what is typically a collegial ceremony. But despite its sour tone, Gervais’s monologue tapped into a real cultural shift: By January 2020, Hollywood’s rallying cries against Donald Trump’s first presidency had lost their headline-making power. In the aftermath of the 2016 election, Meryl Streep’s impassioned acceptance speech at the 2017 Golden Globes registered as an existential defense of art and the people who make it. Three years later, such appeals no longer galvanized the industry—or viewers at home. (Neither, for that matter, did much of the actual art produced in protest.)
Looking back on it now, Gervais’s attempt to dampen awards-show speechifying also served as an early indication of how Hollywood might respond to another Trump term—something that was reaffirmed by last night’s Golden Globes. Eight years after an awards season that saw Streep and several other stars (including Hugh Laurie and Viola Davis) deliver sharp rebukes of the incoming president, celebrities proved less willing to do so again. During yesterday evening’s celebration, presenters and awardees alike largely avoided direct commentary about politics or the result of the presidential election, instead making relatively subdued allusions to “difficult moments” or “tough times.” Ahead of another January 6 anniversary, and with weeks to go before Trump’s second inauguration, the hesitation to speak more pointedly suggests that the industry is less inclined to resist MAGA with the fervor it showed in the mid-2010s.
The cone of relative silence didn’t drop down overnight. In the past several years, Hollywood has wrestled with what constitutes acceptable advocacy. A wave of reactionary voices has decried diversity initiatives and other so-called woke campaigns, drawing scrutiny to such efforts within Hollywood and the corporate world. Many actors and creators are still navigating precarious working conditions, even after the resolution of the dual writers’ and actors’ strikes. And the war in Gaza has sparked division within the industry, prompting some entertainment workers who’ve supported calls for a cease-fire to ask for protections against being blacklisted.
During the recently concluded election cycle, these dynamics shaped the terrain on which celebrities exercised their political speech. And with Trump poised to take office again, the industry is perhaps rattled by the inefficacy of its previous calls to action—or at least lacks a vision of how to meet the political moment through either art or activism. The resulting show last night, in which several well-respected actors spoke vaguely about the importance of storytelling—and of conquering hate—felt like it could have aired in any year or political era.
In her opening set as the night’s host, the comedian Nikki Glaser briefly addressed the crowd’s failure to influence the outcome of the 2024 presidential election: “You’re all so famous, so talented, so powerful. I mean, you could really do anything—except tell the country who to vote for.” Sandwiched in a monologue that saw her skewering familiar awards-night subjects, the comment underlined the trouble with celebrity advocacy in a polarized political climate. Unlike Gervais’s 2020 jab, Glaser’s joke took aim at a perceived disconnect between Hollywood elites and the masses. (Glaser also made one of the evening’s few unambiguous references to a Trump-aligned political figure: “The Bear, The Penguin, Baby Reindeer: These are not just things found in RFK’s freezer,” she joked.)
It’s unclear what, if anything, may bridge that gap. One of the night’s more interesting moments underscored the tension between awards-show glamour and the work of producing challenging art within Hollywood. Back in November, the actor Sebastian Stan said he was unable to take part in Variety’s Actors on Actors series because he’d starred in The Apprentice, a film critical of Trump, whom his industry colleagues were unwilling to discuss. But at the Golden Globes, Stan won Best Actor in a Musical or Comedy for his work on another film, A Different Man.
While accepting the trophy, Stan shouted out both projects. “This was not an easy movie to make. Neither is The Apprentice, the other film that I was lucky to be a part of and that I am proud to be in,” he said. “These are tough subject matters, but these films are real, and they are necessary. We can’t be afraid and look away.” It was the closest anyone came to directly addressing the current moment, during an evening when Hollywood preferred to turn its gaze elsewhere.