Editor’s Note: Is anything ailing, torturing, or nagging at you? Are you beset by existential worries? Every Tuesday, James Parker tackles readers’ questions. Tell him about your lifelong or in-the-moment problems at dearjames@theatlantic.com.
Don’t want to miss a single column? Sign up to get “Dear James” in your inbox.
Dear James,
I’m a 73-year-old woman who has been dating a man of the same age. We get along famously except for one problem: His previous girlfriend still lives in his home, which he left to allow her to continue living there. For more than a year, he has been staying at a friend’s second home, but now it’s time for him to go back to his own house. This means he’ll soon be living with his ex, as he refuses to change the situation. Why? Her financial situation is not good, and he feels guilty. He doesn’t seem to understand why I would have a problem with any of this, as he professes to be in love with me. But I don’t think I can continue this relationship as long as he is living with his old girlfriend. Am I being unreasonable?
Dear Reader,
Well, people come to all sorts of bonkers arrangements to get through this life together, don’t they? Two in the basement, one in the attic; three days in this apartment, four in that; I’ll take the couch, you take the bed, she’ll move to Sweden, and the dog can sleep where he likes. But for the bonkers arrangement to work, all parties need to subscribe to more or less the same version of reality.
Which is not the case here. You and your boyfriend—and I’m painting a nice, possibly completely erroneous, picture of him in my mind: a hater of change, a pleaser of people, a postponer of decisions, slothful, benevolent, a man after my own heart, really—have reached the old Frostian fork, the place where the two roads diverge.
Why can’t we all just get along? he wants to know. You, me, my ex-girlfriend, and the mailman who stops in for tea. What’s so complicated about that?
But to you, it’s madness. His ex-girlfriend? Living with him in his house? Sharing a home with him, a domestic space that still has bits of their old relationship lying around in it like used car parts, a carburetor here and a windshield wiper there? It’s an intolerable situation. And I think you have to trust yourself here. Your boyfriend is acting up. He says he’s in love with you, but he’s not doing a very good job of listening to you—hearing you, as the kids say.
My advice: Kick him around a bit, metaphorically speaking. He may have developed thick, woolly layers of insulation around his brain to protect him from the painfulness and difficulty of life. You must penetrate them, batter or needle your way through them. Help him understand how silly he’s being. He’ll get it, eventually, or he won’t. And if he doesn’t, you’ll know what to do.
Dancing from one difficulty to another,
James
By submitting a letter, you are agreeing to let The Atlantic use it in part or in full, and we may edit it for length and/or clarity.
The man who murdered at least 15 people with his truck on Bourbon Street, in New Orleans, last night was flying the black banner of the Islamic State from his truck, according to the FBI. Police shot 42-year-old Shamsud-Din Bahar Jabbar dead at the scene. So far little else is known about the suspect, but given that ISIS flags are not a standard option on a Ford F-150, it is reasonable to presume that the driver—a U.S. Army veteran—committed mass murder as an homage to the Islamic State.
President-Elect Donald Trump famously lamented that Mexico was “not sending their best” to the United States. After contempt for the New Orleans killer, and sadness for the dead and 35 wounded, my reaction to this attack is relief that for the past decade the Islamic State has been sending its best, and its best remain verminous incompetents whose most ingenious plots involve driving trucks into crowds. Jabbar is said to have brought along explosives, and to have set his Airbnb on fire, but either his bombs didn’t work or he did not live long enough to set them off. In 2014, the Islamic State regarded its string of early victories as a sign that God favored it. Now I wonder whether it has noticed that God has seemingly capped the IQs of its operatives, and taken the hint about what that might say about its continued divine favor.
In 2014, the group’s spokesman, Abu Muhammad al-Adnani, kicked off its campaign of terror in Europe by urging followers to improvise weapons. “If you are not able to find an IED or a bullet,” he said, “smash the American or European’s head with a rock, or slaughter him with a knife, or run him over with your car.” Some horrific attacks ensued, including a truck-ramming in 2016 that killed 86. But consider the number of Islamic State supporters of European origin—probably in the tens of thousands—and the easy availability of rocks, knives, and cars. Few have taken Adnani up on his offer, and those who have tend to be (if the jihadists will pardon the expression) ham-handed.
I am not always so optimistic. Every successful attack is tragic. Plots by patient, methodical, and capable people do occur from time to time, and when they are disrupted in their early stages, the details are sobering reminders of what could happen. Indeed, just this week, the FBI detailed what it described as the largest homemade-explosives cache it had ever discovered, in the hands of a 36-year-old man in Smithfield, Virginia. Those hands had allegedly been mangled by a homemade bomb a few years ago (“several fingers were missing”), and a neighbor alerted authorities that the suspect was “stockpiling weapons and homemade ammunition.” It is difficult to overstate how much weaponry one must stockpile before Virginians begin to wonder whether the stockpile might be a tad excessive and merit the attention of the Bureau of Alcohol, Tobacco, and Firearms (all three of which have near-sacramental status in the rural parts of the commonwealth). The suspect is not an ISIS supporter, but he reportedly shows signs of other extremist views. Prosecutors said they found a bag of pipe bombs, and the bag had a #NoLivesMatter patch. The suspect was released on Monday into his mother’s custody.
The nightmare scenario is, and has always been, the combination of violent motivation with the patience and planning that would allow that motivation to find maximally lethal expression. Idiots can still kill, and murders perpetrated by them are as devastating as any other. The mass murder early this morning, long after the Islamic State lost its territory and ceased to be a source of daily terror, shows that that threat is limited in magnitude but nonetheless eternal. Senator Josh Hawley, a Republican from Missouri, called for a hearing on the attack and alleged that “the Biden Administration has made [Americans] less safe,” through unspecified negligence. But no public information so far suggests that authorities could have stopped the killer from renting a truck and driving it into a crowd. The most effective antidote to attacks like this is probably just to do what the United States did late in Barack Obama’s second term and throughout Trump’s first: to dismantle the Islamic State and relegate it to obscurity, where it has less power to inspire random people to act in its name. That strategy has the added effect of countering more sophisticated attacks, by leaving attackers with fewer lairs and havens from which to stage them.
There is no law of nature that says terrorists must always be bad at terrorism. Many terrorists and mass murderers have plotted very effectively, and racked up the body counts to prove it. Right now, authorities are investigating whether the New Orleans killer had accomplices. If he did have help, then his accomplices were equally incompetent. The correct response, in the long term, is to prepare for the day when competence and fervor intersect. Mercifully, that day was not today.
In the shorter term, I agree with my colleague Juliette Kayyem: The correct response to this crime is to proceed with life, play football, and let New Orleans begin to heal itself. No American city feels as alive as New Orleans, as incapable of being deterred from partying. The city has its homegrown poets, but I am partial to the Englishman Philip Larkin’s lines about New Orleans: “On me your voice falls as they say love should / Like an enormous yes … the natural noise of good, / Scattering long-haired grief and scored pity.”
In 1985, when I was 9 years old, I watched the first episode of the new Twilight Zone, a reboot of the classic early-1960s TV series. People rarely talk about the ’80s version, which ran for just three seasons. But there must be other viewers around my age who have never forgotten “A Little Peace and Quiet,” the second story in that debut episode. It’s about a woman who discovers a magic pendant in the shape of a sundial that gives her the power to stop time. Whenever she says “Shut up,” everyone and everything in the world except her comes to a halt, resuming only when she says, “Start talking.”
At first she uses the device to give herself a break from her irritating husband and chattering children. But at the end of the episode, she hears an announcement that the Soviets have launched a nuclear attack on the United States, and she deploys the magic phrase to arrest time. In the last scene, she walks out of her house and looks up to see ICBMs frozen in midair, leaving her with an impossible choice: to unfreeze time and be destroyed along with all of humanity, or to spend eternity as the sole living person in the world.
I remember that TV image better than most of the things I saw in real life as a child. It was the perfect symbol of an understanding of history that Generation X couldn’t help but absorb—if not from The Twilight Zone, then from movies such as The Day After and WarGames. The nuclear-arms race meant that humanity’s destruction was imminent, even though no one actually wanted it, because we were collectively too stupid and frivolous to prevent it. We were terrified of the future, like the woman in the TV show—yet we also secretly longed for the arrival of the catastrophe because only it could release us from the anxiety of waiting.
Four years after that broadcast, the Cold War ended in an American victory with the fall of the Berlin Wall. In an influential essay published in the euphoric year of 1989, the political scientist Francis Fukuyama proclaimed “the end of history.” But it felt more like the resumption of history. Throughout four decades of nuclear brinkmanship, humanity had been living in fearful expectation, like Brutus in Julius Caesar: “Between the acting of a dreadful thing / And the first motion, all the interim is / Like a phantasma or a hideous dream.” Now the doomsday weapons had been, if not abolished, at least holstered, and the passage of time could mean progress, rather than a countdown to annihilation.
Somehow, things haven’t turned out that way. Young people today are no less obsessed with climate disasters than Gen X was with nuclear war. Where we had nightmares about missiles, theirs feature mass extinctions and climate refugees, wildfires and water wars. And that’s just the beginning. As Dorian Lynskey, a British journalist and critic, writes in Everything Must Go: The Stories We Tell About the End of the World, wherever you look in contemporary pop culture, humanity is getting wiped out—if not by pollution and extreme weather (as in Wall-E and The Day After Tomorrow), then by a meteor or comet (Armageddon, Deep Impact), a virus (Station Eleven, The Walking Dead), or sudden, inexplicable infertility (Children of Men).
These are more than just Hollywood tropes. Lynskey cites surveys showing that 56 percent of people ages 16 to 25 agree with the statement “Humanity is doomed,” while nearly a third of Americans expect an apocalyptic event to take place in their lifetime. Logically enough, people who believe that the world is about to end are much less inclined to bring children into it. According to a 2024 Pew Research Center survey of unmarried Americans ages 18 to 34, 69 percent say they want to get married one day, but only 51 percent say they want to have children. Around the world, birth rates are falling rapidly; one South Korean online retailer reported that more strollers are now being sold for dogs than for babies in that country. Perhaps this is how the world will end—“not with a bang but a whimper,” as T. S. Eliot wrote in his 1925 poem, “The Hollow Men.”
But the fact that Eliot was already fantasizing about the end of the world a century ago suggests that the dread of extinction has always been with us; only the mechanism changes. Thirty years before “The Hollow Men,” H. G. Wells’s 1895 novel The Time Machine imagined the ultimate extinction of life on Earth, as the universe settles into entropy and heat death. Nearly 70 years before that, Mary Shelley’s novel The Last Man imagined the destruction of the human race in an epidemic. And even then, the subject was considered old hat. One reason The Last Man failed to make the same impression as Shelley’s Frankenstein, Lynskey shows, is that two other works titled “The Last Man” were published in Britain the same year, as well as a poem called “The Death of the World.”
In these modern fables, human extinction is imagined in scientific terms, as the result of natural causes. But the fears they express are much older than science. The term apocalypse comes from an ancient Greek word meaning “unveiling,” and it was used in a literary sense to describe biblical books such as Daniel and Revelation, which offer obscure but highly dramatic predictions about the end of days. “A river of fire streamed forth before Him; / Thousands upon thousands served Him; / Myriads upon myriads attended Him; / The court sat and the books were opened,” Daniel says about the Day of Judgment.
Everything Must Go takes note of these early predecessors, but Lynskey mostly focuses on books and movies produced in the U.S. and the U.K. in the past 200 years, after the Christian apocalypse had begun “to lose its monopoly over the concept of the end of the world.” He divides this material into sections to show how the favorite methods of annihilation have evolved over time, in tandem with scientific progress.
In the mid-19th century, as astronomers were starting to understand the true nature of comets and meteors, writers began to imagine what might happen if one of these celestial wanderers collided with our planet. Edgar Allan Poe’s short story “The Conversation of Eiros and Charmion,” published in 1839, was perhaps the first to evoke the initial moment of impact:
For a moment there was a wild lurid light alone, visiting and penetrating all things … then, there came a great pervading sound, as if from the very mouth of HIM; while the whole circumambient mass of ether in which we existed, burst at once into a species of intense flame.
This kind of cataclysmic fantasy hasn’t disappeared—in the 2021 movie Don’t Look Up, astronomers discover a new comet months before it’s due to strike Earth. But whereas 19th-century stories emphasized humanity’s helplessness in the face of external threats, the technological advances of the 20th century created a new fear: that we would destroy ourselves, either on purpose or accidentally.
Hiroshima demonstrated that a global nuclear war could not be won. Radioactive fallout and nuclear winter, in which dust and smoke blot out the sun, would mean the extinction of most life on Earth. This scenario could be played for eerie tragedy: In the 1959 film On the Beach, Australians go about their ordinary lives while waiting for the fallout of a nuclear war to arrive and complete humanity’s erasure. Stanley Kubrick’s Dr. Strangelove (1964) staged the end of the world as an absurdist comedy, the accidental result of ideological mania and sheer idiocy. The film closes with the terrifying yet preposterous image of an American airman riding a falling bomb like a rodeo steer.
Technology didn’t just enable us to annihilate ourselves. More unsettling, it raised the possibility that we would make ourselves obsolete. Today this fear is often expressed in terms of AI, but it first surfaced more than a century ago in the 1920 play R.U.R., by the Czech playwright Karel Čapek. Čapek invented both the word robot (adapted from a Czech word meaning “forced labor”) and the first robot uprising; at the end of the play, only one human is left on Earth, an engineer spared by the robots to help them reproduce. Isaac Asimov’s classic collection of sci-fi stories, I, Robot (1950), envisioned a more benevolent scenario, in which robots become so intelligent so quickly that they simply take over the management of the world, turning humanity into their wards—whether we like it or not.
All of these stories can be seen as variations on the theme of “The Sorcerer’s Apprentice,” a tale told in ballad form by Goethe in 1797, at the dawn of the age of technology. Because our tools have become too powerful for us to manage, the future never unfolds the way we expect it to; our utopias always lurch into dystopia.
This element of self-accusation is what makes an apocalypse story distinctively modern. When human beings imagined that the world would end as a result of a divine decree or a celestial collision, they might rend their garments and tear their hair, but they could do nothing about it. When we imagine the end of the world in a nuclear war or an AI takeover, we are not just the victims but also the culprits. Like Charlton Heston at the end of Planet of the Apes, we have no one to curse but ourselves: “You maniacs! You blew it up! Ah, damn you! God damn you all to hell!”
In A Century of Tomorrows: How Imagining the Future Shapes the Present, the historian and museum curator Glenn Adamson surveys a different genre of stories about the future—the ones told by 20th-century “futurologists.” Where Lynskey’s writers and filmmakers envision the future as an inevitable disaster, these modern seers believed that we could control our destiny—if we only have the good sense to follow their advice.
Adamson applies the term futurologist to a wide range of figures in business, science, politics, and the arts, most of whom would not have described themselves that way. For the designer Norman Bel Geddes, shaping the future meant sketching “cars, buses, and trains that swelled dramatically toward their front ends, as if they could scarcely wait to get where they were going.” For the feminist Shulamith Firestone, it meant calling for the abolition of the nuclear family. We also encounter Marcus Garvey, who led a Black nationalist movement in the early 20th century, and Stewart Brand, the author of the hippie bible The Whole Earth Catalog. The assortment of visionaries is odd, but Adamson accords them all a place in his book because they expanded America’s sense of the possible, its expectations about what the future could bring.
The villains of Adamson’s book, by contrast, are the technocrats of futurism—think-tank experts, business executives, and government officials who believed that they could dictate the future by collecting enough data and applying the right theories. A classic example is Robert McNamara, whose career serves as a parable of “the rise and fall of technocratic futurology’s unchallenged dominance” in Cold War America.
McNamara became a Harvard Business School professor in the 1940s, and demonstrated a talent “for planning, for forecasting, for quantitatively analyzing, for segregating the trouble spots and identifying the upcoming trends, for abstracting and projecting and predicting.” During World War II, he was recruited by the Air Force to study production methods and eliminate inefficiencies. After the war, he did the same at Ford Motor Company, rising to become its head.
When John F. Kennedy named McNamara as his secretary of defense, the choice seemed like a perfect fit. Who better than a master planner to plan America’s Cold War victory? Instead, McNamara spent the next seven years presiding over the ever-deepening catastrophe in Vietnam, where America’s strategic failure was camouflaged by framing the situation, Adamson writes, as “a series of data points, treating ‘kill ratio’ and ‘body count’ as predictive measures in the war’s progress.”
The conclusion that Adamson draws from his illuminating forays into cultural history is that any claim to be able to control the future is an illusion; the more scientific it sounds, the more dangerous it can be. Yet he ends up admitting to “a certain admiration” for futurologists, despite their mistakes, because “they help us feel the future, the thrilling, frightening, awesome responsibility that it is.”
The future can be our responsibility only if we have the power—and the will—to change it. Otherwise it becomes our fate, a basilisk that turns us to stone as we gaze at it. For a long time, that monster was nuclear war, but today’s focus on worst-case scenarios arising from climate change is not as well suited to storytelling. Lynskey quotes the environmentalist Bill McKibben’s complaint that “global warming has still to produce an Orwell or a Huxley, a Verne or a Wells … or in film any equivalent of On the Beach or Doctor Strangelove.”
Climate change is hard to dramatize for the same reason that it is hard to solve: It happens slowly and in the background, until it doesn’t. Compared with that TV image of Russian missiles suspended overhead, our current fears for the future are as intangible and omnipresent as the weather. Confronted with melting glaciers and vanishing species, our promises to use paper straws or shut off the faucet while we brush our teeth feel less like solutions than superstitious gestures.
In a curious way, reading Everything Must Go can serve as therapy for this kind of fatalism. “The unrealized fears of the past can be a comfort,” Lynskey writes, “because the conviction that one is living in the worst of times is evergreen.” There is a difference, of course, between living in fear of the Last Judgment and living in fear of nuclear war or global warming. The former is a matter of faith; the latter are empirical realities. But when impending catastrophes are real, it is all the more important that we not frighten ourselves into seeing them as inevitable. As Edgar points out in King Lear, “The worst is not / So long as we can say, ‘This is the worst.’ ”
I could have been a tech entrepreneur, but my parents let me go to sleepovers. I could have been a billionaire, but I used to watch Saturday-morning cartoons. I could have been Vivek Ramaswamy, if not for the ways I’ve been corrupted by the mediocrity of American culture. I’m sad when I contemplate my lazy, pathetic, non-Ramaswamy life.
These ruminations were triggered by a statement that Ramaswamy, the noted cultural critic, made on X on Thursday. He was explaining why tech companies prefer to hire foreign-born and first-generation engineers instead of native-born American ones: It has to do with the utter mediocrity of American culture.
“A culture that celebrates the prom queen over the math Olympiad champ, or the jock over the Valedictorian, will not produce the best engineers,” he observed. Then he laid out his vision of how America needs to change: “More movies like Whiplash, fewer reruns of ‘Friends.’ More math tutoring, fewer sleepovers. More weekend science competitions, fewer Saturday morning cartoons. More books, less TV. More creating, less ‘chillin.’ More extracurriculars, less ‘hanging out at the mall.’”
In other words, Ramaswamy has decided to use the reelection of Donald Trump as an occasion to tiger-mom the hell out of us. No, you may not finish studying before midnight! Put that violin back under your chin this instant! No, a score of 1540 on your SATs is not good enough!
That sound you hear is immigrant parents all across America cheering and applauding.
Maybe Ramaswamy’s missive hit me so hard because I grew up in that kind of household. My grandfather, who went to the tuition-free City College of New York and made it in America as a lawyer, imbued me with that hustling-immigrant mindset. We may be outsiders, he told me, but we’re going to grind, we’re going to work, we’re going to climb that greasy pole.
And yet it never happened for me. I have never written a line of code. Unlike Ramaswamy, I have never founded an unprofitable biotech firm. What can I say? I got sucked into the whole sleepover lifestyle—the pillow fights, the long conversations about guitar solos with my fellow ninth graders. I thought those Saturday-morning Bugs Bunny cartoons were harmless, but soon I was into the hard stuff: Road Runner, Scooby-Doo, and worse, far worse.
As the days have gone by, though, I have had some further thoughts about Ramaswamy’s little sermon. It occurred to me that he may not be quite right about everything. For example, he describes a nation awash in lazy mediocrity, yet America has the strongest economy in the world. American workers are among the most productive, and over the past few years American productivity has been surging. In the past decade, American workers have steadily shifted from low-skill to higher-skill jobs. Apparently, our mediocrity shows up everywhere except in the economic data.
Then I began to wonder if our culture is really as hostile to nerdy kids as he implies. This is a culture that puts The Big Bang Theory on our TV screens and The Social Network in the movie theaters. Haven’t we spent many years lionizing Steve Jobs, Bill Gates, and Sam Altman? These days, millions of young men orient their lives around the Joe Rogan–Lex Fridman–Andrew Huberman social ideal—bright and curious tech bros who talk a lot about how much protein they ingest and look like they just swallowed a weight machine. When we think about the chief failing of American culture, is it really that we don’t spend enough time valorizing Stanford computer-science majors?
Then I had even deeper doubts about Ramaswamy’s argument. First, maybe he doesn’t understand what thinking is. He seems to believe that the only kind of thinking that matters is solving math problem sets. But one of the reasons we evolved these big brains of ours is so we can live in groups and navigate social landscapes. The hardest intellectual challenges usually involve understanding other people. If Ramaswamy wants a young person to do something cognitively demanding, he shouldn’t send her to a math tutor; he should send her to a sleepover with a bunch of other 12-year-old girls. That’s cognitively demanding.
Second, it could be that Ramaswamy doesn’t understand what makes America great. We are not going to out-compete China by rote learning and obsessive test taking. We don’t thrive only because of those first-generation strivers who keep their noses to the 70-hour-a-week grindstone and build a life for their family. We also thrive because of all the generations that come after, who live in a culture of pluralism and audacity. America is the place where people from all over the world get jammed together into one fractious mess. America was settled by people willing to venture into the unknown, willing to work in spaces where the rules hadn’t been written yet. As COVID revealed yet again, we are not adept at compliance and rule following, but we have a flair for dynamism, creativity, and innovation.
Third, I’m not sure Ramaswamy understands what propelled Trump to office. Trump was elected largely by non–college graduates whose highest abilities manifest in largely nonacademic ways—fixing an engine, raising crops, caring for the dying. Maybe Ramaswamy could celebrate the skills of people who didn’t join him at Harvard and Yale instead of dumping on them as a bunch of lard-butts. What part of the word populism does he not understand?
Most important, maybe Ramaswamy doesn’t understand how to motivate people. He seems to think you produce ambitious people by acting like a drill sergeant: Be tough. Impose rules. Offer carrots when they achieve and smash them with sticks when they fail.
But as Daniel Pink writes in his book Drive, these systems of extrinsic reward are effective motivational techniques only when the tasks in front of people are boring, routine, and technical. When creativity and initiative are required, the best way to motivate people is to help them find the thing they intrinsically love to do and then empower them to do that thing obsessively. Systems of extrinsic rewards don’t tend to arouse intrinsic motivations; they tend to smother them.
Don’t grind your kids until they become worker drones; help them become really good at leisure.
Today, when we hear the word leisure, we tend to think of relaxation. We live in an atmosphere of what the philosopher Josef Pieper called “total work.” We define leisure as time spent not working. It’s the pause in our lives that helps us recharge so we can get back to what really matters—work.
But for many centuries, people thought about leisure in a very different way: We spend part of our lives in idleness, they believed, doing nothing. We spend part of our lives on amusements, enjoying small pleasures that divert us. We spend part of our lives on work, doing the unpleasant things we need to do to make a living. But then we spend part of our time on leisure.
Leisure, properly conceived, is a state of mind. It’s doing the things we love doing. For you it could be gardening, or writing, or coding, or learning. It’s driven by enthusiasm, wonder, enjoyment, natural interest—all the intrinsic motivators. When we say something is a labor of love, that’s leisure. When we see somebody in a flow state, that’s leisure. The word school comes from schole, which is Greek for “leisure.” School was supposed to be home to leisure, the most intense kind of human activity, the passionate and enjoyable pursuit of understanding.
The kind of nose-to-the-grindstone culture Ramaswamy endorses eviscerates leisure. It takes a lot of free time to discover that thing we really love to do. We usually stumble across it when we’re just fooling around, curious, during those moments when nobody is telling us what to do. The tiger-mom mentality sees free time as a waste of time—as “hanging out at the mall.”
A life of leisure requires a lot of autonomy. People are most engaged when they are leading their own learning journey. You can’t build a life of leisure when your mental energies are consumed by a thousand assignments and hoops to jump through.
A life of leisure also requires mental play. Sure, we use a valuable form of cognition when we’re solving problem sets or filling out HR forms. But many moments of creative breakthrough involve a looser form of cognition—those moments when you’re just following your intuition and making strange associations, when your mind is free enough to see things in new ways. Ninety-nine percent of our thinking is unconscious; leisure is the dance between conscious and unconscious processes.
The story Ramaswamy tells is of hungry immigrants and lazy natives. That story resonates. The vitality of America has been fueled by waves of immigration, and there are some signs that America is becoming less mobile, less dynamic. But upon reflection, I think he’s mostly wrong about how to fix American culture. And he’s definitely not getting invited to my next sleepover.
Elon Musk spent Christmas Day online, in the thick of a particularly venomous culture war, one that would lead him to later make the un-Christmas-like demand of his critics to “take a big step back and FUCK YOURSELF in the face.”
Donald Trump had ignited this war by appointing the venture capitalist Sriram Krishnan to be his senior AI-policy adviser. Encouraged by the MAGA acolyte and expert troll Laura Loomer, parts of the far-right internet melted down, arguing that Krishnan’s appointment symbolized a betrayal of the principles of the “America First” movement.
Krishnan is an Indian immigrant and a U.S. citizen who, by virtue of his heritage, became a totem for the MAGA right to argue about H-1B visas, which allow certain skilled immigrants to work in the United States. (Many tech companies rely on this labor.) In response to Krishnan’s appointment, some right-wing posters used racist memes to smear Indians, who have made up nearly three-quarters of H-1B recipients in recent years. Loomer called such workers “third world invaders” and invoked the “Great Replacement” theory, which claims that America’s white population is being purposefully replaced by nonwhite people from other countries.
Although Musk has seemingly embraced white supremacy on the platform he owns, X, he apparently could not stand for an attack on a government program that has helped make him money. He is himself an immigrant from South Africa who has said that he worked in the U.S. under an H-1B visa before becoming a citizen. Musk also employs such workers at his companies. He posted on X in support of the H-1B program, arguing that it brings elite talent to America. This perspective is not remotely controversial for the Silicon Valley set, but the reactionary and nationalist wings of the Republican Party got very upset with Musk, very quickly. “The American people don’t view America as a sports team or a company,” the provocateur Jack Posobiec wrote in response to one of Musk’s tweets on Thursday. “They view it as their home.” Later, Musk warned his critics that he will “go to war on this issue the likes of which you cannot possibly comprehend.” By the weekend, Steve Bannon, Trump’s former adviser, had called H-1Bs a “scam” and said that Musk’s defense of highly skilled immigrants is showing his “true colors.”
The tech right and nationalist right are separate (but overlapping) factions that operated in tandem to help get Trump reelected. Now they are at odds. For possibly the first time since Trump’s victory, the racial animus and nativism that galvanized the nationalist right cannot immediately be reconciled with the tech right’s desire to effectively conquer the world (and cosmos, in Musk’s case) using any possible advantage. After winning the election together, one side was going to have to lose.
It should be said that opposing H-1Bs is not an inherently MAGA position. The program has well-documented flaws, and has received bipartisan criticism. For instance, Senator Bernie Sanders, an independent, has previously argued that highly skilled immigrant labor is a potential weapon that business owners can use to lower wages. Similarly, supporting H-1Bs says only so much about someone’s politics. Although Musk casts his defense of highly skilled immigrants as racially inclusive, he has repeatedly flirted with racial prejudice on X and has vocally supported a German far-right party with ties to neo-Nazis.
In any case, the coalition of the tech right and the nationalist right was bound to be tested. The two are similar in certain ways: They share a reactionary, anti-“woke” commitment to reversing a perceived pattern of American weakness brought about by DEI initiatives, and both have exhibited authoritarian tendencies. But there were always fissures. The tech right’s desire for free markets is in fundamental tension with a rising conservative skepticism of unchecked capitalism; Tucker Carlson, for example, has spoken critically of “market capitalism,” arguing that “any economic system that weakens and destroys families isn’t worth having.” Much of the nationalist far right sees itself as a movement that values the flourishing, vitality, and self-determination of human beings (as long as they are of the correct race or nationality). Meanwhile, much of the tech right is concerned with advancing technology above all else—the most extreme wings don’t even mind if that ultimately results in human extinction.
For a little while, it almost seemed like the right could dodge these conflicts. Vice President–Elect J. D. Vance is the physical embodiment of a compromise between the far-right, aggressively reactionary, nationalist wing of the Republican Party and its tech-evangelist faction. He worked in a venture-capital firm co-founded by Peter Thiel, the right-wing tech billionaire; has criticized unbridled free markets; and has been cheered on by far-right influencers with big followings. He has spoken out against H-1B visas even as he invested in companies that applied to use them. But part of Vance’s job is to unite his party against a common enemy; that role became less urgent after Election Day.
This skirmish is a preview of how tension between the tech right and the nationalist right may play out once Trump takes office. The nationalists will likely get most of what they want—Trump has already promised mass deportations, to their delight—but when they butt heads with Silicon Valley, Trump will likely defer to his wealthiest friends. That’s how things went during his first term. Despite Trump’s populist promise in 2016 that he would create an economy that benefited common people at the expense of large corporations and the rich (a position popular with the more nationalist wing of the right), he largely did the opposite, supporting and signing into law tax cuts for corporations and the wealthy. This happened even as much of the tech world rebuked Trump over his “Muslim ban” and family-separation policy, which employees of tech giants prodded their leaders to oppose.
This time around, with Musk and the tech entrepreneur Vivek Ramaswamy running the newly created Department of Government Efficiency, the billionaire venture capitalist Marc Andreessen helping staff the department, and Krishnan set to advise on AI policy, the tech right is being integrated into the incoming administration. Trump’s other appointments also suggest that his administration will be friendly to the rich and powerful. His advisers and Cabinet appointments so far consist of ultra-rich confidants from finance and real estate—industries that prioritize markets above other conservative principles. His proposed Cabinet includes few who would be considered dedicated members of the nationalist right. No surprise, then, that Trump seemed to side with Musk, telling the New York Post on Saturday, “I’ve always liked the visas, I have always been in favor of the visas. That’s why we have them.” Perhaps even more so than last time, the plutocrats are in control.