Trump’s a Paper Tiger, and Everyone Knows It

The United States is, yet again, facing an unnecessary crisis of its own making. On October 6, Donald Trump decided, during a phone call with Turkish President Recep Tayyip Erdoğan, to withdraw U.S. military forces from northern Syria. And not for the first time. Erdoğan persuaded Trump to withdraw U.S. forces during a phone call back in mid-December 2018. In response, then–Secretary of Defense James Mattis resigned in protest. Under bipartisan pressure, Trump agreed to keep a reduced number of troops in the region.

But this time was different. Instead of reversing course in the face of bipartisan criticism, Trump doubled down. The same day that he publicly announced his decision, Trump tweeted that an American troop presence was unnecessary to protect the Syrian Democratic Forces (SDF) or prevent Islamic State fighters from escaping confinement, because “if Turkey does anything that I, in my great and unmatched wisdom, consider to be off limits, I will totally destroy and obliterate the Economy of Turkey (I’ve done before!).”

Now the fears of his critics are coming to fruition. The instability created by the Turkish incursion into Syria—involving both the Turkish military and its proxies—has allowed ISIS fighters to escape. Images of executions and other alleged atrocities are circulating on social media. The SDF, desperate for partners capable of helping it fend off the Turkish advance, has formed an alliance with Russia and the Syrian government. On Thursday, the U.S. helped block a United Nations Security Council statement condemning the Turkish operation. On Friday, Turkish artillery landed near American forces. On Saturday, Trump ordered the total evacuation of U.S. troops from northern Syria. NBC News reported that “the decision to move troops out was largely because Turkish military and proxy force” had cut off American supply lines, and the likelihood of U.S. troops being drawn into the conflict was high.

[Kori Schake: Trump is complicit in Erdoğan’s violence]

There is no silver lining here. Trump’s decision to withdraw troops allowed a Turkish incursion into territory held by an American ally. For this, Trump received no foreign-policy concessions. The resulting invasion has set back the war against ISIS, and led the SDF—a force that was trained and armed by the United States—to ally with Russia and its Syrian client. In principle, Trump’s sudden shift in policy might have formed the basis for an effort to repair frayed relations with Ankara—which, among other things, had been kicked out of the F-35 program for buying a Russian air-defense system. But Trump undermined this possibility by slapping sanctions on Turkey. In effect, the United States is now sanctioning a NATO member-state in support of efforts by Russia and the Syrian government to consolidate control in a region formerly protected by the United States. Trump’s utterly bizarre October 9 letter to Erdoğan, in which he pleaded with the Turkish leader not to be “a tough guy” or a “fool,” surely did nothing to reestablish good feeling, let alone respect.

The story of how we got to this point doesn’t start with Trump. He inherited a situation in northeast Syria that could not remain in equilibrium forever, especially given his desire to bring U.S. troops home. Thus, one might argue that he ripped the Band-Aid off, forced an inevitable accommodation between the SDF and Syrian President Bashar al-Assad, and is now bringing the troops home.

But foreign policy is all about managing difficult trade-offs. For example, if policy makers take steps necessary to demonstrate the credibility of their commitment to defend an alliance partner, they might embolden that ally to draw them into a conflict that they’d rather not fight. If policy makers don’t take those steps, the ally might worry about abandonment and seek other security partners. As a great power, the United States has to constantly work to manage a wide variety of cross-cutting pressures. For decades, Washington has balanced arms sales to Saudi Arabia with Israel’s insistence that it retain its “qualitative military edge.”

Not only are trade-offs inevitable, but events rarely unfold as planned. Foreign policy takes place under conditions of uncertainty and risk. In short, foreign policy will inevitably produce lemons, so policy makers need to be prepared to make lemonade. What Trump has done, however, is jam the lemons into his mouth and choke on them. He’s managed to alienate some allies, get people killed, cause foreign leaders and policy makers to further doubt his reliability, give enemies of the United States a chance at regrouping, and further enhance Russian prestige as a power broker in the Middle East.

Trump’s botched Syria policy only highlights something that’s been apparent since the 2016 campaign: He is unfit to run American foreign policy. True, when it comes to unforced errors, Trump is unlikely to match the George W. Bush administration’s decision to invade Iraq.

[Conor Friedersdorf: Trump’s Middle East policy is a fraud]

But as mounting evidence surrounding the Ukraine scandal demonstrates, Trump’s primary interest lies in his own advancement. His information environment is dominated by Fox News and fever-swamp, right-wing conspiracy theories. He’s also all short-term tactics and no long-term strategy. Trump seems to be simply incapable of the kind of strategic thought required for foreign policy. In a devastating Atlantic article, Mark Bowden interviewed numerous U.S. generals who attest that Trump refuses to work through how other countries might respond to his actions. He just wants to make “gut” decisions, which means that he neither anticipates nor plans for contingencies. This makes him fundamentally reactive.

Trump’s unusually singular focus on his personal interests, his distorted information environment, and his allergy to strategic thought set him apart. These traits also mean that his decision-making processes cannot be understood with the normal tools of foreign-policy analysis. Those pundits, analysts, and headline writers who insist on attributing some underlying logic to Trump’s decisions unwittingly normalize and sanitize his presidency. They make his actions more difficult, not easier, to understand.

The president’s impulsiveness does not mean, however, that his string of failures defies prediction. One of the major ways that states pursue their interests is to deter unwanted behavior—to make it clear to other international actors that they will suffer costs if they engage in that behavior. When it works well, deterrence is not particularly exciting, because the unwanted behavior never happens.

Trump cannot deter bad actors, because he’s predictably a paper tiger. For example, Trump ratcheted up tensions with North Korea to the point where many people worried that war was coming. But then he backed down and was soon indulging, both personally and politically, North Korea’s leader, Kim Jong Un—while asking for concessions that North Korea was clearly unprepared to make. Now North Korea presses ahead with nuclear capabilities, while Trump insists that he’s got a great relationship with Kim, who likely views him as a doormat.

[David A. Graham: Trump sides with North Korea against the CIA]

Similarly, when Trump withdrew from the Iran Deal, he described it as a horrible agreement that allowed Iran to continue all kinds of bad behavior. But Trump’s policies in the Middle East have not put much of a dent in Iran’s regional ambitions; Iran is now closer to nuclear weapons than before Trump took office. The only long-term plan seems to have been to apply pressure and hope something breaks. The administration’s occasional demand that Iran comply with the same agreement that Trump abandoned further suggests a profound lack of planning.

Because Trump’s approach to foreign policy is too often reactive, he finds himself turning to coercive diplomacy to clean up his messes—to try to put the proverbial horse back in the barn. This is exactly what’s happening with Turkey. Instead of trying to deter Erdoğan from invading Syria, Trump resorted to punitive sanctions after the incursion had already begun, after people were dying, after U.S. troops were trying to evacuate, and after the Russian-Syrian-SDF agreement. (Trump’s pursuit of sanctions is almost certainly a tactical response to domestic political pressure, rather than a developed contingency plan.) The imposition of sanctions carries greater political risks for everyone concerned than if Trump had read his talking points, pushed back against Erdoğan’s demands, and left American troops in place. Indeed, some reports suggest that Erdoğan was surprised by Trump’s capitulation—that Erdoğan believed he was opening a negotiation rather than issuing an ultimatum.

Trump may wind up reversing some of the damage of his Syrian missteps. But even if Trump forces Erdoğan to capitulate, he’s unlikely to walk away with any net gains. Trump certainly won’t reap any reputational advantage. Trump’s Syria policy, like pretty much all of Trump’s international endeavors, will likely leave the United States worse off than if he had simply maintained the status quo.

Someone will come out in better shape, however: America’s rivals, Russia in particular. Jeb Bush said that Trump would be a “chaos” president, and that’s one reason Moscow wanted him to win. Yes, Moscow came to hope that Trump would deliver on a new grand bargain favorable to Russian goals and interests, such as a return to spheres of influence and an abandonment of American liberal ordering. But his victory alone, the Kremlin wagered, would throw wrenches into America’s global advantage in alliances and partnerships. That bet has paid off.


What Elizabeth Warren Learned From Her Heroine

In a speech in New York’s Washington Square Park last month, Elizabeth Warren plucked from history a woman who knew injustice when she saw it. Labor secretary to Franklin D. Roosevelt for all 12 years of his presidency, Frances Perkins was the key force behind the most creative and enduring parts of the New Deal—from Social Security to the 40-hour work week. But Perkins’s career as a workers’ advocate began much earlier. On March 25, 1911, she was drinking tea at a friend’s when a fire broke out behind locked doors at the nearby Triangle Shirtwaist Factory. Perkins bolted to the scene, only to witness scores of women workers—Italian and Jewish immigrants, some as young as 14—jumping nine stories to their deaths. In less than 20 minutes, 146 people died inside the factory or on the sidewalk. Perkins later described that Saturday as the day the New Deal began. In her speech last month, Warren drew inspiration from Perkins, asking, “So, what did one woman—one very persistent woman—backed up by millions of people across this country get done?  Social Security. Unemployment insurance. Abolition of child labor. Minimum wage. The right to join a union. Even the very existence of the weekend.  Big, structural change. One woman, and millions of people to back her up.”

In recent polls of Democratic primary voters, Warren now finishes at or near the top—making the historical underpinnings of her campaign all the more noteworthy. That Americans in the early 20th century needed greater economic security and workplace protections might seem obvious in hindsight. Millions of people were laboring under conditions that were in no way a secret. But someone had to frame those conditions as problems that public policy could fix—and recognize that government action against poverty and insecurity offered a political opportunity for Roosevelt.

Perkins’s vision of America is now in retreat. There are fewer inspectors at the Occupational Safety and Health Administration than ever before, and investigations of workplace fatalities have soared to their highest levels in a decade. Taxi drivers have been driven to financial ruin and suicide, health-care aides work 24-hour shifts, schoolteachers in some states donate plasma and consign their clothes to survive, and undocumented workers live under threat of deportation. From July until September, a group of Kentucky coal miners camped on railroad tracks, blocking passage of $1 million worth of coal after their paychecks bounced. “No Pay We Stay,” declared their sign, made from a cardboard box. Our times, as Warren implied in her speech, are not so different from Perkins’s own.

[Read more: The Democrats’ new war on Warren]

In her presidential campaign, Warren is positioning herself not just as an alternative to Donald Trump, but as a problem-spotter who can change the way Americans think about the reality before their own eyes. Before her election to the Senate from Massachusetts, she led at least two significant shifts in progressive thinking: In the research she undertook as a law professor, she reframed bankruptcy as the result of bad luck amid unfavorable economic conditions, rather than of debtors’ moral failings. And a basic premise of the Consumer Financial Protection Bureau, whose creation Warren inspired, is that the details of certain mortgage and credit-card loans aren’t just fine print—they conceal sneaky efforts to wrangle money out of consumers. In Wednesday’s debate, former Vice President Joe Biden tried to claim some credit for the bureau, telling Warren, “I went on the floor and got you votes.” Warren instead attributed her legislative victory to an outpouring of popular support for the idea, declaring, “I am deeply grateful to every single person who fought for it and who helped pass it into law.”

Some political handicappers have argued that Warren would be the farthest-left candidate ever elected president. But, like Perkins, Warren is betting that, if Americans think hard about injustices that are hidden in plain sight, they’ll demand that Washington take action.

Warren and her team have encouraged the comparison with Perkins. On Twitter, Warren’s chief campaign strategist, Joe Rospars, refers to himself as “a Frances Perkins Democrat.” Warren’s slogan—“dream big, fight hard”—echoes Perkins’s admonition to Roosevelt when he wavered in his commitment to a public-works program. “He wanted to think it over,” Perkins wrote in her memoirs of her years with Roosevelt. “My heart sank…. I felt we must fight hard now.” Woodworkers built the podium from which Warren spoke with boards taken from Perkins’s ancestral home in Maine, on the Damariscotta River, to which she returned all her life. Even Warren’s refrain “I have a plan for that” seems inspired by Perkins. Upon his election as president, Roosevelt was prepared to name her secretary of labor—the first-ever female Cabinet member. She arrived at his Manhattan townhouse with a list of demands—causes she had scribbled on scraps and kept in a drawer, adding up to a list of “practical possibilities.” In other words, Frances Perkins had a plan. She extracted Roosevelt’s commitment to follow it in exchange for accepting the Cabinet post.

Perkins never let the president forget her plan, using skills she had honed in the tough world of Tammany Hall and Albany.

Fifty-four years after her death, the broader public is largely unaware of Perkins’s influence, but a number of historians, including Adam Cohen and Kirstin Downey, have chronicled both Perkins’s passion and her influence over Roosevelt. (In recounting key events in Perkins’s life, I have drawn heavily on Cohen’s work and on Perkins’s own writings.)

After the factory fire, Perkins, then about 31, became the executive secretary of New York City’s fledgling Committee on Safety. Her recommendations: fire escapes, exit signs, the nightly emptying of office wastebaskets, sprinkler systems, mandatory fire drills, enclosed stairways, and mandatory unlocked exits, so that workers could escape within three minutes in an emergency. Those recommendations and her later ones led to worker-protection laws in New York and across the country. She forced politicians to see the horrors of factory life. She took then–New York Governor Al Smith to see thousands of women “coming off the ten-hour night shift on the rope walks in Auburn,” she wrote years later, and she sent then–New York State Senator Robert Wagner crawling “through the tiny hole in the wall that gave egress to a steep iron ladder covered with ice and ending 12 feet from the ground, which was euphemistically labeled ‘Fire Escape.’”

[Read more: How the first woman in the U.S. cabinet found her vocation]

When Roosevelt succeeded Al Smith as governor of New York, Perkins’s reports satisfied his wish to hear the “human part of the story,” she wrote. “He wanted me to tell them…how we discovered that men were getting silicosis while polishing the inside of glass milk tanks; how the girls painting luminous dials on clock faces and pointing the fine hair brushes with their lips had contracted radium poisoning; how the old carpenter who lost his arm and settled his compensation claim by agreement with his employer without a hearing had been cheated out of about $5,000, which we discovered in a spot check investigation,” Perkins wrote.

After the 1929 crash, Perkins and Roosevelt together took the country’s boldest steps toward unemployment relief: The governor put Perkins in charge of a New York commission that recommended setting up the first unemployment office. From there they drew a road map that, after Roosevelt’s election as president in 1932, they would use in the White House to address the Depression and enact the New Deal.

Sworn in as secretary of labor, she found a department rife with cockroaches and corruption. Under Herbert Hoover, the Department of Labor spent some 90 percent of its resources on a Special Immigration Unit, created under the exclusionary immigration acts of 1921 and 1924, in which two brothers oversaw a team of agents sent around the country to find and deport illegal immigrants, with a particular emphasis on alleged communists. Often, the agents extorted bribes from the immigrants along the way. Perkins declined to renew the unit’s funding. Over time, she began to radically reorient the department toward the needs of working Americans.

She did not shy away from intramural battles. During Roosevelt’s first 100 days, Perkins prevailed over his budget director and fellow patrician Lewis Williams Douglas, who viewed a balanced budget with an almost religious zeal. “I soon realized that good as Douglas might be in handling the budget, he was expressing, in more agreeable and persuasive terms, the very philosophy against which the country had reacted so violently under Hoover,” Perkins wrote in her memoirs of her years with Roosevelt. When Perkins learned Douglas had thwarted her effort to include billions of dollars for public works in a relief bill, she demanded a meeting with Roosevelt.

“You’ve got to decide now, Mr. President,” Perkins insisted, according to Cohen’s account of the meeting. “Here on this beautiful sunshiny afternoon we have to decide if we shall put it in or leave it out.” Roosevelt, always battling conflicting sympathies, agreed, but Perkins took no risks, phoning then–U.S. Senator Robert Wagner, the author of the relief bill and the man Perkins had sent crawling down an icy fire escape years earlier, to serve as a witness. Roosevelt took the phone and said, “Frances says that she thinks it’s best and I think it’s the right thing, don’t you, Bob?” Douglas resigned not long after, and Perkins stayed 12 years.

Perkins ably used what leverage she had, and criticism did not deter her. As the New Deal took shape, the Roosevelt administration had significant leverage over corporations, which were seeking relaxation of federal antitrust laws to stall free-falling prices. Perkins thought to phone William Green, president of the American Federation of Labor, and ask which provisions he wanted included in the relief bill. Months earlier, when Perkins’s appointment was announced, Green had sworn he could “never become reconciled” to the selection of a woman and a non-union official, in that order. Now he took her call, and asked to include in the bill the right to organize non-union workers. That provision would eventually be codified and become known as the “Magna Carta” of the labor movement. Ultimately, the only legislation Frances Perkins wanted and didn’t see come to fruition was universal health care. But everything else she listed at her meeting in FDR’s townhouse, she got.

[Read more: FDR and Herbert Hoover’s fight over the New Deal]

The New Deal was an imperfect vehicle, which helped some more than others. As Landon Storrs, a historian at the University of Iowa, told me by email, social insurance benefits and minimum-wage protections were tied to occupation, at a time when most women and racial minorities worked in occupations excluded from coverage. Many New Deal programs were administered through the states and were therefore subject to local prejudices. And the National Labor Relations Act of 1935 did not immediately benefit minority and female workers as much as others because, like employers, many labor unions in the 1930s discriminated against those groups. But the flaws of the New Deal need not discredit the blueprint of its successes, and its proven political salability.

The New Deal was at best a hope when Roosevelt took office, and nothing about it was inevitable. Only through the vision and dogged advocacy of people like Perkins did the federal government take on greater responsibility for Americans’ welfare in the workplace and in the broader economy. Warren’s veneration of Perkins offers some insight into how the Massachusetts senator sees the current landscape: The United States is a country where workers’ needs are going unaddressed, but also a country that, with the right people in office, is ripe for transformation.


Why Is the C-Section Rate So High?

Every day, roughly 10,000 babies are born in the United States, and about a third of them are born via Cesarean section. This share has gone up significantly over time, and many in the scientific community believe that it’s higher than is necessary. Increases in C-section rates have not translated to healthier moms or babies. Although it’s impossible to know the “necessary” rate with real precision, the World Health Organization says it is closer to 10 or 15 percent.

Why do doctors perform so many unnecessary (or “non-indicated,” in the medical vernacular) C-sections?

This is a question both patients and doctors worry over, with an answer that is by no means straightforward. But here’s a data point to consider: While most people agree that what matters most is that mother and baby come through the process safely, doctors are generally paid quite a bit more for a C-section than for a vaginal birth. This financial nudge might just have something to do with the rate of non-indicated C-sections in the U.S.

[Emily Oster: New evidence on pot during pregnancy]

Cesarean sections are lifesaving if you need them. In some situations, a C-section is not only preferable but mandatory—situations involving conditions like placenta previa, in which going into labor would precipitate life-threatening hemorrhaging, or cord prolapse, which can cause the death of a baby if a C-section is not performed in a matter of minutes. But in most instances, the surgery is not the preferred mode of delivery. Evidence and expert consensus are consistent on the message that C-sections, on average, come with greater risks than vaginal births: more blood loss, more chance of infection or blood clots, more complications in future pregnancies, a higher risk of death. Even if serious complications don’t occur, C-section recovery tends to be longer and harder.

And the fact is that a lot of C-sections are performed in clinical gray areas, where the necessity is not clear—for reasons like “abnormal labor progress,” or out of concern for the baby’s safety based on the fetal heart tracing (a monitoring tool notoriously poor at identifying babies who truly are at risk if they continue in labor). And some C-sections are performed electively, at maternal request.

This complexity regarding C-section decision making is not, in and of itself, a problem. If a doctor and a patient have a nuanced, fully informed discussion about the right mode of delivery, given the risks and benefits, they may reasonably arrive at the conclusion that a C-section is the right approach, even if it is not strictly required. The trouble is, that ideal of a doctor and a patient making an objective and informed decision together is hard to come by.  

Doctors and patients are not the only people involved in the decision, and they certainly don’t make it in a vacuum. Most births in the U.S. occur in hospitals. By their very nature, hospitals introduce pressures that may alter birth choices. Labor rooms are scarce, and patients (and their doctors) may be subject to implicit or explicit pressure to avoid “taking too long.”  

Hospitals also tend to be risk-averse; many have developed systems that are intended to improve patient safety, but may result in pressure to perform C-sections rather than letting labor continue. If there is any doubt that the culture of individual hospitals can have an effect on a patient’s chance of a C-section, one need only observe that hospitals’ C-section rates vary from 7 percent to 70 percent. Differences in patient complexity cannot account for that spread.

But even setting aside risk aversion, the decision to perform a C-section can be clouded by considerations other than medical necessity. For example, physician-patients are about 10 percent less likely to have a C-section than comparable nonphysician patients. Why this is the case is not clear, but it suggests that doctors may treat some patients differently from others. Studies have also found that C-sections—especially first-time C-sections—spike around morning, lunchtime, and the end of the day, which could be (could be) interpreted as induced demand by doctors who are responding to scheduling pressures: getting to office hours, eating lunch, going home.

Yet another possible reason for the country’s high C-section rate, as we mentioned, is that physicians are routinely paid more for a C-section than they are for a vaginal delivery—on average, about 15 percent more. Why is this the case? The prevailing logic is that a C-section is a major surgery, so the physicians’ payment should reflect the greater potential for complexity. But this logic rests on a crude generalization. Vaginal birth can be very straightforward, but it can also be very complicated and time-consuming. The same is true for a C-section. Despite this, payments are fixed—they reflect the mode of delivery, not the difficulty.

You could imagine an alternative system that just paid for time, per hour of labor—which would acknowledge the fact that labor management tends to take longer than C-sections; after all, a C-section performed during labor by definition cuts short that labor. Such a system would also account for the costs physicians accrue by spending more time in the hospital: less sleep, less time with family, less time to see patients in the office. But such a system might then wrongly incentivize slow labors, or avoiding C-sections when they’re needed, so a whole different set of problems would emerge.

At any rate, the fact is that the existing system creates a financial incentive to perform a C-section—or a disincentive to manage labor—that may make the difference in the clinical gray areas. When it’s late at night and a labor is long, or progress seems uncertain, or a fetal heart tracing is anything but perfect, and a physician is contemplating the time costs of continuing labor, the present system makes choosing a C-section easy.  

Indeed, studies have shown that the more physicians are paid for C-sections relative to vaginal births, the higher the C-section rates become. And when these differentials are reduced, C-section rates decrease.

Now, just in case any reader is jumping to the conclusion that the only reason she will have a C-section is that her doctor wants more money—or that her doctor wants to leave the hospital, or that she herself is not a physician—that is not what we are arguing at all. If that were true, C-section rates would be even higher than they are now. What we are arguing is that medical care is complex and labor management is subject to myriad pressures. While we may not be able to alleviate all the pressures at play, we may be able to reform one of them.

So let’s change the monetary incentives. Let’s not subtly encourage physicians to perform major surgery on their patients.

One simple approach is to lower the C-section payment, raise the vaginal-birth payment, and meet in the middle. But it’s possible that if rates for C-sections go down, doctors will avoid surgery even in situations where they should perform one.  

So we propose an alternative: Raise the payment rate for vaginal births to the C-section rate, and leave the C-section rate where it is.

Policy makers will object that this method is expensive. Medical costs in the U.S. are already high and rising. Simply paying more for something risks making that problem worse.

What this argument misses, though, is that the public will get much of its money back—possibly quite a lot of it.

Using Health Analytics data, which covers both Medicaid births and commercial-insurance births, we found that if insurers raised the vaginal-birth reimbursement rate to the level of the C-section rate and changed nothing else, costs would rise by about 1.7 percent. Based on an estimated per-birth cost of about $14,000, aggregated over 4 million births, that comes to roughly $965 million a year.  

[Read: How one hospital reduced unnecessary C-sections]

But something else would change: namely, the C-section rate would go down. Using data from one study on blended-payment approaches and one on differences across patients, we expect that the rate would go down by about three percentage points in the short run. And that would lead to a reduction in costs beyond doctors’ fees, such as those related to longer hospital stays, operating-room use, and more. The overall cost increase to the system would then actually hover around 0.8 percent, or $480 million a year.
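For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python, using only the round numbers quoted above (about 4 million U.S. births a year at an estimated $14,000 per birth). It is an illustration, not the authors’ actual Health Analytics model, and the small gaps between its outputs and the quoted $965 million and $480 million figures reflect rounding in the published percentages.

```python
# Back-of-envelope check of the cost estimates above, using the article's
# round numbers (illustrative only; not the authors' actual model).

BIRTHS_PER_YEAR = 4_000_000   # approximate annual U.S. births
COST_PER_BIRTH = 14_000       # estimated average cost per birth, in dollars

baseline = BIRTHS_PER_YEAR * COST_PER_BIRTH   # ~$56 billion a year

# Raising vaginal-birth fees to C-section levels, with no change in
# behavior, raises total spending by about 1.7 percent.
gross_increase = 0.017 * baseline
print(f"Gross increase: ${gross_increase / 1e9:.2f} billion per year")  # ~0.95

# If the C-section rate then falls by ~3 percentage points, avoided
# surgical costs shrink the net increase to about 0.8 percent.
net_increase = 0.008 * baseline
print(f"Net increase:   ${net_increase / 1e9:.2f} billion per year")    # ~0.45
```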

And even this is probably an overestimate. In the long run, the C-section rate could go down by more than three percentage points. One reason is the adage “Once a C-section, always a C-section.” Women who have C-sections will most likely have a C-section with any subsequent pregnancy, although some of them may choose to try for a vaginal birth, or VBAC. (Further complicating the matter, some insurers reimburse physicians even less for a VBAC than for either a C-section or a routine vaginal delivery, which further encourages physicians to advise their patients to have a C-section.) If this process gets interrupted—if fewer women have C-sections to begin with—then we may see the share of vaginal births go up dramatically over time, and doctors’ fees go down.

Depending on how you do the numbers, it’s possible this reform could save money in the long term. Regardless, if vaginal birth is a better outcome for moms in most situations—and we think it is—then we should be willing to accept some increase in costs for safer maternity care.


The Experts Strike Back

Donald Trump came into office without much experience in diplomacy—literal or figurative—but it doesn’t take a career Foreign Service officer to realize that if you spend enough time saying someone is your enemy, that person might begin to feel the same way about you.

From the start of his administration, the president demonized government employees, especially in foreign policy and intelligence. He attacked career officers as part of the “deep state,” discarded their advice, and appointed Cabinet secretaries who alienated them. Now, as an impeachment inquiry rolls forward, Trump is reaping the whirlwind he sowed. The White House’s attempt at full obstruction of the inquiry has cracked because, unlike Trump’s loyalists, career officials and experts have been willing to defy invocations of executive privilege and testify to Congress.

Perhaps no case better exemplifies the way that neglecting and vilifying public servants has backfired than that of Michael McKinley. A career Foreign Service officer, McKinley had served as an ambassador to four countries under Presidents Trump, Barack Obama, and George W. Bush. In 2018, Secretary of State Mike Pompeo plucked McKinley from Brasilia, where he was leading the U.S. embassy, to become a senior adviser, especially charged with serving as a conduit between Pompeo and career officers.

[Read: Revenge of the intelligence nerds]

Last week, however, McKinley resigned in protest of the department’s failure to stand up for Marie Yovanovitch, the former ambassador to Ukraine. Yovanovitch was apparently sacked after pressure from Rudy Giuliani, who spread unsubstantiated claims that she was disloyal to Trump. Yesterday, McKinley testified to House impeachment investigators, complaining about the politicization of the State Department, including the sidelining of career staff.

Yovanovitch herself also testified last week, delivering a scorching appraisal of the Trump administration’s actions in Ukraine and its approach to foreign policy more broadly. She, too, was intimately involved in some of the central events, though she had been recalled by the time of Trump’s July 25 call with President Volodymyr Zelensky of Ukraine. (Though removed from her post, she remains a State Department employee; the White House tried to block her testimony, but she honored the House subpoena.)

This week also saw testimony from Fiona Hill, who served as the top Russia expert on the National Security Council until August. Hill reportedly told House members about discussions involving Ukraine, and about how alarmed White House staff were about the way the Trump administration was handling Ukraine—especially with Giuliani appearing to run a shadow foreign policy there. “I am not part of whatever drug deal [Gordon] Sondland and [Mick] Mulvaney are cooking up,” then–National Security Adviser John Bolton told Hill, referring to the U.S. ambassador to the European Union and the acting White House chief of staff, according to The New York Times.

[David A. Graham: The conspiracy of silence is cracking]

On Tuesday, George Kent, a top State official handling Ukraine, told House members that he was cut out of policy discussions on that country after Mulvaney intervened. Tomorrow, a Defense Department official will talk. And the flood of testimony began when Kurt Volker, a career diplomat who was working as an envoy to Ukraine until he was forced out earlier this month, testified and handed over a damning series of messages among himself, Sondland, and William Taylor, who became the top U.S. diplomat in Ukraine after Yovanovitch’s recall. Taylor, too, has been called to testify.

Kent’s story seems emblematic: Despite his expertise on the subject and his long record of service, he alleges he was sidelined by the White House chief of staff, a political appointee and former congressman, in favor of people like Giuliani, the president’s personal lawyer; Sondland, a wealthy hotelier and political ambassador; and Energy Secretary Rick Perry, a Republican politician.

Throughout his administration, Trump has routinely discarded the advice of experts. Of course, the president has the prerogative to make his own decisions. But Trump has not only opted to disregard their advice; he has decided he doesn’t need to hear it at all, making policy moves based on little information—a tendency recently demonstrated by his precipitous withdrawal from Syria.

Trump has also hired bosses who make life at places like the State Department miserable. His first secretary of state, Rex Tillerson, was regarded as levelheaded on policy and not especially political, though those tendencies eventually got him fired by Trump. But Tillerson embarked on a quixotic overhaul of the department’s workforce that alienated many employees, and a steady stream of high-ranking career officials left. Pompeo, though more political, seemed concerned with improving morale, but his decision not to back Yovanovitch erased that work. (Adding insult to injury, she testified that Pompeo sent a deputy to talk to her rather than doing it himself.) The president, meanwhile, has routinely attacked government employees as being part of a deep state.

Disregarding experts is a bad way to make policy for many reasons, but as these cases demonstrate, it’s bad politics, too. My colleague Mike Giglio recently noted that after Trump spent years demonizing the intelligence services, it was an intelligence official, following legal channels to a T, who filed the whistle-blower complaint that sparked impeachment. And it’s long-demonized officials in foreign policy who are now fueling the fire.

[Read: There is no American ‘deep state’]

Trump and his minions will likely try to portray this testimony as evidence that he was right about the deep state all along. As I have previously written, the term is misleading and dangerous. It also doesn’t fit here. Each of these people is testifying to Congress, an equal branch of government, under official procedures. Most if not all of them are testifying under subpoena, a legally binding summons. The White House’s legal rationale for withholding testimony and documents under executive privilege is flimsy at best. An impeachment inquiry is not invalid simply because Trump doesn’t like it or feels that Democrats are mean, no matter how insistently he says otherwise. And the revelations they are providing are disturbing, generally consistent, and backed up by documents, including those released by the White House itself.

The testimony from career officers points to how far Trump has politicized government. This crisis revolves around his requests to a foreign government, Ukraine, to interfere in American elections by investigating his political rival Joe Biden. It’s not unusual for career government employees to disagree with administration policy, but usually they don’t speak out publicly, because doing so would undermine the mission and work of their institutions. (Retired Marine General and former Defense Secretary James Mattis’s silence since resigning in 2018 illustrates this approach.) But these officers have concluded that worse damage is being done to the government already. In other words, officials typically stay quiet about their political masters because of their own loyalty to the State Department. Now their loyalty to the State Department is driving them to speak out about their political masters.

Trump is famous for demanding loyalty from his subordinates, while showing little of his own toward them. With the president disregarding their work, the government’s foreign-policy professionals have had ample time to assess the hostile actor in the White House. They were ready for this moment.


Will Canada Redefine Conservatism Again?

In Toronto this spring, Andrew Scheer, the man seeking to replace Justin Trudeau as prime minister of Canada, made what is perhaps the most important speech of his career. While Scheer, the leader of the Conservative Party of Canada (CPC), is no Trudeau—he’s younger, dorkier, and less foppish—his speech, about immigration, sounded at times like something Trudeau would say. Scheer spoke of Canada as a generous, diverse country and denounced “intolerance, racism, and extremism of any kind.” If anybody disagreed, he added, “there’s the door.”

But if Scheer was aligning himself, in some ways, with his electoral rival, he was also setting himself apart. He quoted scripture, something politicians in Trudeau’s Liberal Party are less likely to do. He praised the entrepreneurial spirit that impels immigrants to leave their home. And he spoke darkly about “Mexican drug-cartel members” and “individuals flagged as threats to national security,” who exploit weaknesses in the immigration system at the expense of lawful applicants who wait their turn. The speech was shot through with conservative themes: free enterprise, law and order, self-reliance, and faith.

In this respect, Scheer sounded more like his CPC predecessor Stephen Harper, the prime minister of Canada from 2006 to 2015. When Harper became the leader of the CPC in 2004, the organization had only just come into existence, through a merger between an establishment Tory party and a populist upstart. Harper built a winning electoral coalition by linking rural western Canada to the ethnically diverse suburbs of Toronto and Vancouver, which were then considered Liberal strongholds. He framed conservative and populist talking points in ways that resonated with many minority voters.

Tories across the world took notice. British Prime Minister David Cameron personally sought Harper’s advice, a former Canadian government official told me, as did operatives close to German Chancellor Angela Merkel. On the American right, too, strategists were contemplating what a more magnanimous conservatism might look like. In 2013, the Republican National Committee ran a postmortem on Mitt Romney’s failed presidential bid, concluding that, to ensure their long-term viability, Republicans needed to become less hostile toward immigrants.

Much has changed since then. The rise of Donald Trump and the European far right has made political cultures more toxic. The CPC is under pressure both to distinguish itself from its far-right counterparts and to resist the lure of nativism, while the revelation that Trudeau wore black- and brown-face makeup several times prior to his entry into politics has undermined his status as an icon of liberal cosmopolitanism. Canada will hold a general election on Monday, and nobody is sure what will happen. Will the country again be at the vanguard of a multiethnic conservative revival? Or is it no longer possible to build a voter coalition that is at once right-leaning, populist, and diverse?


Jason Kenney had what he describes as a “eureka moment” in 1994, in the Vancouver studio of a Cantonese-language radio show. Then just 27, Kenney was the president of the fiscally conservative Canadian Taxpayers Federation, and had been invited on the program to discuss economic issues. The host started translating callers’ questions into English, and Kenney—who at the time subscribed to the conventional wisdom that immigrant communities skew leftward—was shocked by what he heard. “The calls were coming from the right,” he recalled to me. “People commented on government waste, on their dislike of welfare culture, and on abuse of our asylum system by false refugee claimants. There was this whole parallel universe of political opinion that had never made its way into the mainstream.”

Premier of Alberta Jason Kenney makes an election-campaign visit to members of the Sikh community in Edmonton. (Candace Elliott / Reuters)

Today Kenney is perhaps the most important conservative politician in the country—the premier of Alberta, the Canadian equivalent of a governor. Before that, he was a chief architect of the CPC’s multicultural-outreach strategy. And if there was a birthplace for that strategy, it was in that studio, a quarter century ago.

In 2004, the CPC lost its first election as a combined party to an unpopular Liberal incumbent. Kenney, however, was convinced that the problem had less to do with policy than with messaging and tone. If you wanted people to take an interest in your platform, he reasoned, you first had to take an interest in them. Conservative candidates were used to showing up at events for gun owners and meetings of the Canadian Federation of Agriculture; now they’d be expected to appear just as regularly at mosques, gurdwaras, and cultural community centers.

Kenney himself was a tireless retail politician who campaigned doggedly in suburban neighborhoods thousands of miles from his district. “My Irish ancestors, who stepped off the boats in New York and Boston in the 1850s, were greeted by folks from the local Democratic Party organization, who got them their first houses and jobs,” he said. “That led to generations of partisan fealty. It was all about relationships.”

For the CPC, building such relationships didn’t mean changing its political identity. Once elected as prime minister, Harper sometimes crafted policies with diaspora communities in mind, such as removing visa requirements for Polish and Hungarian visitors and publicly apologizing for the historic mistreatment of Chinese people in Canada. But the party mostly doubled down on conservatism, reasoning that small-business owners (many of whom are immigrants) would respond well to anti-tax rhetoric, just as observant Muslims and Hindus would appreciate messaging about family values and faith.

The most difficult balancing act was in the area of immigration. Under Harper, Canada admitted more new people than at any previous point since the First World War. But the CPC also reoriented the immigration system away from family reunification and toward younger, skilled immigrants, and introduced new laws making it tougher for certain classes of refugee claimants to resettle in Canada. Here, again, communication was key. Ian Brodie, Harper’s former chief of staff, told me that he encouraged CPC politicians to talk with voters in a “status reaffirming” way, emphasizing that the new policies were not anti-immigrant but rather favored “law-abiding” immigrants who would “create jobs and raise good families.”

[Read: Canada’s secret to escaping the ‘liberal doom loop’]

The CPC didn’t go in for snobbish, country-club conservatism. Its policies—agricultural subsidies, expanded gun rights—had a folksy appeal, and Harper explicitly addressed his speeches to “ordinary Canadians” who felt estranged from the urban liberal elite. Patrick Muttart, a former Harper strategist, describes this outlook as one of “inclusive populism,” a message he says was supposed to resonate beyond a white rural base. In 2011, Harper won his third consecutive election and his first majority mandate. This time he did better among new Canadians than among the native-born.


What happened next is well known. By the time of the 2015 election, the CPC was embroiled in controversies over several hard-line initiatives, including a law enabling the government to strip Canadian citizenship from dual nationals convicted of terrorism and an election pledge to establish a “barbaric cultural practices” hotline whereby citizens could snitch on foreign-born neighbors. The European migration crisis and a domestic trial over a thwarted terrorism plot had brought issues of border security to the fore, and turmoil in the industrial sector had heightened anxieties about foreign competition for jobs. “The influence of nationalism in Western democracies was touching Canada,” says Tim Powers, a policy analyst and former CPC adviser. “In this climate, new Canadians and immigrants were convenient targets of derision.”

In the end, Harper’s cynical messaging limited his appeal, and Trudeau, with a cheery, Instagram-friendly campaign, recaptured many of the multicultural, suburban areas that the CPC had so recently won. If the 2011 election demonstrated that an inclusive populist coalition was buildable, the 2015 one showed just how quickly it could disappear.

[Read: Canada’s surprising history of blackface]

Today Scheer is looking to avoid Harper’s mistakes, but he’s operating in an even more noxious culture. Canada has seen the rise of energetic protest movements in which participants borrow symbols and rhetoric from the European far right, while a right-wing splinter political organization, called the People’s Party of Canada, is promising to dramatically reduce immigration rates and push back against “extreme multiculturalism.”

Sometimes the CPC has seemed to cave to nativist pressures. In 2018, a party operative circulated a racist social-media advertisement criticizing Trudeau for his allegedly lax immigration policies: The image depicted a black man with a suitcase approaching a hole in a border fence. Scheer himself later publicly lambasted Trudeau for signing the Global Compact for Migration, a nonbinding United Nations resolution that Scheer claimed would undermine Canada’s immigration sovereignty, an assertion that seemed oddly paranoid coming from a politician who prides himself on equanimity. Recent news stories have revealed that one CPC candidate shared Islamophobic conspiracy theories on social media, while another had ties to a prominent white nationalist. (Trudeau has seized on revelations such as these as evidence that the CPC is a front for bigotry, although some voters question whether he has the moral authority to make such claims.)

The campaign itself, however, has been mostly in line with Muttart’s notion of inclusive populism. The CPC is running a relatively diverse slate of candidates with a platform centered on nickel-and-dime issues such as tax credits for commuters and reduced heating costs. Its signature campaign pledge—to support the oil sector and scrap the carbon tax that the Trudeau Liberals passed—seems designed to pit fuel-dependent exurban voters against downtown environmentalists. And Scheer’s affable, soccer-dad persona makes him a suitable foil for Trudeau, the wealthy scion of Canada’s most storied political family. Preelection polls put the two parties at a statistical tie.

[Read: ‘There’s a perception that Canada is being invaded’]

If Scheer prevails, his victory will call into question fashionable notions about the future of conservatism. In their 2002 book, The Emerging Democratic Majority, John B. Judis and Ruy Teixeira argued that demographic change in the American electorate could pose existential problems for the Republican Party. Since then, liberal pundits have gleefully interpreted this thesis to mean that growing cultural diversity portends the demise of the right. But such assumptions rest on questionable premises: that voting patterns remain fixed over time, that minority voters are natural liberals, and that political movements cannot adapt to change.

Recent Canadian history suggests otherwise. And if the CPC pulls out a win—or even a respectable second-place finish—it will have offered a blueprint for how populist conservatism can survive in the 21st century.

On a chilly morning in September, I watched Scheer give a campaign speech in Alberta’s capital, Edmonton. It was the day after an international climate strike, and the CPC leader stood defiantly on the back of a pickup truck, flanked by Kenney. In the crowd, amid white men in cowboy hats and buffalo-plaid jackets, there were Asian, Middle Eastern, and African Canadians. Scheer’s topic was Trudeau, whom he positioned as an international playboy—more concerned with shoring up his green credentials and global-celebrity status than with the needs of ordinary citizens. “Trudeau travels around the world,” Scheer said, “and he talks down the men and women who exploit our natural resources.”

Like all populist rhetoric, the speech turned on a distinction between the people and the elites. The elites were celebrity environmentalists and foreign advocacy groups. (The previous day, the Swedish activist Greta Thunberg had met with Trudeau to discuss scaling back the extractive sector.) The people, Scheer implied, were in the crowd in front of him, here in the nation’s oil-producing heartland. “I will defend Canada and Canadians’ interests,” he promised, to enthusiastic cheers. The people in the audience weren’t all white or native-born, but they seemed to agree that when Scheer spoke about Canadians, he was speaking about them.


Brexit Will Never Be Over

Monday morning, the Queen put on her crown and reading glasses to deliver an 11-page speech from the throne in the House of Lords. Scenes do not get more British than this: Horse Guards clopping down the avenues; diamonds glinting in the TV lights. Following customs that have been obsolete for decades if not centuries, the prime minister and his cabinet stood on foot, shoved into a corner, to hear the words they themselves had put into the monarch’s mouth. Everything looked much as everything has looked for as long as anyone can remember. So it’s quite weird to absorb how utterly the British system of government has collapsed.

The basic rule of the British system is that the prime minister commands a majority in the House of Commons. Lose that majority, and you have to stop being prime minister. In the 19th century, that tended to mean handing the job over to somebody else. In the 20th century, it meant calling an election. In the 21st century, it has meant … well, what does it mean?

[Read: The never-ending Brexit crisis]

Boris Johnson became prime minister in July. He met Parliament for the first time in September—and promptly lost his first vote. Over the next five days, he lost five more, each on an essential issue.

Johnson tried to sidestep his loss of control by proroguing Parliament. His opponents challenged him in court and won, forcing Monday’s ceremonial reopening. The Queen’s Speech is traditionally followed by a vote. As things are going, Johnson looks likely to lose that vote, too. After expelling 21 Conservatives from his own caucus to punish them for prior rebellions, he can count on only 288 votes in the House of Commons, 32 short of a majority.

At any previous moment in British history, Johnson’s government would have fallen by now and an election been called. But Britain amended its law in 2011 to fix parliamentary terms at five years, unless two-thirds of Parliament votes for an early election. The idea was to ensure stability. Instead, the fixed-term amendment has created a political ghostland: a government lacking democratic legitimacy, but also unable to put itself out of its own misery. (One of the votes Johnson lost was a vote for early elections.)

Andrew Cooper, the pollster and strategist to former prime minister David Cameron, offers one explanation as to why Britain cannot form a stable government.

Twenty years ago, the safest Conservative seats in the country were mostly economically secure and mostly ethnically English. The safest Labour seats in the country were mostly economically distressed and ethnically diverse. The two parties would then battle for the votes of everybody else, but the clash between the economically secure English and the distressed and diverse provided the main battle front of British politics.

Over the past two decades, this map of politics has lost relevance. Only 9 percent of British people still strongly identify with a political party. Old partisan identities have faded before a sharp new bifurcation: Leave versus Remain. Almost 90 percent of British voters identify with these new ideological categories, and 44 percent do so strongly. This new bifurcation has rotated the old political map. Leave is strongest where voters are distressed and ethnically English. Remain is strongest where voters are economically secure and diverse. The old parties are struggling to find their footing in the new heartlands.

[Read: Why a ‘Brexit Election’ will make things worse]

By committing to Leave, the Conservatives have acquired new supporters who want more government protection from the rigors of globalism—even as the party’s own internal justification for Brexit was to rip up EU regulations, shrink the British state, and reposition Britain as more global, not less.  With the government determined to Leave, the opposition Labour Party should logically speak for Remain. But under the left-sectarian leadership of Jeremy Corbyn, Labour does not want the voters who would be attracted to a Remain message. This has left Labour with no coherent message at all on the single most important issue facing the country.

The result is a paralytic muddle, in which Labour snubs its most natural voters and Conservatives plot to betray theirs.

The next few days will see a sequence of dramatic events in British politics. The timeline is confusing, the outcomes unpredictable. But it’s a good guess that Johnson will pull a rabbit out of a hat, procure something that can be sold as a deal, and put himself on the road to the election he wants. Johnson and his advisers hope that once they put Brexit behind them, they can return to the familiar politics of rich versus poor, Thatcherism versus socialism, up versus down. Jeremy Corbyn and his Marxoid advisers hope the same thing. They will all be disappointed. They are talking about yesterday’s issues in tomorrow’s world. Brexit will never be over, not even if Britain quits the European Union, because the discontents that caused Brexit will still seethe the day after Brexit—and probably more than ever.


The Atlantic Politics Daily: What Warren Won’t Say

Today in Politics

It’s Thursday, October 16. Today, interrogating the 2020 Democratic front-runner. ¶ Plus, putting a price tag on Medicare for All. ¶ Finally, have you heard of Jon McNaughton? You may have seen his art.


What Warren Won’t Say

The latest Democratic primary debate lacked electricity. Overshadowing the wholly “disorienting evening,” David Graham writes, is the ongoing impeachment inquiry:

“There are several reasons the debate never really took off, but the central problem was that each of the candidates is seeking to excite the Democratic base, and right now the thing that is most exciting to Democrats is impeaching Donald Trump.”

[Read the rest of “The Democratic Primary Is Now a Sideshow,” by David Graham]

The only flare-ups of the night were between Elizabeth Warren and those who essentially anointed her as the front-runner through their direct challenges. Our writers have long interrogated the “I have a plan” candidate on her plans:

1. “The tenor of the critiques Warren received tonight were far milder versions of the attacks she’ll have to fend off from Republicans if she’s the nominee,” Russell Berman writes.

2. “Elizabeth Warren has a lot of plans—including a plan not to cop to how she would pay for Medicare for All,” Edward-Isaac Dovere reports from Ohio.

3. She’s been asked a variation of this question at every debate, and “she’s sticking to her party’s age-old wariness of telling middle-class families in a simple sound bite that their tax bill might go up,” Russell points out.

4. Warren’s higher-education proposals “have been welcome in the black college community—even though the mechanics of exactly how the fund will operate are still a bit messy,” our education reporter Adam Harris writes, after interviewing Warren earlier this year.

5. Warren laid some groundwork for her foreign-policy thinking in a major speech nearly a year ago. “But it’s already becoming clear that when it comes to foreign policy, Warren’s vision is more conventional; Bernie Sanders’s is more radical. And both leave crucial questions unresolved,” Peter Beinart argued then.


Argument of the Day


A new study puts a price tag on the plan that candidates like Warren and Bernie Sanders were defending last night: $34 trillion in the first decade of its operation. Ron Brownstein takes a hard look at the eye-popping figure:

The Urban Institute estimates that a single-payer plan would require $32 trillion in new tax revenue over the coming decade.

How big a lift is it to raise $32 trillion? It’s nearly 40 percent more than the total revenue the CBO projects Washington will collect from the personal income tax over the next decade (about $23.3 trillion). It’s more than double the amount the CBO projects Washington will collect over the next decade from the payroll tax that funds Social Security and part of Medicare (about $15.4 trillion).

→ Read Brownstein’s full story here.

+ More from Ron: “How L.A.’s Health-Care Reform Is a Lesson for Democrats.”


Before You Go


Even if you haven’t heard of the artist Jon McNaughton, you’ve seen his work in your news and social-media feeds.

He’s gone viral with paintings of President Donald Trump clutching the American flag (Respect the Flag), Trump playing football (All-American Trump), and Trump at the easel unveiling his masterpiece (The Masterpiece). McNaughton is the closest thing the Trump administration has to a court artist, although liberals see him as more of a court jester. Art critics call him a propagandist and purveyor of populist schlock. He “panders and preaches to the converted” with work that is “drop-dead obvious in message,” says Jerry Saltz, the senior art critic for New York magazine. Others see McNaughton as a straight-up comedian.

→ A professor of art history tries to understand Trump’s court artist.


About us: The Atlantic’s politics newsletter is a daily effort from our politics desk. Today’s edition was written by Shan Wang. You can reach us with questions, comments, or concerns anytime by replying directly to this email.

Your support makes our journalism possible. You can subscribe here.


The Eye-Popping Cost of Medicare for All

Senator Elizabeth Warren’s refusal to answer repeated questions at last night’s debate about how she would fund Medicare for All underscores the challenge she faces in finding a politically acceptable means to meet the idea’s huge price tag—a challenge that only intensified today with the release of an eye-popping new study.

The Urban Institute, a center-left think tank highly respected among Democrats, is projecting that a plan similar to what Warren and Senator Bernie Sanders are pushing would require $34 trillion in additional federal spending over its first decade in operation. That’s more than the federal government’s total cost over the coming decade for Social Security, Medicare, and Medicaid combined, according to the most recent Congressional Budget Office projections.

In recent history, only during the height of World War II has the federal government tried to increase taxes, as a share of the economy, as fast as would be required to offset the cost of a single-payer plan, federal figures show. There are “no analogous peacetime tax increases,” said Leonard Burman, a public-administration professor at Syracuse University and a former top tax official in both the Bill Clinton administration and the CBO. Raising that much more tax revenue “is plausible in the sense that it is theoretically possible,” Burman told me. “But the revolution that would come along with it would get in the way.”

At the debate, as throughout the campaign, Warren refused to provide any specifics about how she would fund a single-payer plan. Instead, whether questioned by moderators or challenged by other candidates, she recycled variants on the same talking points she has used in venues from campaign town halls to a recent appearance on The Late Show with Stephen Colbert. Rather than explain what revenue she would raise to fund the plan, Warren insisted that, under single payer, middle-income families would save more money with the elimination of health-care premiums, co-pays, and deductibles, regardless of any taxes imposed. “Costs will go up for the wealthy and for big corporations, and for hard-working middle-class families costs will go down,” she said at the debate.

[Read: The risk of Elizabeth Warren’s dodging]

That calculation itself is disputed. And it sidesteps a more immediate problem: Even if families would eventually save under a single-payer system, a President Warren would still need to identify a politically plausible funding plan to pass such a program through Congress. By all indications, that looms as an extremely daunting project.

The new Urban Institute study helps define the magnitude of the task Warren (or Sanders) would face. The think tank modeled the costs of eight possible plans to expand health-care coverage that generally track ideas from the Democratic presidential candidates. By far, the most expensive was its version of the single-payer plan that Sanders introduced in the Senate and Warren later endorsed: a blueprint that would eliminate private health insurance, require no co-pays or premiums from individuals, and provide everyone in the United States (including undocumented immigrants) an expansive benefits package including dental, vision, and home health care.

The 10-year cost of $34 trillion that the study forecasts nearly matches the CBO’s estimate of how much money the federal government will spend over that period not only on all entitlement programs, but also all federal income support, such as the Supplemental Nutrition Assistance Program. Former Vice President Joe Biden said incorrectly at the debate that the single-payer plan would cost more annually than the total existing federal budget—it would cost less. (CBO says Washington will spend about $4.6 trillion in 2020.) But over the next decade, the plan on its own would represent a nearly 60 percent increase in total expected federal spending, from national defense to interest on the national debt, according to CBO projections.

The Urban Institute estimates that a single-payer plan would require $32 trillion in new tax revenue over the coming decade. That’s slightly less revenue than its projected cost because it would generate some offsetting savings by eliminating certain tax benefits the government now provides, such as the exclusion for employer-provided health care.

How big a lift is it to raise $32 trillion? It’s nearly 40 percent more than the total revenue the CBO projects Washington will collect from the personal income tax over the next decade (about $23.3 trillion). It’s more than double the amount CBO projects Washington will collect over the next decade from the payroll tax that funds Social Security and part of Medicare (about $15.4 trillion). A $32 trillion tax increase would represent just over two-thirds of the revenue the CBO projects the federal government will collect from all sources over the next decade (just over $46 trillion).
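A back-of-the-envelope check of those comparisons, using only the CBO figures quoted above (all of which are rounded, so the ratios are approximate):

$32 trillion ÷ $23.3 trillion ≈ 1.4 (nearly 40 percent more than projected income-tax receipts)
$32 trillion ÷ $15.4 trillion ≈ 2.1 (more than double projected payroll-tax receipts)
$32 trillion ÷ $46 trillion ≈ 0.7 (just over two-thirds of all projected federal revenue)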

Taxes that can fill that big a hole are not easy to identify. Even by Warren’s own estimates, which some liberal economists consider too optimistic, her proposed wealth tax on personal fortunes exceeding $50 million would raise just $2.75 trillion over the next decade. That’s less than what would be required to fund a single-payer plan for one year. In any case, Warren has already targeted that revenue for other proposals she’s issued, such as providing universal preschool and child care, and canceling most college debt while funding tuition-free public higher education. Repealing the tax cuts for businesses and individuals that President Donald Trump and the GOP Congress passed in late 2017 would similarly raise about $2 trillion in federal revenue over the next decade.

Other tax options would likewise make a relatively minor dent. For instance, some Democrats have proposed for years to eliminate the current cap on the payroll tax—which stops taxing income above about $133,000—and instead impose the tax on all income above a higher threshold, such as $250,000. The CBO recently estimated that such a plan would raise about $1.2 trillion over the next decade, again a small share of single payer’s cost. Besides, Warren has already proposed such a tax hike and earmarked the money to increase Social Security benefits.

Burman told me the broad-based income-tax increases that Sanders has discussed using to fund single payer—including raising the top income-tax rate past 50 percent and ending reduced taxation for capital gains—would likely cover about half the proposal’s cost. If Warren or Sanders tried to cover the other half with a value-added tax—a sort of national sales tax that many European nations use to fund their social-safety net—the rate would likely need to be set around 25 percent, he estimates. “All of the things Sanders proposed plus a high VAT by European standards might get you there,” Burman said.

Alternatively, some advocates have discussed raising the payroll tax to fund a single-payer plan. Currently the payroll tax is set at 15.3 percent of earnings, with the cost split between employees and employers. Former CBO Director Douglas Holtz-Eakin, now the president of the center-right think tank the American Action Forum, told me that level would need to rise substantially to fund a single-payer plan. He said, in a “ballpark” estimate, that Sanders’s plan “would require [a] payroll-tax hike of 20 to 25 percentage points.”
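Taken at face value, that ballpark estimate implies a combined payroll rate of roughly 35 to 40 percent of earnings: 15.3 percent plus 20 points at the low end, 15.3 percent plus 25 points at the high end.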

Whichever alternative Warren or Sanders selects, a single-payer plan would require increasing federal revenue at a rate not seen in 70-odd years, both Burman and Holtz-Eakin told me. Measured as a share of the economy, total federal receipts tripled during World War II, rising from almost 7 percent to nearly 21 percent of the gross domestic product from 1940 to 1945, according to federal figures. Since then, federal revenue, compared with the broader economy, has generally oscillated within a fairly narrow range. The most it’s increased in a single decade is about 10 percent, during the 1950s, 1960s, 1990s, and in the past decade. (Revenue has increased despite the massive Trump tax cuts because the Great Recession vastly reduced federal revenues and created an unusually low starting point.)

Though federal revenue today still starts at a low level by historic standards (16.6 percent), providing more potential flexibility to raise taxes, the cost of a single-payer plan would swamp any such advantage. By 2029, with the added cost of single payer factored in, federal revenue would increase to close to 30 percent of the total economy. That would mean federal revenue would increase as a share of the economy by about three-fourths over a decade, vastly more rapidly than in any other 10-year period since World War II. It would also mean that federal revenue would considerably exceed the share of the economy it consumed even in World War II, when it reached 20.5 percent in 1944, a level unmatched since.
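The arithmetic behind that comparison, using only the figures above: starting from 16.6 percent of GDP and ending the decade at close to 30 percent works out to a ratio of roughly 29 ÷ 16.6 ≈ 1.75, an increase of about three-fourths. The World War II surge was steeper still, a tripling from about 7 percent to nearly 21 percent, but one compressed into the five years from 1940 to 1945 rather than spread across a decade.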

[Read: The question Elizabeth Warren doesn’t want to answer]

The central response from Warren and Sanders to concerns about their health plans’ cost has been to tout the overall savings for Americans, and the Urban Institute analysis suggests that for lower- and middle-income families that’s possible. The study projects that households will save nearly $887 billion in annual costs and employers another $955 billion, some of which could revert to workers in the form of higher wages.

Though that’s much less than the new taxes required for the plan, the organization says that lower- and middle-income families might still come out ahead: “Higher-income people will likely face the greatest increases in taxes, meaning their new tax burdens would likely exceed their savings; the reverse is likely true for lower-income populations,” the report concludes. (Among other dissenters, that conclusion is disputed by Kenneth Thorpe, a leading health economist at Emory University and a former assistant secretary in Clinton’s Health and Human Services department. In a study Biden cited at last night’s debate, Thorpe calculated that, under single payer, 71 percent of households with private insurance today—almost 70 million households in all—“would pay more in new taxes than they would save through the elimination of premiums and cost sharing.”)

The debate over savings is impossible to resolve so long as Warren refuses to offer any indication of what revenue she would raise to fund her plan. On CNN this morning, South Bend, Indiana, Mayor Pete Buttigieg lashed her for dodging the question. “I have a lot of respect for Senator Warren, but last night she was more specific and forthcoming about the number of selfies she’s taken than about how this plan is going to be funded,” he said.

What is clear now is that the Sanders version of single payer—which Warren at the debate called “the gold standard” of health-care proposals—would cost vastly more than any other alternative. The new analysis found that plans similar to the one Biden, Buttigieg, and other candidates have proposed—centered on expanding a public option to compete with private insurance companies—would achieve nearly universal coverage at a cost of roughly $122 billion to $162 billion annually, depending on exactly how it is designed. Even what the analysts called a single-payer plan “lite”—requiring some co-pays and offering somewhat-less-generous benefits, without covering undocumented immigrants—would cost about $1.5 trillion annually, about half as much as the Sanders and Warren proposal.

Such comparisons are certain to compound the anxieties that many Democratic health-care experts feel about trying to defend in the general election a single-payer plan that would eliminate private health insurance and require such a large increase in federal spending.

“Many countries do not wrest the entire burden of every single person’s health care into the federal government,” said Neera Tanden, the president of the liberal think tank the Center for American Progress and a former health-policy adviser to Barack Obama and Hillary Clinton. “I think there are big questions about the United States moving from the most conservative health-care system to the most leftward government-run health-care system.

“There are positives and negatives to any of these options,” Tanden added. “But one issue in a country that has more anxiety about the government’s role in people’s lives is whether it is feasible, or even sustainable over the long term, to have the federal government [grow so much] in size because the entire system of health care would be run through the government.”


Et Tu, LeBron?

Less than two years ago, Fox News host Laura Ingraham infamously said LeBron James should “shut up and dribble,” after the NBA superstar criticized President Donald Trump. Now everyone—especially on the right—is on the Los Angeles Lakers forward’s case for disheartening comments he made about the explosive political situation between the NBA and China. Not even pro basketball’s biggest star can fix the bind the league is in. If anything, James is reinforcing it.

On Monday, James weighed in for the first time on the international firestorm swirling around the National Basketball Association. The controversy began after Houston Rockets general manager Daryl Morey tweeted his support of the Hong Kong protesters on October 4, right before the Lakers and the Brooklyn Nets were scheduled to play exhibition games in China. Morey’s tweet was quickly taken down, but the fallout—which included lost sponsorships for the Rockets and China’s refusal to allow the broadcast of two preseason games—has continued and even intensified after James voiced his opinions.

[Jemele Hill: The NBA is going to have to choose]

“I’m not here to judge how the league handled the situation,” James said to reporters before the Lakers’ preseason game against the Golden State Warriors. “I just think that when you’re misinformed or you’re not educated about something, and I’m just talking about the tweet itself, you never know the ramifications that could happen. We all seen what that did. Not only for our league, but for all of us in America, for people in China as well.”

Especially because James has built a reputation for speaking his mind about important issues, many American fans were dumbfounded that he not only took a position that aligned with China’s, but also faulted Morey for—as the old saying goes—playing with the church’s money. In other words, Morey’s tweet had put the entire NBA and all of its players’ lucrative relationships with China in the crosshairs.

Like many other American businesses, the NBA has staked much of its future on China, a market in which 300 million people play basketball. Several NBA players have signed healthy endorsement deals with Chinese companies, or, like James, have made regular visits to China as they continue to capitalize on a Chinese market that is thirsty for American basketball. All of that was nearly compromised by one tweet.

“Sometimes,” James continued, “you have to think through things that you say that may cause harm not only for yourself but for the majority of people.”

As someone who has been critical of his own government, James is the last person who should ever give the impression that he favors the suppression of any viewpoint—even one that is proving to be as costly and uncomfortable as Morey’s has been for the NBA. Freedom of speech isn’t something that should be protected only on a case-by-case basis. It becomes a slippery slope for some people—including James—when it actually requires some sacrifice.

Still, expecting James and other NBA players to solve a tense political crisis between China and the league is unfair, and outside of their responsibilities. Inevitably, strengthening the league’s relationship with a repressive country was going to involve some pitfalls. Even though James, more than any other player, is the face of the NBA, he doesn’t have to be an authority on China-Hong Kong relations. Nobody should be surprised if he’s more vocal about issues he’s lived with in America than those in China.

In 2017, James called Trump a “bum” in a Twitter posting, which became the most retweeted tweet by an athlete that year. James was coming to the rescue of his colleague Stephen Curry, the Golden State Warriors guard whom Trump attacked after Curry admitted he had no interest in making the traditional visit to the White House to celebrate the Warriors’ NBA championship.

That was just one of many times James went after Trump, who finally fired back at James by insulting his intelligence. While conservatives excoriated James for his criticisms, the former league MVP didn’t lose a single dollar over what he said about the president. The NBA may have gained a few more detractors, but it was largely business as usual, and it certainly didn’t put the league in the middle of a crisis.

But what if James’s takedowns of Trump had resulted in a more costly backlash? What if the league, other players, and James himself had lost sponsorships and endorsements? Would he have apologized or backpedaled?

[Read more: The NBA-China disaster is a stress test for capitalism]

Considering how fiercely James has unloaded on Trump, it’s hard to imagine his taking back anything he said. And if the league had ever moved to sanction him in any way for his views, the blowback would have been immense. And the league would have deserved it.

But, according to an ESPN report last night, in a private meeting between players and NBA commissioner Adam Silver in China, James was anything but a defender of free speech. He seemed to argue to Silver that Morey should be punished because, if a player sent a tweet that jeopardized an international relationship and millions of dollars, the league would have issued some level of discipline. Silver reportedly cited James’s situation with Trump as proof that wouldn’t happen.

That is a puzzling stance for James to take. Had a player tweeted what Morey did, James surely wouldn’t have suggested any discipline—even if James himself were frustrated with being put in the middle of the resulting controversy. On Twitter, James tried to further explain his position, but given the seriousness of the situation in Hong Kong, I doubt many fans were receptive to his explaining how this created a challenging week for the players who were in China as everything unfolded. The Hong Kong protesters were certainly aware of James’s opinions: His comments so angered them that they burned his jersey—something American fans have done when angry at James, but for much different reasons.

Morey likely didn’t anticipate that his tweet would practically destroy his team’s relationship with China. He probably also couldn’t have foreseen how much it would jeopardize players’ business interests in China. An unnamed Lakers player, sources told ESPN, lost out on a $1 million endorsement opportunity with a Chinese company because of Morey.

At the meeting with Silver, according to ESPN, James accused the Rockets general manager of not thinking about how his pro-democracy tweet would affect others in the league, and complained that the commissioner and Morey himself had left players to handle a public-relations crisis they didn’t cause. Which is true. When it invested so heavily in the Chinese market, the NBA—like lots of American companies trying to do business in a country whose government doesn’t care about human rights and does not tolerate criticism from its own citizens, nor, apparently, from its business partners—was just crossing its fingers and hoping for the best.

By painting Morey as the villain, though, James is giving the Chinese government a free pass for its heavy-handed, petty overreaction. After all, Morey’s show of support for the Hong Kong protesters was just one tweet; it was hardly Martin Luther King’s “Letter From Birmingham Jail.” It looked as if the Chinese government just used the tweet to show the NBA who has the upper hand in their relationship.

Predictably, James’s comments are being used against him, by some of the same people who are quick to criticize black athletes for speaking out. “The masks are off,” Ingraham declared on her Fox News show. Now suddenly they’re looking for LeBron James to single-handedly defeat communism like when Rocky Balboa took down Ivan Drago.

James pointed out that his critics seem far more upset about problems happening abroad than the ones they see every day in America. “I also don’t think every issue should be everybody’s problem as well,” he said on Monday. “When things come up, there’s multiple things that we haven’t talked about that have happened in our own country that we don’t bring up.”

James said his priority is protecting players from being put in an awkward position. That’s why, according to the ESPN story, the league followed James’s lead and didn’t make players available to media while in China. James must realize money and principle are almost never in perfect balance—especially if it involves people who ultimately don’t share the same values. Sooner or later, you have to decide which is more important.


Japan’s Changing Face of Depression

The city government worker was just getting the hang of his job when a new hire upended everything. She became his mentee, and she asked him if he could put together a manual on how to do her work. He told her okay, but begrudgingly. The manual was a good idea in theory, but he was busy, and he wished she could just learn through observation, as he had.

Over the next months, as he dealt with more immediate deadlines, the worker kept pushing the manual off. His new colleague grew frustrated. “All day, morning and evening, she kept asking me, ‘When will the manual be ready? When will the manual be ready?’” the worker told me through an interpreter.

The manual was a mundane request, but it made him feel confused and powerless. He didn’t know how to communicate to the new colleague that he didn’t have the time and that explaining the job was difficult. Repeated over and over, her request caused his anxiety to ratchet up to extreme levels. He hesitated to delegate work to her, which meant that he took on even more. He started having problems sleeping and eating.

Finally, the worker says, he went to lunch with his boss to discuss the situation. His boss assured him that it wasn’t his fault and asked him to work on the manual as best he could. Still, when he came back to the office, he could see the new colleague giving him the side-eye. Later, she asked him why she hadn’t been invited out, too.

That evening, the worker went home and collapsed in his living room. He felt like he couldn’t go to work anymore. The next day, his wife took him to the hospital, where he was diagnosed with depression. He was allowed to take a hiatus from his job for a few months. A graduate of a prestigious state-run university, he couldn’t believe what was happening to him.

The worker was one of a few patients in similar situations introduced to me by Takahiro Kato, a professor of neuropsychiatry at Kyushu University in Japan. (Kato requested anonymity for the patients to maintain their privacy and protect them from repercussions at work.) Kato believes these patients’ distress is an example of an emerging condition that he refers to as “modern-type depression.” At its heart, the condition is a struggle by some workers to learn how to assert themselves in a social context where they have little practice. And its reach might extend far beyond Japan.


Aside from a few researchers, most mental-health professionals in Japan don’t use the term “modern-type depression.” It isn’t a clinical diagnosis, and despite its “modern” tag, characteristics of the condition likely have always existed alongside other forms of depression. The term first gained prominence in the 1990s, when Japanese media seized on it to portray young workers who took time off from work for mental-health reasons as immature and lazy.

While the term still carries stigma, Kato believes it’s useful to examine as an emerging cultural phenomenon. In the West, depression is often seen as a disease of sadness that is highly personal. But in Japan, it has long been considered a disease of fatigue caused by overwork. The traditional depressed patient has been a “yes man,” someone who always acquiesces to extra tasks at the expense of his social life and health. What makes modern-type depression different, according to Kato, is that patients have the desire to stand up for their personal rights, but instead of communicating clearly they become withdrawn and defiant.

Clinically, this type of behavior first started to appear with some frequency in the work of Shin Tarumi, a colleague in Kato’s department at Kyushu University. In the early 2000s, Tarumi noticed that some of his younger depression patients, particularly those born after 1970, had an entirely different personality profile than traditional depression patients. They didn’t try to maintain harmony at the expense of themselves, and they had less loyalty to social structures. Instead, they avoided responsibility. They tended to fault others for their unhappiness.

Several years after Tarumi died, Kato took over the line of research based on his own clinical observations. There are no definitive statistics on the prevalence of this type of patient. Patients exhibiting these characteristics tend to be middle class. Most are men, because men are more likely to seek professional help in Japan. There’s no connection to a particular type of job, as the issues patients face are mostly interpersonal. What they do share are similar personality traits and social conditions.

Kato connected his findings about these patients to Japan’s public discourse around modern-type depression because he found the term useful for exploring a fairly recent cultural flux. Modern-type depression patients, Kato believes, are in an uncomfortable limbo state, trained to be dependent in their family and social lives and unclear how to adapt to a quickly evolving company culture that asks them to be more assertive. While they want to speak up for themselves, their ways of going about it are ineffective and immature.

One patient Kato introduced me to was a 34-year-old engineer. At first, the engineer was happily employed at a government office, but he says he was transferred against his wishes to another known for its long hours. He repeatedly asked if he could be moved again, but his supervisor told him it was impossible. He lost his motivation. Months after he started asking, he was finally granted the transfer, but it was too late for him to snap out of his withdrawn state. When we spoke, the engineer was in the middle of a long hiatus from work.


Kato has found that a variety of disruptive changes in Japanese culture, from childhood through the workplace, have made it difficult for many workers to adjust to a corporate ethos in the country increasingly based on Western individualism. He lays out these causes in two papers in the journals Psychiatry and Clinical Neurosciences and American Journal of Psychiatry.

Japanese parenting is one major factor. As Japan focused on rebuilding economically after defeat in World War II, Kato observes, men were busy working and mostly absent, so the culture began promoting the ideal of the nurturing, even coddling, mother. The mother-child bond became symbolic of the Japanese behavioral pattern of amae, a desire by children to be loved and act self-indulgently well into adulthood. While some psychologists have promoted the importance of this nurturing relationship, others say that, taken to extremes, it discourages children from becoming autonomous adults.

Kato believes this problem of dependence was compounded by Japan’s education structure. In the 1970s, the government education system deemphasized competition and focused more on allowing students to develop their own interests. This approach, called yutori kyōiku, was a huge contrast to the strict schooling that had led to Japanese success in the past. Today, yutori is widely criticized for bringing down the overall rigor of Japanese education. Some blame the idea itself, and others believe that it was just implemented incorrectly. Either way, the more relaxed system offered fewer opportunities to contend with demanding authority figures or competition from peers.

As Kato explains, many who were brought up within this environment had a major wake-up call when Japan’s economy hit a period of stagnation in the 1990s. At work, they faced an older, paternalistic model of leadership and had to put up with heavy criticism from bosses. In the past, unending diligence under such pressures would at least lead to senior positions; job stability was pretty much guaranteed as the country experienced years of steady economic progress. But the bursting of the bubble economy meant that this silver lining had disappeared.

To keep a job, it was no longer sufficient to follow basic orders. Now, workers had to prove themselves as individuals, and many had never developed that skill. It was especially hard on those whose personalities tended to be withdrawn or less socially skilled, who might have been able to fly under the radar in the past. Some simply gave up. “Modern-type depression patients are living out the consequences of a nation transitioning from a culture of collectivism, in which they have to accept their rank within a family, to a capitalistic workplace where they have to forge their own path,” Kato says.


Modern-type depression does not seem to be isolated to Japan. In a 2011 study, Kato surveyed 247 psychiatrists, half of them from Japan and half from eight other countries, including Australia, Bangladesh, and South Korea. He gave the psychiatrists two case vignettes resembling traditional and modern-type depression, and found that both descriptions were familiar to many of the participants.

Based on these doctors’ replies, modern-type depression appears to be most prevalent in urban areas within collectivistic cultures that are experiencing rapid socioeconomic changes. Taiwan, another collectivist society that has rapidly urbanized, had an even higher rate of such cases than Japan; Bangladesh and Thailand also had a high prevalence. As cultures around the world adapt to a globalized workplace, many more workers might be in store for this psychologically demanding adjustment, which could lead to a wave of mental-health troubles that psychologists so far don’t know how to treat. (The same pattern might appear in immigrant populations who move from a country with a collectivist culture to the West, though Kato has not yet looked at such examples.)

In Japan, some researchers remain concerned about continuing to use the term “modern-type depression.” Junko Kitanaka, a medical anthropologist at Keio University, worries that the historical stigma that comes with the label unnecessarily pathologizes young people’s dissatisfaction at work, when it would be more helpful to build a workplace culture in which they can thrive. “If it’s used to better understand workers’ psyche and the genesis of depression, then it’s good,” she says. “But I don’t think it is used that way in general discourse. It is used in a way that places blame unnecessarily on the individual worker’s personality.”

So far, no medical consensus exists on therapeutic interventions for the condition, whatever it’s called. While efforts to normalize depression in Japan have led many people to seek treatment, Kitanaka says that the country still needs to educate people about the many different forms depression can take beyond the current stereotype of self-sacrifice. Kato has proposed that psychosocial interventions such as group therapy and changing companies’ work environments should be the primary treatment strategies, since medication has been shown to be less effective for modern-type depression.

Kato is currently studying 400 patients long-term to see what protocols work best. In the meantime, one therapy he recommends is Rework, a program that Tsuyoshi Akiyama, a psychiatrist at NTT Medical Center in Tokyo, started for treating conventional workplace depression. More than 220 clinics in Japan use it. The program is run as an imitation workplace, where participants do readings, have discussions, play sports, and work out puzzles with each other. Trained staff members watch and give them ideas about where their interpersonal problems might lie and how to work more effectively.

The city government worker I spoke to who struggled with writing the manual for his colleague is one beneficiary of Rework. After returning to his job, he had a hard time adjusting, because he felt everyone was handling him with kid gloves. He couldn’t find a way to reassure them he was okay, and all of his overthinking about the situation made him lag behind and relapse. Through Rework, he began to see that he needed to start simply doing the work instead of getting caught up in the social dynamics.

Today, he says, if a coworker asked him to make a manual, he wouldn’t blame himself so much if he couldn’t get it done. He would simply state what his limits are. “I was hesitant before to talk to someone who I didn’t want to communicate with,” he says. “Now if I have a difficult colleague, I can handle it.”
