The Age of A(I)nxiety
On February 26, 2026, Jack Dorsey announced Block would cut about 4,000 employees — nearly half its workforce. He cited AI explicitly. Block’s stock surged 24% in extended trading. On an analyst call, Dorsey said he expects “a majority of companies” to reach the same conclusion within a year.
The following day, the Trump administration ordered every federal agency to immediately cease using Anthropic’s products. Anthropic had refused to remove contractual restrictions barring Claude from autonomous weapons systems and mass surveillance. Defense Secretary Hegseth called the position incompatible with “American principles.” Anthropic was declared a supply chain risk. Agencies have six months to phase out its products.
One story is about AI eliminating jobs. The other is about AI becoming too powerful to deploy without restrictions. Both anxieties are real. They arrived within 24 hours of each other.
The fear is real
KPMG surveyed 2,100 American workers last year and found that 52% now fear job displacement due to AI. That figure nearly doubled in twelve months.
It is also oddly distributed.
Quinnipiac asked a more specific question in April 2025: do you think AI will decrease job opportunities overall? Fifty-six percent said yes. Then they asked: are you personally concerned that AI might make your own job obsolete? Only 21% said yes.
Researchers call this the third-person effect: people accept systemic risk in the abstract but discount it when applied to themselves. It is the same phenomenon that leads people to believe other drivers are more dangerous than they themselves are. We believe the threat is real. We do not believe it applies to us.
The data supports both halves. The threat is real, and most people's personal jobs are probably fine. The danger of the third-person effect lies in the misdirection: the jobs most at risk are not held by the people filling out the surveys.
Worried about the wrong jobs
A Randstad analysis of 126 million job postings found that openings requiring zero to two years of experience have declined by an average of 29% since January 2024. Junior tech roles: down 35%. Finance: down 24%. Logistics: down 25%.
The people most exposed to AI-driven displacement are not the experienced workers answering anxiety surveys. They are workers who have not yet entered the labor market, or who entered recently and found it had changed. They are not worried about losing jobs they are still waiting to be hired into.
The Harvard Business Review reported in January 2026, based on a survey of more than 1,000 global executives, that 60% of organizations have already reduced headcount in anticipation of AI’s future impact. Only 2% cited actual AI performance as the cause.
The anxiety is creating the reality before the technology does. Companies are betting on AI’s potential, and the people paying the price are those whose jobs were expected to exist when that potential materializes.
The gap between floors
The executives making these decisions and the workers living with them are experiencing different realities.
BCG and Columbia Business School surveyed 1,400 U.S.-based employees in November 2025 and found that 76% of executives believe their employees feel enthusiastic about AI adoption. Only 31% of individual contributors actually feel that way. Executives are more than twice as optimistic as the people they lead.
The behavioral paradox runs deepest among workers most native to AI tools. Forty-seven percent of Gen Z workers conceal their AI use from colleagues out of fear of judgment. The same generation that says AI makes people lazier (79%) and less smart (62%) uses it daily and hides the fact. They are performing both halves of the contradiction simultaneously.
This is what the DORA data looked like from the developer side: 30% of developers actively distrust the AI they are shipping, and yet it ships. The distrust does not stop the deployment. The anxiety does not stop the adoption. The coercion is structural.
The other anxiety
While the job displacement conversation dominates, a second anxiety was being validated at the same time.
Since Anthropic’s founding, the company has maintained that certain applications are categorically too dangerous to enable: systems that make targeting decisions without human involvement, systems that surveil entire civilian populations. Those commitments were embedded in its government contracts.
On February 24, 2026, one day before the Pentagon issued its specific ultimatum, Anthropic quietly published Responsible Scaling Policy Version 3.0, removing the hard requirement that had barred the company from training more capable models without proven safety measures in place. The prior policy included a binary pause trigger: reach a capability threshold, stop training until safety is demonstrated. That requirement is gone. Anthropic’s stated reasoning: if one developer pauses while others proceed, developers with the weakest protections set the pace.
Two days later, Dario Amodei wrote: “We cannot in good conscience accede to their request.” The company held the line against autonomous weapons and mass surveillance. The Pentagon called Amodei “a liar with a God complex.” The Trump administration banned Anthropic from government systems.
The two moves sit in tension. Under enough commercial and governmental pressure, Anthropic softened its own limits. Then refused to cross a different one. The point is not to adjudicate which decision was right. The point is that the safety researchers who worried that AI institutions would buckle under pressure were not wrong. The institutions themselves feel it.
The historical reflex
The instinct when confronted with technology anxiety is to reach for the ATM.
In 1985, the United States had 60,000 ATMs and 485,000 bank tellers. By 2002, it had 352,000 ATMs and 527,000 tellers. Automation made each branch cheaper to run, which caused banks to open more branches, which increased total demand for tellers. The job transformed. It did not disappear.
This pattern has repeated often enough to acquire a name: the lump of labor fallacy. The assumption that there is a fixed amount of work to be done. Technology creates productivity, productivity creates wealth, wealth creates new demand, new demand creates new jobs.
The caveat is now explicit in the economics literature. AI performs cognitive tasks, not just physical or repetitive ones. The ATM replaced one function of one job. AI may displace entire categories of cognitive work simultaneously. Goldman Sachs moved from “no significant correlation between AI exposure and employment outcomes” in 2025 to “could push unemployment modestly higher” by February 2026. The economists most inclined to dismiss the anxiety are hedging.
There is also the entry-level problem the ATM analogy does not address. When ATMs were installed, displaced cash-handling work was absorbed by branch expansion and by workers who had already built relationships and expertise. The entry-level collapse does not offer that cushion. You cannot upskill into a relationship you were never given the chance to build.
What I am still figuring out
Whether the third-person effect is protective or dangerous. If people systematically underestimate their personal exposure to AI-driven displacement, they will not advocate for the policy responses — retraining programs, safety nets, labor market reforms — that would protect them when the displacement arrives. The anxiety that is missing may matter as much as the anxiety that is present.
And whether Block is the leading indicator or the outlier. Dorsey said companies are “late” and that most will reach the same conclusion within a year. The HBR data says 60% of organizations are already acting on AI’s potential, not its performance. If performance catches up to potential, that 2% figure grows quickly. The anxiety would be warranted retroactively, which is the worst possible timing.
February 26 and 27 were about different fears. About four thousand people lost their jobs because AI is productive enough to replace them. A company was banned from federal business for insisting that AI should not be allowed to make targeting and surveillance decisions without human oversight.
The hype cycle and the doom cycle both treat anxiety as a single dial to be turned up or down. The data says otherwise. The anxiety about job displacement is real and aimed slightly wrong: toward experienced workers’ personal situations, away from the entry-level collapse already underway. The anxiety about AI systems without safeguards was always real and is now documented in a government blacklisting order.
Both were validated within 24 hours of each other. Neither is going away.