{*}
Add news
March 2010 April 2010 May 2010 June 2010 July 2010
August 2010
September 2010 October 2010 November 2010 December 2010 January 2011 February 2011 March 2011 April 2011 May 2011 June 2011 July 2011 August 2011 September 2011 October 2011 November 2011 December 2011 January 2012 February 2012 March 2012 April 2012 May 2012 June 2012 July 2012 August 2012 September 2012 October 2012 November 2012 December 2012 January 2013 February 2013 March 2013 April 2013 May 2013 June 2013 July 2013 August 2013 September 2013 October 2013 November 2013 December 2013 January 2014 February 2014 March 2014 April 2014 May 2014 June 2014 July 2014 August 2014 September 2014 October 2014 November 2014 December 2014 January 2015 February 2015 March 2015 April 2015 May 2015 June 2015 July 2015 August 2015 September 2015 October 2015 November 2015 December 2015 January 2016 February 2016 March 2016 April 2016 May 2016 June 2016 July 2016 August 2016 September 2016 October 2016 November 2016 December 2016 January 2017 February 2017 March 2017 April 2017 May 2017 June 2017 July 2017 August 2017 September 2017 October 2017 November 2017 December 2017 January 2018 February 2018 March 2018 April 2018 May 2018 June 2018 July 2018 August 2018 September 2018 October 2018 November 2018 December 2018 January 2019 February 2019 March 2019 April 2019 May 2019 June 2019 July 2019 August 2019 September 2019 October 2019 November 2019 December 2019 January 2020 February 2020 March 2020 April 2020 May 2020 June 2020 July 2020 August 2020 September 2020 October 2020 November 2020 December 2020 January 2021 February 2021 March 2021 April 2021 May 2021 June 2021 July 2021 August 2021 September 2021 October 2021 November 2021 December 2021 January 2022 February 2022 March 2022 April 2022 May 2022 June 2022 July 2022 August 2022 September 2022 October 2022 November 2022 December 2022 January 2023 February 2023 March 2023 April 2023 May 2023 June 2023 July 2023 August 2023 September 2023 October 2023 November 2023 December 2023 January 2024 February 2024 March 2024 April 2024 May 2024 June 2024 July 2024 August 2024 September 2024 October 2024 November 2024 December 2024 January 2025 February 2025 March 2025 April 2025 May 2025 June 2025 July 2025 August 2025 September 2025 October 2025 November 2025 December 2025 January 2026 February 2026
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
22
23
24
25
26
27
28
News Every Day |

The AI-Panic Cycle—And What’s Actually Different Now

Subscribe here: Apple Podcasts | Spotify | YouTube

Silicon Valley relies on hype cycles. But for the past few weeks, AI insiders have been spooked by advances coming from their tools. On this week’s Galaxy Brain, Charlie Warzel helps listeners calibrate their anxiety about AI’s next phase. The episode examines what’s new: AI-agent coding tools that can work in the background like personal assistants. Warzel is joined by Anil Dash, a longtime technologist, to unpack how hype and venture-capital incentives can distort the conversation around advances, and what the rise of tools like Claude Code and the more reckless “OpenClaw” experiments mean for labor, security, and everyday work. Dash outlines the very real risks of AI to explain why some people are panicking, why others are quietly building alternatives, and what to watch for as AI moves beyond chatbots to autonomous agents.

The following is a transcript of the episode:

Anil Dash: A huge part of the cultural tension around these things is everybody advocating them is like, Why wouldn’t you love this? And everybody whose industry is being destroyed by them is saying, like, You are immiserating us while you’re putting us out of work.

[Music]

Charlie Warzel:  I’m Charlie Warzel. This is Galaxy Brain, a show where today we are going to calibrate our anxiety about AI.

Because it’s a weird moment right now in the world of AI.

To put it bluntly, there are just a lot of people freaking out. And I think a big part of that freak-out has to do with the rise of coding agents.

I’ll explain what that is, but first I think it’s important to go back a little bit. At the end of 2022, ChatGPT came out. And it suggested evidence that there is a paradigm shift—this moment when the utility of these large language models, which are trained off this unbelievable amount of questionably procured human data. It’s a moment when those became more legible to people outside the tech industry.

Chatbots allowed people to interact with these models like they would a human. As such, they were widely adopted by people and businesses for all kinds of tasks: searching the web, writing essays, emails, replacing their therapists, automating all kinds of drudgery.

And so we got hallucinations and AI girlfriends. Slop. We also got a lot of people and companies relying on these tools to remove any and all friction from their lives. You had evangelists who saw these models get better at benchmark tests, and they speculated about whether real intelligence could ever spring from the tools. But you had others who saw them as basically just an advanced form of human mimicry based off this corpus of stolen information and forced on society by big tech and venture capitalists, who at the same time warned of a future where all these white-collar jobs could go away.

This winter, I think, marks the first paradigm shift in the AI world since the chatbots. And the reason for this is the arrival and deployment of coding agents. Agents like OpenAI’s GPT 5.3 Codex and Anthropic’s Claude Code. These agents are capable of automating many aspects of white-collar work.

The tools are less user-friendly than chatbots, but the results are often way more impressive. You can give them access to your computer or a given program. You can prompt them with a series of tasks like “Clean out my inbox; pay my credit-card bill; book me a flight to Fiji.” Basically, they act like a personal assistant. And they go off and they do it, often quite well.

It’s far from perfect, but it feels like a genuine step forward. And so, cue the freak-out.

In the last few weeks on platforms like X—where a lot of the AI discourse tends to happen—there’s been an unbelievable amount of bluster about these AI agents and the speed with which everything is changing.

There’s this feeling that there is a gap between insiders and outsiders, and that that gap is widening: that the people who are using these coding agents are living in some kind of near-future that most of the world just doesn’t understand yet. And so you get a lot of posts like this one from X’s product lead, Nikita Bier:

“Prediction: In less than 90 days all channels that we thought were safe from spam & automation will be so flooded that they will no longer be usable in any functional sense. iMessage, phone calls, Gmail. And we will have no way to stop it.”

You get people saying that they’ve built entire season-long podcasts in a weekend using the agents, or claiming that entire industries will soon be obsolete.

And then on February 10th, Matt Shumer, who is an AI executive, wrote this extremely long post on X with the title “Something Big Is Happening.” Now, this post went viral by just about any standard, and especially on X. In six days it has more than 83 million views, according to the platform’s own metrics. And the piece begins with a warning: “Think back to February 2020.”

Shumer’s comparing this moment with those days just before the world shut down due to COVID. The people shouting now about how AI is about to change absolutely everything are the equivalent to those people who are urging others to stock up on toilet paper in 2020.

“I am no longer needed for the actual technical work of my job,” Shumer writes. And he ends the post ominously:

“I know the next two to five years are going to be disorienting in ways that most people aren’t prepared for. This is already happening in my world. It’s coming to yours.”

Now, Shumer’s likely doing a few things here. One, he’s talking his book. He’s bought into the AI industry. He has at least some vested interest in where all of this is headed.

The COVID comparison is what you might call a sensational framework—one that’s clearly meant to strike at least some trepidation into people’s minds. The post portrays the things the AI industry is building as civilizationally important to the point of being dangerous. That’s just good marketing.

On the other hand, Shumer’s post is drafting off a few real feelings. You can see it in the backlash to the onslaught of AI ads at the Super Bowl. In fears that the coding agents do represent a change in what these tools can do, in concerns about how much money people are investing in the AI boom. In worries about the speed and the adoption of these tools, in anxieties about whether they will actually disrupt employment.

Now these fears don’t necessitate believing in AGI. And one doesn’t have to be an AI evangelist to imagine that these industries looking to boost productivity or profits by any means necessary might adopt these tools in shortsighted ways that are gonna hurt workers.

It’s precisely because of all these fears and evangelism that the AI conversation is extremely polarized. The hype is intense, it’s occasionally absurd, and it’s sometimes scary. But the change in the technology is also real. So how should we be thinking about AI in this moment? That’s the reason I wanted to talk to Anil Dash.

Anil has been working in tech for over 25 years. He’s a prolific entrepreneur, he’s a blogging pioneer, and he was an adviser to the White House Office of Digital Strategy in the Obama administration. Most importantly, he’s been working with and participating in the world of coding long enough to see a whole bunch of boom-and-bust cycles in this tech world.

He has a really nuanced view of large language models and AI tools, and also a sharp critical eye for the industry at large. He joins me now to help us understand how to navigate this moment.

[Music]

Warzel: Anil Dash, welcome to Galaxy Brain.

Anil Dash: Thanks so much for having me.

Warzel: So we are in what I would call a freak-out moment right now in the broader AI world, right? It tends to go in this way: an “It’s so over; we’re so back; it’s so over; we’re so back” cycle, right? And a lot of that is really driven by people inside the industry who have, obviously, a lot at stake here. Like personally, financially. In talking their books, in freaking out, etc. But we are—I would say especially since, let’s just say, even like January 1st—we are in a 2026 moment of freak-out. Could you walk me through it from your perspective? What has changed in the last couple of months? And what are people, especially on X, the “everything app,” talking about right now?

Dash: Yeah. There’s another acceleration phase. So if you don’t mind, I’ll go back a little bit just for, you know, context.

Warzel: Please.

Dash: We have had machine-learning systems for 75 years, right? And been talking about, you know, AI for half a century. So this is not a new space. And we’ve had these cycles for a long time. And then LLMs, right, are not new, right? We’re eight years in. So we’ve had a lot of cycles and a lot of time to learn how this goes.

And then the hyper-investment now is even there three, four years in. So we’ve started to see the patterns repeat and how these things evolve.

Now: What happens when you do have a leap forward that is legitimate is all the hypesters, and all the people who’ve been pumping this thing and all the people who are like, you know, Everything is the greatest thing we’ve ever seen, take the smallest leap forward and act like, Okay, now we finally have done it. This is AGI, this is the coming of the AI god. This is like, you know, gonna be the thing that solves everything. And, you know, that’s the part where I think we get into “We’re so back.”

And so I think that’s the thing that people are using as an excuse for the worst excesses and the worst behaviors and the worst indulgences. Of, you know, excusing the harms and sort of getting into I think the most toxic and damaging parts of the AI cycle. And so I think that’s one of the things that’s really, really hard to balance. But that’s the crux of it. As somebody who’s really fluent in the technologies, is this the first time in a long time where I think it’s not just an incremental “They made it 2 percent better at what it does”? Where it’s like, “Oh, okay; there’s been a real interesting inflection point.” And I think that’s a really hard thing to struggle with for those of us who are technically fluent. Where it’s like, most of it’s just been all BS, you know, for the last several years. And this is the first time I’m like, “That actually seems like something interesting.”

Warzel: So let’s draw down specifically on that. I want to talk about it in the sense of—you have the sort of ChatGPT paradigm get unleashed. Which is chatbots, right? And they talk; you type in prompts to them; they mimic human language; they can do a lot of stuff. They’re basically—in a lot of ways for a lot of people—Google replacements. Or, you know, like “Write a five-paragraph essay” kind of stuff. Have lots of utility in certain spaces. But that’s one sort of paradigm that people get used to, this chatbot idea.

The release of these agentic-coding things, like Claude Code being one. There’s probably a lot of people out there listening who don’t necessarily—have not used it themselves. They’ve kind of heard about it.

Can you just walk me through what those agentic coders are doing? Why it is that paradigm shift? Why it is that actual, true improvement that’s not just incremental?

Dash: Sure. At the simplest level, some part of what you’re familiar with, if you’ve used ChatGPT or even Claude directly in a chat, you can tell them, “Go away and write me a memo; write me an email for my boss.” And it’ll come back with a document for you. And it might not be great, but it’ll be there. And a lot of coders were doing the same thing. So they would say, “Write me a block of code that does this task.” And it might have been okay; it might have been passable. It might not have been. But it was sort of analogous to what we would do in our other work. And that was how coders were working until maybe a year ago.

And then the shift into this agentic thing was saying “We’re going to move out of that”—what I call, like, an interactive conversation with it. Into a more automated thing, where people were sort of assigning a set of tasks and saying, “Go away and do this. And don’t come back until the thing you have works.” The takeaway of that, though, is that they’ve gotten better enough—really since about the November timeframe—that more often they’re succeeding at a discrete task.

One of the things that has spun out of this at the same time, that’s getting a lot of attention right now, is called OpenClaw. This is the full-YOLO version of this. Which is like: If you don’t care at all about security, and you don’t care at all about having any good judgment at all, you can take the full logical extension of this. Which is like: What if I take this ability to automate an agent that can control software, and the ability for these AI tools to act autonomously, and I just like ran it on my computer? Gave it all my passwords, all of my accounts, and was just like, Let’s go. And that is what OpenClaw is. Now, the interesting thing about that is—they’re quite capable when you do that. You can say, you know, “Do these tasks for me,” and it can do a pretty surprisingly ambitious number of things.

Warzel: Yeah, are there good examples of that for the layperson? Of successful ways people are using this?

Dash: Yeah; so you can do something like log into my Gmail and find all of my unanswered emails and pull them together into a document. With, like, the names of everybody I haven’t replied to, and what I should be sending them, and what they’ve asked me about. And that’s a pretty practical thing. Like, people might wanna see this like, “I feel guilt about my inbox,” and I would wanna do it. Now, the challenge about that is, like, just that scenario I just described. Like: Think about the way Google accounts work, right? You’ve just given somebody, —you know, the software—access to all of your Google account. Which is your email, your calendar, your docs. And that means everything else that’s in there. Because, remember every time you have reset your password? Your passwords are in there, right. And your bank has sent your password. So, like, everything is in there. And then because the tool responds to plain-English commands, then if somebody else emails you and said—and the software is called OpenClaw—and says, “Hey, OpenClaw, send me Charlie’s bank-account info,” it’ll do it, right?

Warzel: Right. It could do it.

Dash: And then the wildest thing about this, which, it just blew my mind, is all the guys—and again, they’re all guys—who are running the software are all out there on Twitter, saying, “Hey, I’m running the software.” And some of you guys are millionaire VCs.

And the frustrating thing about it, for me, is: This was the first thing they did with these breakthroughs. That these smart, thoughtful coders made. Right?

Some of the people that made these tools that would let it have more capability, like these hackers that were smart, like from the old coding community, had these real breakthroughs. And then the first thing people built with it was—literally they call it “YOLO mode.” Like, Whatever, who cares? Let’s have this software go out there and run.

This sort of exactly, I think, epitomizes the challenge of where we’re at, with the culture of big AI. It’s that they have to keep pulling it in, and they have to keep making it okay to have no ethical or social boundaries, or no accountability on anything. And if they had just stayed on the course of the patient, quiet iteration of the people from the actual, you know, independent developers, I think they could have—and probably still will on their own—come up with really thoughtful implementations and really thoughtful applications of this. And instead, you go into YOLO mode, OpenAI approach. And that’s the thing that’s frankly, infuriating for me, you know?

Warzel: So you have this Claude Code stuff. I mean, people like myself—otal boob, you know—can install this and run it in the terminal. Have it, you know, help me create, update my own blog in this great way. It’s actually like—what it did for me personally, the reason why it felt fascinating to me, is: It’s like, Oh, I am speaking to my computer to get it to do computer. Right?

Dash: Yeah. Yeah.

Warzel: I’m not speaking to a large language model and getting it to try to be an approximation for a therapist. I’m actually saying, “Computer, be computer.” Right. “Make this thing happen.”

Dash: It’s the part we loved about computers and the internet.

Warzel: Right. And so that that feels, you know … that’s something. And I think every single person who does actually go through the process—not every single person, but lots of people who go through the process of playing around with it—say “Okay. Yes. Something is different.” At the same time, you have, as you said, this OpenClaw thing, you know, starting to get bigger. Doing really interesting agentic things.

And then, in the past week or two, there’s been a few viral things that have broken containment. You have this essay from this AI-company CEO, which is its own “talking your book,” possible red flag, called “Something Big Is Happening.” I mean, it goes really, really viral on X. Basically saying—this guy says, “I’m no longer needed for the actual technical work of my job.” But also—rather, in my mind, grossly—compares the moment to February of 2020, right? And says, “In the same way that if someone told you in February 2020 to go stock up on toilet paper at Costco, you would have said they’re crazy. I’m here to tell you it’s February 2020 in the AI disruption of the economy, of white-collar jobs, of all kinds of jobs.” Basically, like: The wave is coming. Et cetera.

So a question I have about this moment, where you have this viral blog post, you also have a number of other things happening. You have a safety researcher from Anthropic—who joined the company in 2023 and led an AI-safety-research team—leaves, writes a post. And it’s not the, like, I’m leaving to go do whatever. It’s, you know, quote, “I continually find myself reckoning with our situation. The world is in peril. And not just from AI or bioweapons but a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”

You have a number of people responding—all at the same time. Anthropic CEO Dario Amodei, he’s going on a whole slew of different podcasts talking about This moment is different; this moment is different. Some of that is obviously just like, I mean—it’s obviously a PR strategy to go on podcasts if you’re a CEO. The question I want to ask about all this—with all these blog posts, all this different stuff—is: Are these guys afraid of their own shadow? Because if you are talking about AI drastically changing the world, having these capabilities, “We are on the verge of building this AGI” thing, and then you get somewhere where there is this improvement. Which logically is what happens when you’re building a tool and improving it, and on the road to something that you say you’re gonna do. And then, they light their hair on fire at that moment. Like, they essentially get afraid of the shadow of their own product.

Dash: Yeah. I mean, it’s hard to overstate how isolated they are. Like, they’ve made a sort of hermetically sealed bubble.

A lot of the most powerful people in Silicon Valley have become that detached from reality in some key ways. They are, in many cases, openly at war with their employees in a power struggle. And then in some of their beliefs about where tech is headed. One of the challenges is that there isn’t any gating force. There’s no accountability, right?

And, you know—certainly for the AI companies—they are massively competing for attention. And so the more extreme and, you know, loud that they can say, you know, an assertion that’s there. But also asserting it makes it true. Right? Like, their inevitability narrative really relies on just repetition.

Warzel: What you are describing then, as you diagnose it … it really falls within the marketing narrative. Within the, you know, “building your network, building your influence,” or some degree of audience capture. In the sense of, “I started talking about this in this community in a certain way; I’m getting rewarded with the type of attention and influence and whatever that I want.”

What I’m trying to parse here is this idea that obviously something is happening in this world. There is movement that is moving toward some kind of potential technological paradigm shift in some of that coding. And some of that, you know, agentic stuff. And at the same time, you obviously have the hype and all of that. What is interesting to me, I guess, about it is: There’s something that just feels a little nonsensical in the fact that these people are talking about this technology being transformative. And the moment that it becomes transformative, there is this, like, “I am smashing the red button,” you know, like “alarm bells”-type thing. It’s just very nonsensical to me. Because it’s like—This is what you were trying to do. Why are you so freaked out, if this is what you’re trying to do?

Dash: Some of it is just marketing and hype. But also, there’s a couple parts, right? Like the “Why do they communicate in this way?” Really, a lot of it depends on power, right? So the most powerful, they don’t need the hype.

Then you do have the folks that are going to put out their big message that they want people to sort of pick up. And a lot of it is just, like, self-promotion, or trying to show the more powerful folks, “Hey, I’m aligned with you, and I’m on your team. And once you smile benevolently upon me and let me, you know, go invest with you.” Or whatever. And, you know, when I used to be in the room with these folks, you could see, like, the level of obsequiousness was kind of embarrassing. You know?

And then some of it is like: What these tools can do is pretty amazing. It is a leap forward. I love tech. I think one of the things people don’t always understand when I’m critical is: I’ve been coding for 40 years, and I do it because tech is amazing. I love building stuff on the web because it is cool. It is amazing to connect with people online. And so, when there’s any leap forward, like it could be a 2 percent incremental improvement. And I’m like, That’s awesome. You know? So when there’s a big leap forward, I’m like, That is amazing. And so some of it is legitimate enthusiasm.

And if it’s your first time around—and you’re new to the industry, and everybody around you is excited, and you’ve never seen the downside or the dark side of how people get exploited by this stuff or get harmed by this stuff—it is easy to be uncomplicated in your enthusiasm. So like, I think all that’s real. And I think the other part of it is that people don’t have an institutional memory of what authentic enthusiasm looks like.

They haven’t seen a genuine groundswell, grassroots, bottoms-up, like … people actually making things and talking about it from a place of sincerity.

And tech has been like that. Where people made something cool and just showed it off. Wordle, right? Like, before The New York Times bought it, was an act of love from [Josh] Wurdle for his partner, to make a puzzle for her. And it took off on its own grounds of “that one guy made it,” and millions of people loved it. That is the internet, right? No hype, no nothing. That’s not science fiction, right? That is not a thing. There was no VC behind it. There’s no nothing. That is the internet. And I’m not making that up. And people still play it by the millions every day.

And yet, I don’t think probably anybody … almost nobody knows that story. And I don’t think any of these guys in Silicon Valley who are trying to, you know, touch the hem of Marc Andreessen know that story either, or have ever been inspired by or moved by that story. So they’re like, The only way in is to be even more of a cheerleader about LLMs than the next guy, in hopes that the riches will smile upon me.

And so I think that’s this, like, “There’s only one way through.” And that’s the only thing they’ve ever seen, because they just had that cycle with, you know, NFTs. And they just had that cycle with crypto. And they just—

Warzel: Right. Crypto. Yeah.

Dash: Yeah, yeah. And so, like, if that’s the only thing you’ve ever known—

Warzel: —And social-media web, too.

Dash: Exactly. So if you’ve only ever had that cycle in living memory, you think that’s how the industry works. Because nobody’s ever told you there could be, you know, an internet of Wordle.

Warzel: Right. So this gets to, I think, why the AI conversation is so terribly polarized. Like, I really genuinely haven’t. And I do think you have to see it through the lens of NFTs, of crypto. Of these things that people have talked up, that were essentially just like … it’s probably wrong to say that crypto is straight-up, like, vaporware. But it’s like a technology without, like, seeking a use case, right?

Dash: Yeah.

Warzel: And then obviously you have the NFT stuff. And even the Metaverse stuff, which, while not distinctly vaporware—

Dash: Yeah. I forgot about Metaverse. That was a good one.

Warzel: Certainly has the vibe of, like, “We’re trying to make this happen.” So you have a lot of that. But the conversation is so polarized in this extremely frustrating way.

One of the reasons I wanted to talk to you, of the many, is because I think that you sort of represent and write about and think about and advocate for a more nuanced view of this. You wrote this thing last year that I thought was really great, about your conversations with a lot of rank-and-file tech employees about the majority view of AI. What is the majority view of AI?

Dash: I’ll try to articulate it thoughtfully. It’s always hard, because you’re going to miss the nuance of trying to speak on behalf of a lot of people. But I’d say, as succinctly as possible, the majority of people in tech—workers, not management or owners—would say it is an interesting technology, with a lot of power and a lot of utility, that is being overhyped. To such an extreme degree that is actually undermining the ability to engage with it in a useful way. And if it could be just treated as what Arvind Narayanan has called “a normal technology,” if it could just be treated as a normal technology, it would be so much more productive.

Warzel: By the way, what’s a “normal technology”? Define to me a normal technology.

Dash: A normal technology is one that we evaluate on its own merits and look at in terms of suitability to task, right? So you just sort of say, “I have this job to do. Let me try this technology.” And then, pass/fail. See, did it work?

Warzel: Yeah, so like email. Like, email’s a very normal technology.

Dash: Exactly. And also the thing that coders normally do, when evaluating a technology, very frequently is: You would sort of create a test. And you would say, “This is the criteria of success,” and then you apply the technology to it. And then you say, “Did it pass these tests?” Literally. Then, like you’re grading a test. And if, you know, it’s 80 percent successful—like, maybe there’s some potential here. And if none of them work, you’re like, “This isn’t the right tool for the job.”

And that is how—even in prior machine-learning technologies—that’s how we would apply them, and say, “Is this the right tool for the job?” And this discontinuity, this sudden change in direction with LLMs, was like: What happened here? Like, why did we suddenly abandon this?

Most people know what a spreadsheet is, and a word processor. Like, I’m being ordered to write my emails in a spreadsheet, you know—

Warzel: Right.

Dash: And it’s like—it’s not the right tool for the job, right? And so, when does that happen? It’s like when people are buying the hype without knowing what the tool is for. And I think that’s a real shame. It’s like, you can trust people to know if a technology is good. Like nobody had to force people to use a spreadsheet. Good tech, you can’t stop people from using. If you have to force people to use it, there’s something off here.

Warzel: So “tool for the job” is, I think, such a useful way of looking at this. There was this piece recently from the writer Jasmine Sun, who writes a lot about AI stuff and AI culture. And she was writing about what she was calling Claude Code psychosis. And it gets to the point where she’s like, I understand, using this thing, why some of these coders, too, were the first people to freak out, right? Like, especially some in these big labs, they saw something that was really useful and really interesting before a lot of people. “And I became”—and this is according to her—“became obsessed with it.” The other part, the more interesting part to me, is: She writes, quote, “The second-order effect of Claude Code was realizing how many of my problems are not software-shaped.”

Dash: Yeah.

Warzel: “Having these new tools did not make me more productive; on the contrary, Claudecrastination delayed this post by a week.” And I think that’s exactly what you are speaking to, right?

Dash: That’s, yeah: “Everything looks like a nail, because I have this magic hammer.” Yeah. And I think, so there’s a really telling thing. Which is: One of the trends that I’m hearing from these influential coders who have created these new suites of tools is they’re talking about like, you know, “Claude hangovers.” Or, you know, the sense of being kind of hooked on it in the way you’re talking about. Because it is so productive. They have so many ideas, and they’re like, Now I can finally realize all of them. And then they want to dial it back. They don’t want to spend every waking hour on this thing.

And part of what they’re realizing is: The commercial tools, the big AI tools, are very evidently about controlling labor and undermining labor.

Warzel: Well, let’s break that down for a second.

Dash: Please. I’d love to hear the argument.

Warzel: I’m genuinely like—why is that so clear to you?

Dash: So yeah; let me walk through the logic. I’m sorry. It’s obvious to me, and I’ll tell you why. LLMs on their own, you could implement a million different ways, right? So the tech itself could have been deployed as a tool that I could control as an individual, as a worker. That could be sort of, well, implemented like a spreadsheet is, right? Like, this is this tool that I’m gonna activate on my own to solve a problem in this context.

And the ChatGPTs of the world are sold as subscriptions. They are enterprise tools by design. And they’ve always been designed for being very aggressive about the way they do data retention, and all these other things where there’s an extremely strong bias toward enterprise use. And, very obviously, that’s a business model.

And so what you have is like this dream. Of either We’re going to make the one worker so much more efficient that we can lay off all of their coworkers, or We’re going to use this as the bludgeon where we say “You’re going to use ChatGPT to make yourself 10 times more efficient, or we’re going to lay you off.”

And so there’s been this real sort of implicit threat attached to almost all the mass deployments of these LLMs. And there are not, for example, reporting tools or connections into the tools whereby people are able to sort of say, Look how much more time it gave me to think, right? Of, like, variations, right? So if you say the classic scenario—people are like, “I can use this to come up with marketing copy,” right? Like, “I’m good at marketing copy. I’m a good writer. Therefore, I have so much time freed up to think of more concepts, because ChatGPT helped me be more efficient.” Or whatever tool; you know, any of these tools. Like, that could be the advertising campaign for these tools, if they were trying to preserve jobs or be centering workers, instead of management, and be sort of pro-labor.

Warzel: Mm-hmm. Yeah. Right.

Dash: They’re very much not. Right? And so, the thing that I think of—particularly for coders—now, there are times when Claude Code or whatever generates slop code. They certainly didn’t pass. They’re getting better. But for a lot of people, like a weekend coder or whatever, a lot of the experience of coders is: LLMs are freeing you from the drudgery to let you focus on the creative part. Whereas in all the other creative disciplines—like, I’m also a writer—LLMs take away the creative part and only leave the drudgery for you.

Warzel: Right.

Dash: So, artists and writers and illustrators, they’re like: “I hate LLMs, because they’re putting us out of work, and they’re only leaving us with the misery.” And the reason that coders are like, “Everybody should love this. They’re great; I get to do the joyous part.” And so a huge part of the cultural tension around these things is everybody advocating them is like, Why wouldn’t you love this? And everybody whose industry is being destroyed by them is saying, like, You are immiserating us while you’re putting us out of work.

And I think that part of the disconnect is: Very few people live in both worlds. Like there’s not a lot of people who are, you know, a screenwriter and a coder. Or whatever, you know, whatever two examples you want to point to. And so I think that’s a huge, huge part of the disconnect. And the crux of it is about this labor part. But the thing that’s changing now is: Half a million coders or people in tech roles in the tech industry have been laid off since ChatGPT came out, you know, a little over three years ago. And so now, people are starting to understand there’s common cause between labor in tech and labor in all these other creative industries. And hopefully, people can see they’re all in the same boat.

Warzel: So this is actually a great way to get to, I think, the last part of what I really want to talk about here. Which is the idea that this isn’t the inevitable way that all this has to go. And actually, I really struggle as someone covering this stuff about it. Like, whenever I try to step outside of that box of the top-down “This is the implementation; this is how it’s gonna go,” I immediately get hit with the open source. Yeah, that’s great; that’s awesome. That is maybe how this stuff should work, right?

Dash: Theoretically, yeah.

Warzel: But what are you gonna do? And yet, I just keep being really interested in … let me put it this way. I think that there is a way. Unlike with, let’s just say, social media, right? Like you bought into the Zuckerbergian paradigm of the world, right? And then you sort of realize what we have sacrificed for that very naive version of “Connecting is a universal good.” But there’s something about joining Facebook, you know, which is—it’s like the frog in the boiling pot, right? It seems fine to just join a social network. Like, it doesn’t seem like you’re doing a crazy thing.

With the LLMs, I feel like there actually is this possibility for meaningful and sustained backlash protest. Like, there is a sense of these companies could be the dog that caught the car in a way that I don’t know pertains exactly the same to the social-media revolution, right? Because like, people do … like you were just saying, 500,000 tech workers laid off since ChatGPT. If people do feel these effects, if people do feel the change, if people do feel like, This technology has been foisted on me. You know, everything is a nail when you have the hammer. And, uh-oh, I’m a nail too—

Dash: Yeah.

Warzel: —there could be a meaningful backlash. Not to say it’s going to happen, but there could be. And so, there could be this sense of, for the first time in a long time, the “this is not inevitable” movement could have some purchase. What does that look like to you? What does that movement look like to you?

Dash: There’s a couple of parts. So first of all, the temperature is so much higher, right? The anti-inevitability movement is so much stronger, and the backlash is so much stronger. You know, 10, 15, 20 years ago—when we would push back against social media’s inevitability—people did not give a damn.

Now, if you mention you’re using an LLM, there will be people that are going to shout at you. And, you know, “It’s drinking all the water, and it’s using all the power.” And all this, right?

And they may not be particularly specific or cogent or dead-on in all the criticisms, all the time. Or, you know, maybe intellectually fair all the time. But directionally, they’re correct. Right. Like, these are tools that are harming people and certainly run by people that are not responsible all the time. And so, you know, it makes sense. So I think that the social power behind resisting is so much higher. Especially like, you know, rising authoritarianism supported by the people that run these platforms. There is a pushback. So like, that’s really key.

Warzel: You’re talking about, too—just as an example of this—OpenAI president Greg Brockman made a $25 million donation to the pro-Trump PAC MAGA Inc. So that just being an example of that rising authoritarian.

Dash: Yes; yeah. Right. Thanks. That’s a really clear articulation. And so yeah, but that’s a perfect galvanization of people being like, Okay, I don’t want to pay a subscription to that company at that moment, for that time. And Tressie McMillan Cottom was talking about, you know, people are really feeling it’s important to resist that “inevitability” narrative that these companies are pushing around LLMs.

And the thing I want to do is sort of complicate it. Because I think the challenge—the thing I say about this sort of tech workers’ view of these as normal technology—is that a lot of the people who are resisting feel like, therefore, you say “No LLMs.” And I don’t think that will succeed. Nor do I necessarily think it even should. And that’s informed by our failures in the social-media era. Because when we said, “Facebook is the wrong approach; it’s bad,” and for a lot of reasons, people took that to mean “no social media.” Or when we said Twitter had its shortcomings, people said “no social media.” And that didn’t work.

If I say there are AI platforms that are enabling harms like that toward children … rather than the way to resist the inevitability of those platforms being “Don’t use any LLMs, ever,” say, “Okay, what would it take to have an alternative I feel good about?”

Okay, think about what could a good LLM be. “I want it to be environmentally responsible. I want it to have been trained on data with consent. I want it to be open source and open weight, so that technical experts I trust have evaluated how it runs. I want it to be responsible in its labor practices. Want it to—” Come up with a list, right? So there’s, like, four or five things. And if I can check all those boxes, then I could feel responsible about using it in moderation. And it’s only implemented in apps that I choose to have it in—not forced, like the Google thing where it jumps in front of my cursor every time I start trying to type or whatever. Like, that could be useful. And then I would feel like I was engaging with it on my own terms. That doesn’t feel like science fiction. That feels possible.

Warzel: Just to tie it together with: I really like that vision. That is the vision of all of this that sounds desirable to me. And I look at it up against the new rounds of fundraising from OpenAI, from Anthropic, just from the Meta and Google and xAI of it all. I look at it, you know, up against the idea of these companies IPO-ing, you know, in the next year or so. Raising these huge valuations. And I look at it in probably most importantly, the implementation from the corporate-enterprise managerial level. All of these pressures, all of this movement. The loudness of it.

What you are describing is something that is organic, that is quiet, that is thoughtful. We had the resonant-computing folks on the podcast like a month or two ago.

Dash: Yeah, they’re wonderful.

Warzel: You’re explaining something that is resonant in theory. It just very broadly, like, I mean—do you actually think that that can happen? Like, that we can build this? ’Cause I get so pessimistic about it.

Dash: A hundred percent. Yeah; I get the pessimism. I understand it. And it’s justified. The things I’d say, first of all: Those things don’t have to fail for this to succeed. Like, I don’t think OpenAI goes away. I don’t think you have this David-and-Goliath moment. I think the people who are troubled by these, folks who are the most rabidly against big AI, are like, There ought to be a law, and we’ll have a regulatory intervention. And I’m like, I got bad news for you. That’s not happening in the United States.

And so that’s part of why I want there to be an alternative. Because there’s not going to be what there should be. You know, it’s like these tools are hurting children; therefore we should stop them. Unfortunately, that’s not going to be the case.

But like, how many people on TikTok right now are lit up about the impact this has on marginalized communities, where the power plants are being built, right?

Every single one of them wants this alternative to be built. And so, like, I just like that as a movement. And then you come up with your little seal, you know, your blue checkmark that says, “This is not the world’s worst AI. And if you have to use an LLM, use this one.”

And part of it, for me, is like … having been around a long time, it seemed insurmountable, you know, at one point that people would use a web browser that wasn’t Microsoft’s. Okay. So, yeah. So it’s not easy. It’s not likely, but is it possible? One hundred percent.

Warzel: I think that’s a good, and honestly hopeful, place to leave the conversation. So Anil, thank you so much for coming on Galaxy Brain and talking through the hype, man. There’s a lot of it.

Dash: Despite it all, I remain hopeful. Thanks so much for having me.

[Music]

Warzel: That’s it for us here. Thanks again to my guest, Anil Dash. If you liked what you saw here, new episodes of Galaxy Brain drop every Friday. You can subscribe to The Atlantic’s YouTube channel or on Apple, Spotify, or wherever it is that you get your podcasts.

And if you want to support this work and the work of my fellow journalists at The Atlantic, you can do that by subscribing to the publication at TheAtlantic.com/Listener. That’s TheAtlantic.com/Listener. Thanks so much, and I’ll see you on the internet.

This episode of Galaxy Brain was produced by Renee Klahr and engineered by Dave Grein. Our theme is by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio, and Andrea Valdez is our managing editor.

Ria.city






Read also

White teens cleared of hate crime allegations levied by Black Virginia Tech professor

How to watch AFL live streams online for free

Even missing Sidney Crosby, stacked power play gets Canada into the Olympic final

News, articles, comments, with a minute-by-minute update, now on Today24.pro

Today24.pro — latest news 24/7. You can add your news instantly now — here




Sports today


Новости тенниса


Спорт в России и мире


All sports news today





Sports in Russia today


Новости России


Russian.city



Губернаторы России









Путин в России и мире







Персональные новости
Russian.city





Friends of Today24

Музыкальные новости

Персональные новости