Kate Conger & Ryan Mac: Moving Fast and Breaking Things, at Twitter and the Federal Government
WatchCats Episode 4, with the authors of "Character Limit: How Elon Musk Destroyed Twitter"
As Elon Musk and his teen tech team take a chainsaw to the federal bureaucracy, those of us who followed the South African centibillionaire’s fraught takeover of the platform formerly known as Twitter may be feeling an eerie sense of déjà vu. The hasty mass layoffs, often seemingly conducted with little understanding of who is being fired and what function they served. The dubious projections of a drastically improved fiscal outlook. The general ambiance of fear and confusion, exacerbated by intimidating all-hands e-mails. Haven’t we seen this movie before?
No, it’s not a glitch in the Matrix; Musk is running his Twitter-takeover playbook on the federal government with astonishing fidelity. To help us break down the parallels, we could think of no better guides than New York Times tech reporters Kate Conger and Ryan Mac, authors of Character Limit: How Elon Musk Destroyed Twitter, the most detailed and thorough account out there of the microblogging platform’s transformation into “X.”
You can also listen on your favorite podcast app.
If you just can’t stand waiting for the next exciting episode of WatchCats, consider becoming a paid subscriber to get that sweet, sweet wonkery mainlined directly into your veins a day before the general public.
Episode 4 Transcript
Noah Kunin: Hi, I'm Noah Kunin.
Julian Sanchez: And I'm Julian Sanchez.
Noah: And this is WatchCats. Julian, what did you do last week? What did you get accomplished?
Julian: Well, you know, I have a five bullet point list, which mostly involves recording episodes of WatchCats.
Noah: Does it have your federally mandated minimum of memes and trolling in it? Because I do believe that was in the requirements as well.
Julian: No, I'm afraid I'm behind. I worry that the Grok system that's assigned to evaluate my productivity may decide to replace me with some kind of modulated HAL voice as your co-host.
Noah: Well, hello, beloved audience. If you're just tuning in: Elon Musk, through a new government-wide email system operated out of the Office of Personnel Management, or OPM (a system you might be surprised didn't already exist), sent an email to all federal workers asking them what they got accomplished in the past week. NBC then reported, citing three anonymous sources, that those responses would go into a large language model to analyze that huge amount of text data, presuming some vast share or majority of federal workers actually responded, and that the system would then determine whether someone's work was mission critical or not.
The only catch: OPM actually disclosed last year the AI systems it has in operation or in development, and it listed only two systems in development. And only one of those would even be relevant, which is the out-of-the-box Microsoft Copilot AI that comes with Microsoft 365, their cloud productivity suite.
Julian: You know, it's funny, Noah, this high-speed, seemingly not-very-clearly-thought-out rush to rapidly evaluate large numbers of employees and do large-scale layoffs, often with a perhaps dubious understanding of what the fired employees are actually responsible for… sounds vaguely familiar to me.
Noah: Sounds really familiar to me, too. This is actually a method of team management that is common in Silicon Valley and in startups all over the country and all over the world, for that matter. Rarely is it deployed across an employee base that is the size of the US Federal Government.
Julian: Right, but it is something we've seen in the not-too-distant past, and also at the hands of one Elon Musk. That's why we thought it would actually be pretty relevant to bring on a couple of the folks who have most closely tracked Elon Musk's cataclysmic reorganization of the website formerly known as Twitter. Again, I think, frankly, the most detailed account I've seen of that, of the changes he made there for good and ill, is a book called Character Limit: How Elon Musk Destroyed Twitter—you can tell they have a viewpoint on the balance of good and ill there—by a couple of fantastic New York Times tech reporters, Kate Conger and Ryan Mac.
Noah: One of the reasons I think this perspective is so useful, and this is something Kate and Ryan get to very quickly in the interview, is that Elon really only has one playbook. There's only one sort of meta-strategy he uses at all of his companies and all of his endeavors. So looking at how he's deployed that at SpaceX, Tesla, and X (formerly Twitter) might give us some insight as to what we can expect or not expect going forward with DOGE.
Ryan, Kate, thank you so much for joining us here on WatchCats. Can you tell us a little bit about yourselves, what kind of work you've been doing at the New York Times and on your book, and where you were before the Times?
Kate Conger: Sure, yeah. So thanks so much for having us. We're happy to be here. I have been with the Times since 2018. And when I joined the Times was when I started covering Twitter and then when Elon Musk joined Twitter is when I started covering Elon Musk. So it has definitely shifted the trajectory of my work since 2022.
Noah: I imagine.
Kate: Yeah, as well as the trajectory of the company and the trajectory of government. So his influence is myriad. Before the Times, I worked for Gizmodo and I was covering tech policy, cybersecurity, all kinds of things there. And my history prior to that was a bunch of other tech publications and then local media in San Francisco, where I'm based.
Ryan Mac: I'm Ryan Mac. I'm the co-author of Character Limit with Kate. I'm also a reporter at the New York Times. I'm based in Los Angeles, which is a bit of an odd place for a tech reporter to be based. But I have been here and at the Times for almost four years now. I cover Elon Musk and kind of larger accountability reporting in the industry. Before that, I was at BuzzFeed News, where I covered companies like Facebook and Twitter and Google. And before that, I was at Forbes, where I got my start as a full-time journalist.
Julian: So, first, I want to just suggest to listeners, if they have not been familiar with the book or checked it out, Character Limit, which is my favorite double entendre book title since Christopher Hitchens' The Missionary Position, is really an incredibly impressively detailed account—I mean, down to, I don't know how you guys got long transcripts of people's text messages that I’d think they would not have an interest in sharing with you. But it is a real deep-in-the-weeds sort of blow-by-blow of both the early days of Twitter and Musk's acquisition, and then the transformation of that platform under his stewardship. If you are interested in a really granular look at the process that played out there, it is certainly worth a read.
I do want to ask, though, in the context of the transformation of the Federal Government we're seeing: what are some of the most striking parallels you're seeing between the transformation of Twitter under Musk's tenure and what is going on with DOGE's radical restructuring of the Federal Government?
Ryan: Yeah, I mean, just so many parallels. Early on, we wrote a story for the Times saying, like, déjà vu: the tactics being used in the Federal Government takeover are the same exact things we saw at Twitter. Everyone saw it with the email subject line, "fork in the road." That was a famous email that Elon sent to Twitter employees in the couple of weeks after he took over, offering them essentially a choice: you can come with me and be a "hardcore" employee, or you can take this fork in the road, leave, and get a buyout. That same messaging and same methodology was used with this deferred resignation program that we're seeing play out now.
This idea as well that Elon could come in and improve the Federal Government by just bringing in these young technocrats is also something we saw at Twitter. You know, this idea that he could bring in the smartest folks that he knew from his companies, Tesla, SpaceX, Neuralink, and have them kind of work their way through Twitter and cut the fat, because he viewed the existing employees as incompetent.
That same kind of view is what he's taken to the Federal Government: that there's so much incompetence, so much fraud, so much wrongdoing and quote-unquote "evil." You know, he's framed this as a fight of good and evil, that he's going to come in with these 20-year-old engineers, or these folks that he's found along the way who are motivated to work for him day and night, and they're going to clean the place up. You know, that is part and parcel of what he did at Twitter. And it's kind of, you know, driven all his actions to this day that we're seeing in Washington.
Kate: Yeah, I think there's more similarities than differences, for sure. And I feel like I'm getting annoying at this point, saying "this is just like Twitter." But it is. And, you know, it's interesting. I think there's this perception of Musk as being very disruptive, very chaotic. But at the core, I think he's so consistent and repeats the same things over and over again, the same actions. He trusts the same people. He quotes the same movies. He makes the same jokes. He shares the same memes. He is just very, very repetitive in the way that he operates. And I think that has actually been really helpful for Ryan and me as we're going into covering his DOGE takeover, because there is such a specific playbook to him. And it's very easy to kind of see what's happening and see what's coming next, because he just repeats himself.
Ryan: I want to shout out one of those movies too, which is Office Space. He went into Twitter... I don't know if you guys have seen Office Space. It's a cult classic movie, but, you know, there's the scene of the Bobs, these consultants going in and cutting people, asking, "What do you do here?" kind of thing. That's what they ask people. He cited that movie in the Twitter takeover and he's doing it now with the Federal Government. You know, he's just playing the hits, essentially, of what he knows in his mind. He has these reference points that he goes back to over and over again. And it's just amazing to see, because we spent more than two years covering it. And now we're just, you know, doing it all over again.
Noah: Albeit now, the thing being cut is the US Federal Government. So the scale is a little bit different. And in terms of the manifestation, I think your lens is exactly spot on, which is, I think I would be hard-pressed to say that Twitter, now X, isn't more efficient in providing its core service, right? There's less money, fewer people going in to produce the core X service.
That's a different question from its effectiveness, from its utility, right? And certainly I've seen, across the board, except for a very small slice of the American public, X's utility has gone through the floor, right? And now, what is its value-added service to me? Its value-added service to me is access to Grok 3, an LLM that's going to hallucinate much of the time. And I think that's also going to show us what this next phase is: they really think that AI, and specifically, because AI can mean a lot of things, the large language model architecture, can take the place of thousands, if not hundreds of thousands, of federal employees and thousands of federal IT systems.
That is the core thesis of what they're going to be doing next, presuming DOGE in its current incarnation survives the next 90 days, let's say. And to me, there's just a clear answer, which is that no, it can't. Even current frontier models freak out trying to understand the Federal Register. Right? It's just not a data set that is...
Julian: In fairness, so do I.
Noah: No, there's an absolutely valid reason for it. Right? And this is why this has been so difficult for me, seeing these people take on the performative language of things I actually do believe in. The Federal Register and all of our laws need a really serious rewrite and refactoring, even if you don't change the fundamental assets within them. But it is what it is. Right? Like, the systems at IRS and Treasury might be these terrible Elder God-level contortions of COBOL and IBM Assembly. But that is in fact what they are, and we don't have easy solutions for them other than doing the incredibly hard work, which, to be fair, the government has so far failed at, across multiple administrations and multiple political parties: modernizing these systems.
One thing I want to bring forward from our previous episode is this phrase that Julian brought up, which is "efficiency for whom," right? What is the point of making these systems more efficient? What are they trying to accomplish? So as we think about the motivations of not just Musk, who's obviously the main character in this drama, but the other DOGE staffers, whether they're at GSA or OPM or OMB: what is the purpose of what they're trying to do, as you see it right now?
Kate: Yeah, I mean, I think when you talk about “who is this for,” it relates to the way X has transformed, as well as the way the government is now transforming. You know, you talked about what is the utility of X, and I think really it is in service of one user now, and that is Elon. And so much about the platform has been tweaked to suit his particular needs and interests. And, you know, when we look at some of the things that DOGE has accomplished so far, again, they are really aligned to his personal ideology and interests.
You know, we see that with this immediate emphasis on cutting DEI programs and going after diversity efforts in government. That is something that has been a thorn in his side for a long time, and something he wanted to get rid of at Twitter. He was focused on throwing out merch that had DEI slogans on it at Twitter, painting over a Black Lives Matter mural that was in the offices. When you think about the overall amount of money that these cuts are saving, it's minuscule, and it is not going to radically alter the federal budget. But they are pet issues to him, and so they land at the top of the priority list over other, more genuinely effective cutting.
Julian: Actually, you know, speaking of that, one of the things that comes out in your book is that Musk has a penchant for just sort of making up numbers that are congenial to whatever case he's trying to make, whether it's finding the most flattering-looking metric to make it look like use of the platform is growing, or revenue projections that seem to be basically based on wishes. And we've seen a repeat of that in the way they're attempting to quantify how effective they're being at cutting what they describe as waste, fraud, and abuse.
I wonder if you could talk about some of the ways numbers have been treated in Musk's business life, and how they're being used at DOGE and in the Federal Government to create a kind of aura of success.
Ryan: I'll give you a couple of answers, past and present, to that. At Twitter he had this metric he liked to use called "unregretted user minutes." And he was really talking about this. He's like, oh, I've come across this new metric: unregretted user minutes. You know, if you use Twitter, if you use X, you're really going to enjoy it. And we're going to measure this and maximize for it. He never explained what it was. He never even released actual figures related to it. He just talked about it at every public appearance that he had. And people were, you know, nodding along, like, it gets people moving, kind of thing. But when you really dug into it, no one knew what he was talking about. And I don't think he even knew either. Like, I don't even know how you define an unregretted user minute. Do you poll someone after they've been doomscrolling on Twitter for three hours and ask them, you know, do you feel good? No one has any idea.
So he loves to use these ideas of metrics and quantifying things without, you know, sometimes even defining them. Recently he has been raising new capital for X. You might have seen news stories about that, as well as about selling the debt around the company, of which there is a lot; he raised about $13 billion in debt to do that deal. They've been talking about EBITDA, which, for people who aren't familiar with finance, is earnings before interest, taxes, depreciation, and amortization. It's a particular financial metric, but it's not net profit. And he's doing that to cover up the fact that they're spending a ton of money on interest payments.
You know, and now he has, with DOGE, this kind of counter that he's running, trying to add up the value of the contracts that they're cutting. And a couple of days ago, they got the numbers wrong: they confused an $8 million contract for an $8 billion contract and counted it in their total. So, yeah, he likes to deceptively use metrics and kind of made-up stats to bolster his points. And initially, they sound good. And if you're not digging below the surface, they probably look great to you. But once you really dig into it, you kind of realize that it's a lot of bullshit.
Julian: One of the things I recall reading was that there were some contracts where they said, well, we've cut off this contract, and counted the full value of the contract, most of which had already been paid out over the last four years. So we're halting this with four-fifths of the money paid and four-fifths of the work complete, which is now not going to get completed, so we don't actually get the value of the contract, or at least not all of it, but we're counting the savings as though we've saved a bunch of money that's already been spent. Is that something you've seen?
Ryan: Yeah, there was a Thomson Reuters-related legal product or legal database that a lot of federal attorneys and lawyers use. I guess I'm not entirely sure what it provides, but I'm sure there's...
Julian: You mean like Westlaw or...
Ryan: Yeah, Westlaw, exactly. And they talked about ending that contract. Musk has been on this rampage against Reuters specifically because... he ties everything Reuters-related to the coverage that they've done of him, which won a Pulitzer; great stories. And so he has this grudge against Reuters, and they've cut Reuters-related contracts without understanding what they're actually for. And in that case, that money has already been paid. It's not like they're getting that money back. You're not saving anything by cutting the contract. Maybe you're saving the future payments, but most of it's gone.
Julian: I should also say, for the folks who are not familiar with legal work, Westlaw is a necessity for doing legal work. I mean, you cannot really be a lawyer, and get your work done, without Westlaw. These are not the days anymore when you would sit with huge stacks of books and physically look up citations. It is a basic tool lawyers require for their work. So, I am curious what they are going to replace it with while still expecting lawyers to get things done.
Ryan: Yeah, someone described it to me like a Bloomberg terminal for a Wall Street trader, right?
Noah: Yeah. One thing I want to go back to is this idea of active versus passive deception. And I am still struggling with the wording of this, but here is what I mean, which is that Musk from the Oval Office said very clearly, very frankly, that his team is moving and will continue to move so fast, they will get things wrong on a semi-regular basis.
And we've seen that time and time again, like this $8 million versus $8 billion thing. If you go back, what it looks like happened is that there was a typo in the original entry in the Federal Procurement Data System, but they didn't understand the change history. So whatever code they were using to concatenate all this information pulled the wrong data point. That's the kind of QA that, even if you're not working in government, even if you're working at a fairly fast-moving startup, you're going to be doing ahead of time for things that matter a lot.
That's what they've completely thrown out the window. Yes, government technology in general takes way too long to move from ideation to production. But there is a version where it moves too fast, because it actually depends on being correct for people's lives and livelihoods. That's the general concern here: how long is this going to continue until there's an actual disaster with some of these fundamental legacy systems we all rely on? Whether that's from pushing the technical systems way beyond the speed at which they can actually change, or from removing the people who do know how to work these monstrosities to effectuate some goal like, I don't know, processing all of our tax returns that are coming up.
Now, we're seeing thousands of people being let go from the IRS. At the same time, DOGE employees are trying to get into the IRS to understand the systems that those people actually know how to work. So, I want to talk about some of your recent reporting, if that's okay. A couple of days ago, starting on the 18th, and it's not clear to me who broke this story originally, whether it was 404 Media or the Washington Post, but it was followed by reporting that you all did at the New York Times, entitled "Federal Tech Workers Push Back Against Musk's Efforts." An employee at the Technology Transformation Services told colleagues that he had resigned after being asked to grant a Trump appointee access to a database used to text the public. First off, I want to notify our audience that I was a member of GSA when we formed the Technology Transformation Services, or TTS. I was the Infrastructure Director at 18F, which was one of the organizations that helped build TTS. And once that was formed, I went on to be the Infrastructure Director at TTS and worked with this individual specifically.
Ryan: Yeah, I think it was 404 Media that had the story first. They've been doing great reporting. But yeah, Kate and I were getting some readouts of what was going on with Notify.gov. There is an individual by the name of Thomas Shedd, who has been tasked to lead TTS and is a kind of Trump appointee. He used to work at Tesla. He was trying to get access to the Notify.gov database. Essentially, this employee resisted and resigned in that process. And there's just been a lot of fallout since then with employees that Kate and I have covered.
Kate: Right, and so what Thomas Shedd has done at TTS, and I think this is sort of common for people in Musk's orbit, is ask for read-write access to a number of tools and programs. And I think that, you know, this is really a big part of Musk's thinking: to get inside these systems and analyze them. And it comes from this idea that everyone who works with him is sort of an elite engineer who's going to bring a special level of understanding to these databases that no one else has.
You know, there is, I think, a non-nefarious version of what Thomas has been doing, which is, you know, he's a new director at TTS. He wants to read up on these systems and understand the projects he's overseeing. But I think it's something he could do without getting full read-write access to them. You know, he could sort of skim the code base without having that insider access.
And I think that that's what is worrying to people at TTS when they're seeing these requests. It's, you know: does he need this data? Where is this data going? And is it going to be siphoned into these sort of big, wonky, let's-analyze-the-government-and-restructure-it-using-AI projects? You know, and I think that that is where the fear that some of these TTS employees have comes from. It's the idea that this data could be siphoned off and misused, mismanaged in a way that it was never intended for.
Noah: You know, one thing I want to stress here, and it's something I feel like I've been saying a lot this month: it's worse than that. It wasn't just read-write access, it was admin root access. And to kind of part the curtain here a little bit, Notify.gov is a system for government IT teams, not just federal but state, local, territorial, and tribal as well: hey, if you want your digital system to text a bunch of people, to send notifications, you know, via SMS, boom, this system does that for you. You don't need to code it from scratch.
It's built on a platform that I helped create called Cloud.gov, which allows you to run digital systems in government and comes with a lot of the security and privacy protections you need to actually get the authorization to operate in government to begin with. And we usually architect these systems so that even the people who push code to them, build new features, operate and maintain the systems, don't actually have access to the underlying data, right? The information is encrypted in transit and at rest. So the people who are reading from or writing to the system at the code level don't have access to that data. They're just pushing the code, which then, in the cloud, acts on that data. The only people with the actual keys to the castle, able to inspect a particular data record, like somebody's name and phone number and the messages they're getting from the system sending the texts, would be the admin or root user.
And that is what is so weird about the request, is if you're there to look at efficiency and effectiveness, that is not data that you need, right? The only reason why you would need admin root access, being a commissioner or director at that level, is if you didn't trust your administrative team to give you any real information about the system, and you would need the ability to go in and triple check everything that they were producing, right? So this is a collapse of trust on both sides of the equation.
Kate: Yeah, and I think trust is really thin on the ground right now in a lot of these agencies, but this is also something that has been a big influence in Musk's takeover operations in the past, notably with Twitter. He viewed pretty much everyone who was working at Twitter prior to the takeover as his enemy. He thought they were incompetent. He thought, in some cases, they were fraudulent, and that he needed to bring in his own trusted team to run the takeover and to root out those individuals he thought were essentially corrupt. So you see that now playing out with DOGE, where someone like Thomas Shedd, who has worked for Musk and is in the circle of trust, is being put in charge of this. And there is a complete lack of trust. And I think in many cases, maybe not in Shedd's case in particular, but in many cases, a level of disrespect for federal employees: they're viewed as incompetent, perhaps corrupt, and assumed to be hiding or lying about what they're working on and trying to conceal it from the DOGE team.
Ryan: And I think what's interesting about that is, if you come in with that mindset, that's only going to breed exactly what you suspect them of, right? If you distrust someone, that's not going to engender trust on the other side, right? They're only going to distrust you. And that's exactly what's playing out at GSA and TTS, where these employees are scared. There wasn't much communication with them from these folks coming in. Everything that is being done is going to be viewed with a kind of skeptical lens. The other day, and this was in our story from yesterday, an employee who reposted that resignation letter from the individual who resigned had their Slack access taken away, because, I guess, management felt that was a form of dissent they were unwilling to tolerate. That's exactly something that played out in the Twitter takeover with Elon Musk and his folks, you know, cutting off Slack access, firing people on the spot for dissent. They were trawling through Slack and keyword-searching names and seeing who had talked shit on them previously. You know, that kind of Gestapo tactic is just kind of crazy to see play out over and over again. So, yeah, that just kind of shows the, you know, internal strife right now at those agencies.
Julian: Yeah, one of the patterns that you guys do seem to notice—maybe ironically, given his public posture as a champion of free speech—is a really extraordinary degree of intolerance for internal criticism or dissent. I find myself often thinking of the late science fiction author Robert Anton Wilson's SNAFU principle. It was a principle he coined to describe the relationship between one's position in a hierarchy and one's connection to reality. The idea being that the higher up the chain of authority you go, the more layers of people whose incentive is to please the boss sit between you and the ground truth, and the more detached you become from what the reality on the ground actually is. And of course, this can be accelerated by the extent to which the boss signals an unwillingness to hear unpleasant information or contrary perspectives.
Are there ways in which that environment has sort of backfired in his previous enterprises? He has been successful for many years in certain industries, so I'm wondering, does this sort of rapid, you know, kind of cut people and take command approach, has it been effective in some cases? And what are the drawbacks when it hasn't been effective?
Kate: I mean, I think that this approach has made him really isolated from the ground truth of what is going on in his companies. You know, we saw this a lot at Twitter where even people who were in his inner orbit were very worried about bringing certain things up to him and coaching other people at Twitter about how to interact with him. This is how you should talk to him. This is when you should talk to him. Say this. Say it this way. Don't say that. And so I think it really ends up kind of isolating him from what's actually going on. And I think he knows that to a certain extent because he often talks to people about wanting them to tell him the hard truths and wanting them to inform him about what's broken. But when he has actually been told those things, and we've seen this time and time again, he lashes out and fires people. So I think he understands that he's a little bit disconnected from reality, but also when reality is able to come and confront him, he can't handle it.
Julian: That’s part and parcel of… and I wonder to what extent actually you see this as a change: It seems to a lot of people that Musk has sort of evolved over the past few years from a kind of libertarian-inflected, but not obviously crazily right-wing guy to someone who is just increasingly steeped in conspiracy theories.
One of the very first anecdotes in the book involves him furiously lashing out at a data scientist who said, hey, I saw you retweeting this conspiracy theory about the guy who attacked Paul Pelosi being secretly his gay lover, and this is just an insane thing that you'd have to be one of the most gullible people to buy this, let alone amplify it to millions of people. To what extent is it your perception that this represents some kind of real shift in his character, and how is that manifesting and causing problems?
Ryan: I think politically, he's always been an interesting character. I would say early 2000s, he was pretty stereotypical of your socially liberal, fiscally libertarian Silicon Valley entrepreneur or CEO. Someone who supported gay marriage, someone whose foundation gave money to the Transgender Law Center. I think that was in 2012 or 2013. Supported liberal causes, supported Barack Obama, was friends with Gavin Newsom. And those were political alliances that were good for his businesses, political positions that were good for Tesla. It was good for SpaceX. You know, Obama was really into the idea of SpaceX and privatizing space and kind of gave him the green light on that.
But it started to change as he, you know, spent more time on Twitter, as he kind of pushed back against what he saw as wokeness. He's talked about his trans daughter, for example, radicalizing him. He's talked about COVID and the shutdown of his factories in 2020 as radicalizing him and him getting very pissed off at the California government, which was led by Gavin Newsom at the time. And so, we kind of get these kind of flashpoints that push him, I guess, more and more rightwards. And in conjunction with that, you also have him spending just an insane amount of time on Twitter, like more time than probably all of us combined have spent on Twitter. And we spent a lot of time on Twitter.
Kate: And that's saying something, yeah.
Ryan: Yeah, that's saying a lot, right? Because, you know, I always had this theory about him, where if you stayed up past midnight on the West Coast, you would get to see some pretty special tweets from him. He would just go. And now that's all the time. You know, he's tweeting sometimes 200, 300 times a day, engaging with-
Kate: Does he have a team?
Ryan: I don't… No, that's him. That's him. Like, you know, engaging with Catturd2 and End Wokeness, and that's his political environment now. There's no critical thinking; it's whatever comes across his Twitter feed. And I think that kind of flywheel effect has pushed him to be more and more radical in his thinking. And that's what we get today.
Noah: It's a really interesting idea, especially given how much Elon talks about the woke mind virus: what if this is what actually happened? Let's just take the reason he bought Twitter at his word. He thought it was fundamental to the preservation of free speech. But in buying it, it bought him. It ended up colonizing his brain, as opposed to the other way around. That's a really interesting way of looking at it. The other thing that I'm tracking, and this might seem like a zag, so follow me for a second, is how neo-reactionary thought has captured a lot of Silicon Valley of late.
Having lived there a long time, and having interacted with Curtis Yarvin specifically, what seems to be happening is that people who are not spending a lot of time thinking about the philosophical underpinnings of a neo-reactionary, pro-monarchist perspective are taking just the headlines from the Dark Enlightenment and immediately deploying them within the federal context.
Something we talked about last episode is that Elon wants a complete repeal of all regulations: let society run, and when things break, then Congress can add regulations back in. The fundamental idea from the neo-reactionary movement, in terms of the federal government, is that it's impossible to reform as is. You have to let it collapse and build something new. That's just a regurgitation of the Silicon Valley perspective: if you have a large, entrenched, sclerotic incumbent, it's almost impossible to go in and reform it and make it efficient and effective. What you should do instead is start a new company to compete with it and follow those new practices and procedures.
I've been trying to figure out how much Elon has actually directly engaged with neo-reactionary thought. Is that something that you have seen in your reporting in the past or to come?
Kate: I think really where his views on a lot of this stuff come from is more of the long-termist stuff in Silicon Valley. He thinks about-
Julian: The Stewart Brand sort of Long Now…
Kate: Yeah. It's sort of like an overlap, I think, with the EA movement, but this idea that the short-term harm is fine if there's this long-term humanitarian benefit and that really influences everything that Musk thinks and does. With Tesla, it's this idea of electrifying vehicles out into the future. With SpaceX, it's about going and colonizing Mars. Twitter is about saving free speech, quote unquote, and this whole DOGE thing is about revolutionizing and modernizing the government. He said this about DOGE. He said, I don't mind if there's short-term pain for Americans as the government essentially collapses, if it is worth the long-term payoff of having more functional, robust government systems.
I think that is really the way to understand how he operates. He does not care about the short-term destruction, pain. He does not care about putting people out of work at his companies, laying them off en masse if the long-term goals are achieved. And so, I think that that kind of thinking is really what is driving and shaping him in this regard.
Ryan: I spoke with a former executive who just left me with a very lasting quote that stuck in my mind, which is that Elon Musk cares a lot about humanity, but not much about his fellow man or fellow human. And if you think about it, it's kind of a perfect encapsulation, right? Like, he doesn't care about the individual factory worker or the laid off IRS agent or, you know, name whatever job, but what he does care about is advancing humanity, getting them to Mars, making life multiplanetary. And that's kind of, that's, you know, the way of thinking about him, in my view.
I don't think he's so much engaging with Curtis Yarvin, or Mencius Moldbug, and reading 10,000-word screeds online. But if they can be summarized by Grok and put into his Twitter feed and retweeted, sure, I'm sure he'll come across that stuff. I just think there is some alignment, but it's kind of coincidental as opposed to purposeful, you know, a meeting of the minds where they're in a back room talking with each other on a text thread. I don't think that's happening.
Noah: I think an important distinction we should make is long-termism, I think people will get that inherently, but just so we can go back, what does EA stand for and what is the EA movement?
Kate: Oh my gosh. Ryan, do you want to talk about this?
Ryan: I do not.
Kate: All right, great. I can do it.
Noah: I can do it in a circle, that's fine.
Ryan: This is Kate's grave; she said it, so she's going to jump on that grenade if you want.
Kate: Yeah. The EA stuff, it's effective altruism and it is a movement that is born out of, I think, philanthropic interest. I want to think that there's a positive intent there, but the movement is basically about trying to help other people as much as possible and sort of applying a rationalist approach to that. But it overlaps with this idea that a lot of current harm is acceptable to reach long-term goals and that a lot of old school philanthropy, as we've seen it, is basically wasting money trying to rescue people in the short-term without fixing problems in the long-term. I don't know if there's anything that you would want to jump on and add to that, but there's a lot of overlap from that thinking with the tech world, with the crypto worlds, et cetera.
Julian: I think Bertrand Russell once defined philosophy as the process of moving from premises no one can deny to conclusions no one can accept. And I think something similar could be said about EA, which is: in the abstract, it's very hard to argue against the principle that it would be a very good idea if people tried to do a rigorous cost-benefit analysis of charitable giving, or at least a more rigorous one than has traditionally been the case, and to think very hard about, well, what is the best way to use a dollar that's going to save the greatest number of lives or reduce the greatest amount of suffering, as opposed to just giving money to whatever is currently fashionable or whatever is going to get me invited to a nice gala.
But then the problem is that, substantively, they've gone down some very strange rabbit holes, where they're convinced that sci-fi scenarios about AI destroying humanity are the most dangerous thing. And so, you know, forget the malaria nets; all resources need to be fed into preventing Skynet from destroying the human race.
And with the long-termism, I think we could say a recurring problem in politics is that it's very hard to get political actors to think past the next election cycle. So there's a lot of malinvestment in stuff that's going to generate a short-term return you can wave in front of the voters, and a lot less willingness to think about problems that are going to be costly up front to solve but are going to prevent much bigger problems 20, 30, 50 years down the line. So you think, well, at a certain level of abstraction, that sounds appealing.
On the other hand, maybe kind of ironically, the way you're describing Musk reminds me very much of Albert Camus' description, from The Rebel, of a certain kind of communist radical who is prepared to accept essentially an unlimited amount of near-term suffering on the premise that history will vindicate it: I know what the end state of history is; once we reach the end of history, everything will be perfect; there is essentially no amount of harm that you can inflict in the present that is not ultimately outweighed by the future utopia that's created. Of course, we know that didn't pan out terribly well.
I wonder, where is the point at which this thinking goes off the rails, from a premise that seems pretty congenial, I think, to most people—"We should think in the long term; we should do this kind of cost-benefit analysis"—to this sort of indifference to short-term damage?
Kate: I mean, I think it's sort of a tolerance for damage and even death, and I think that, for me, is where this kind of thing goes off the rails. So in a hypothetical scenario where you have a very contagious disease, I think maybe the EA approach is: isolate and quarantine those people and let them all die, because that allows humanity to survive. If you don't let these people who are infected with this disease mingle with humanity, then, you know, humans survive. It's good for the overall benefit of the human race, right? But our sort of thinking is that when people get a contagious disease, we should treat them and take protocols to not become infected ourselves, right? Take care of them, give them health care, give them support, try to save them. And I think the EA approach is: no, let them die. And that feels very morally dubious to me personally.
Julian: It also occurs to me that one of the problems with that kind of thinking is that there's not a lot of thinking about the human incentives there, right? So if you decide, well, our policy is going to be not to help people with the disease and instead, you know, lock them away in this sort of imaginary scenario— what if you start thinking about what incentives does that create for people who are infected? Well, to conceal it! It creates a bunch of social problems that maybe are not captured in a simplistic disease propagation model that's not taking into account the social incentives.
And it seems like, again, maybe another pattern in some of the stuff that Musk has done is that he seems to have a very kind of engineering-centric “code über alles” mindset where he takes over, in the case of Twitter, a social media platform that is fundamentally a kind of massive meta community that collects a lot of different interconnected communities. And he thinks about it in terms of, well, how do we fix the code? Right? How do we get the best engineers in there to determine that the best code is being produced? And we'll have a very good and successful and profitable platform if we get the code right. And does not seem to be thinking as clearly in terms of, well, what are the social dynamics of this environment?
Ryan: This is a key problem with Twitter: he viewed it as a technical problem, right? This is kind of his fatal flaw, or original sin, which is that he came into Twitter thinking that all of its ills could be solved with technological solutions. But Twitter isn't... I mean, it is a tech company. It is in Silicon Valley. It's a website and an app. But it's a social problem. It's: how do you get people online, millions of people online, talking in a way so that they're not going to kill each other, so that they can have great conversations and post their photos and share their thoughts, and get these celebrities to spill their guts, so other people come and read. It's a community thing.
I think he fundamentally misunderstood that, because he doesn't use it that way. He uses it as his microphone, or megaphone. He's not going on there to follow the LA Lakers or to read about whatever philosophy you're into; he uses it to just broadcast his thoughts. And he thinks that it's just a technical issue: how do we get this into more hands? Or how do we get my tweets in front of more eyeballs?
Noah: Bouncing off the idea of it being a social problem, right? If we go back to the original purpose of DOGE, which is to cut $2 trillion, and let me give voice to their hypothesis here: the US is on course for fiscal demise, and if we don't start backing our spending down by around $2 trillion, this is going to run off a cliff. And the only way you get to those numbers is by looking at entitlements.
So, they've been going everywhere other than entitlements in terms of actually cutting. Now we're starting to hear a little bit more about Social Security and Medicare and Medicaid. And just at CPAC, and we're recording this on the 21st of February 2025, Elon Musk said, hey, we need to start looking at blowing $500 billion out of those programs, right? And that is not a technocratic solution. You can criticize those programs all you want as PAYGO, as Ponzi schemes, right? But this is the way they were architected. People have paid into the system and they expect to get their money back out. And there's just no way you can technocrat your way to a solution other than a new social compact, right?
And maybe that social compact, which would have to go through Congress and would likely face tremendous pushback from everyone, is re-architecting the system to make it more solvent. You can't solve it through code or procedure. You have to solve it through politics, or the political consequences will just wipe you back out of the levers of power. One thing I want to flag in terms of the technology side of it... actually, Julian, jump in if you want to talk about some of the talent stuff.
Julian: I just want to ask Ryan and Kate if there are ways that they're seeing in the DOGE process, this sort of engineering, “code über alles” mindset creating problems in the same way that it seems to have caused problems with Twitter. That is thinking in terms of “what is the code solution, what is the hardware solution we can implement” while being maybe a little bit obtuse about what the social dynamic is and how your changes are affecting that.
Kate: Yeah, I think some of those tech projects are still very nascent, and so we're not really seeing the impacts of them quite yet. I'll be transparent: I'm very much an AI skeptic, but the idea that we can take all federal contracts, dump them into an AI system, and it'll tell us what to cut and what to keep, I think, is ludicrous. That's sort of been the plan that's floated for how we're going to decide on cost-cutting. And so I think that is a clear example of: if this goes through and that does move forward, I think it's going to be disastrous, and there are going to be parts of it that are silly and ridiculous and parts of it that are really destructive.
But you know, a lot of these other things, too, it's the access requests for Americans' personal information. I think those are very troubling as well. And again, we're still kind of seeing pushback around that, and it's not totally clear where those projects are going to land.
Ryan: Yeah, I would say it's still pretty early. I mean, the cutting of all USAID projects and funding, basically shuttering that overnight, wasn't really a technocratic solution, but it was done by his engineers, and we're seeing the consequences of that play out now. So obviously, we're starting to see the human cost of that. But in terms of the actual technical solutions that they're proposing, I think that's the stuff we'll be reporting on in the next couple of weeks to months, seeing how it affects actual people on the ground.
Noah: One thing I just want to stress, on where Elon Musk is getting this $500 billion number that he thinks could all be fraud: in GAO reports covering 2018 to 2022, they were estimating anywhere between $233 billion and $520 billion in fraud. That's a pretty big margin of error. And the reason the figure was particularly high for those years was pandemic loans, right? That's probably $200 billion in fraud right there, which is not ongoing. So you have to cut actual services to get back up to that number. And he's also using the high-bound version of that number, not the low-bound version.
One thing I wanted to call out is that maybe the playbook is turning around a little bit. The New York Times reported an hour ago that the IRS has actually barred Gavin Kliger, one of the DOGE team, from seeing personal information at the IRS. The original MOU would have allowed them to see all IRS data. But according to the agreement they have now actually signed, which is breaking news, should access to IRS systems that contain returns or return information become necessary as part of the detailee's duties under this agreement (that's how they're getting him over there, through a process called detailing an employee from the White House to the IRS), "that access shall only be provided if it is anonymized and in a manner that can't be associated directly or indirectly with any taxpayer." So I think we are seeing a slight change to the playbook, because they knew the incredible outcry over providing excessive access was not going to result in a successful operation for them in the long term.
Julian: Is there any kind of final summation or thing that you're reporting is causing you to be especially concerned about that you think folks should hear about before we cut out?
Ryan: Oh man, I just think it's going to take time to see what the actual effects of these changes are going to be. You know, we're still in the early weeks of this kind of DOGE takeover. When we were covering Twitter, we started to see outages, you know, a couple of months later, weeks to months later, you know, at that point, they were, you know, pulling apart servers and that kind of thing. So I don't know, it's every week feels like a new thing we're covering. And it's just, yeah, I guess that's what I would stress. Kate, I don't know if you have any thoughts.
Kate: I think one of the big questions that we tried to answer in our book was what does accountability look like for Elon Musk? And the answer that we arrived at was that there isn't any. There was really no agency or court that was able to restrain what he was doing at Twitter. And now I think that question has become really relevant again. What does accountability look like for an unelected person who is essentially running the government? And we'll see how that plays out. I think we're watching that in real time as some of these DOGE efforts meet court challenges, but it's an open question.
Noah: That was Ryan Mac and Kate Conger from The New York Times. One thing we've noticed over the past 12 weeks is just how many more of you are listening to us. Our subscriber numbers continue to grow. That's awesome. Thank you so much for supporting the show. But we noticed that a mere fraction of you have given us a rating on Apple Podcasts. It's one of the best ways to help us grow organically beyond just sharing this show on social networks. If you could take a moment out of your day to give us a five-star rating, that would be huge. Julian and I are running this whole thing with duct tape and baling wire.
Julian: Absolutely. Also, again, it takes some time and there are expenses involved in producing this. The podcast itself is absolutely always going to be free. We think this is important stuff to cover and we don't want to paywall that. But if you feel like heading over to the Substack and subscribing there and possibly even ponying up for a paid subscription, that helps us produce these and polish them and be able to do more of them. And we are looking at ways to, while keeping the podcast free, offer some special bonus stuff for our paid subscribers.
Noah: Again, head on over to that Substack. Link is in the description below. Drop us a comment. We'd love to hear your thoughts as well. And let me tell you, if you've been enjoying our first four episodes, the guests we have booked over the next few weeks are going to blow your socks off.
Julian: We're looking forward to more exciting episodes soon. In the meantime, I've been Julian Sanchez.
Noah: And I'm Noah Kunin, and I hope next week you accomplish everything you ever dreamed of.
Julian: See you next time on WatchCats.