# Critical takes on tech - BlogFlock

Updated 2025-07-12. Feeds: escape the algorithm, Cybernetic Forests, Disconnect, The Convivial Society, Blood in the Machine.

## The budget bill opens the floodgates for state surveillance tech and bad AI - Blood in the Machine (2025-07-04)

https://www.bloodinthemachine.com/p/the-budget-bill-opens-the-floodgates

Hello out there,

Another week, another fusillade of grim news, at least stateside here. Hope folks can take the 4th of July holiday weekend to recharge and get out to the beach or the lake or wherever. I'm certainly going to. In fact, I'm embarking on a long-planned family trip—I have a draft of a piece that's in decent shape that I may try to schedule for next week, but if I don't wrap it in time, hope everyone can cut me some slack for taking the time off. I may take inspiration from what my pals at 404 Media did this week and log *all* the way off. And speaking of family: my 9-year-old son just recommended a book to me for the first time. He brought me "Dog Man: The Scarlet Shredder" and said, "Dad, I think you'll like this, it's about an AI army that tries to take over the world and the good guys stop them." What can I say? He gets me.[1]

Thanks as always to everyone who reads and supports this work—and an extra-massive thanks to everyone who chips in a little each month so I can keep the lights on. You make it possible for me to do things like take some days off with the fam. If you too would like to make such things possible, and you're able, consider upgrading to a paid subscription so I can continue training the next generation of AI critics. Cheers, and onwards.

So the House of Representatives has passed the big bill, which means it's headed to Trump's desk, where he will adorn it with a loping signature and smile for a photo op like he's at the grand opening of a used car lot. Thanks to said bill, tax cuts for the rich will be extended and made permanent, 12 million Americans will be kicked off their insurance, food assistance programs will be gutted, and ICE and other federal law enforcement agencies will be given many billions of dollars. (The smallest of comforts is knowing that the 10-year AI law ban once lodged in it is dead.)

The bill allocates $45 billion for "immigration detention capacity," $31 billion to fund ICE operations, $13 billion for state and local grants, and $6.3 billion for "border surveillance technology." There's also $66 billion for Customs and Border Protection (CBP), for personnel and digital infrastructure.

</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vgMz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vgMz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png 424w, https://substackcdn.com/image/fetch/$s_!vgMz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png 848w, https://substackcdn.com/image/fetch/$s_!vgMz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png 1272w, https://substackcdn.com/image/fetch/$s_!vgMz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vgMz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png" width="1155" height="526" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":526,"width":1155,"resizeWidth":null,"bytes":1060012,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":false,"topImage":true,"internalRedirect":"https://www.bloodinthemachine.com/i/167474449?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vgMz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png 424w, https://substackcdn.com/image/fetch/$s_!vgMz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png 848w, https://substackcdn.com/image/fetch/$s_!vgMz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png 1272w, https://substackcdn.com/image/fetch/$s_!vgMz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48c77b06-9ad5-4cdd-9571-a997f7c3311d_1155x526.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide 
lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>There is, in other words, a record-breaking store of federal funds dedicated to policing, detaining, and deporting people who live in this country. As has widely been pointed out, this bill makes ICE the single largest federal law enforcement agency in the country by a significant margin. Aaron Reichlin-Melnick, a senior fellow at the American Immigration Council <a href="https://bsky.app/profile/reichlinmelnick.bsky.social/post/3lt3fdogz322u">notes</a> that ICE now has “more money per year at its disposal over the next four years than the budgets of the FBI, DEA, ATF, US Marshals, and Bureau of Prisons combined.” </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C_Ae!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C_Ae!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png 424w, https://substackcdn.com/image/fetch/$s_!C_Ae!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png 848w, https://substackcdn.com/image/fetch/$s_!C_Ae!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png 1272w, https://substackcdn.com/image/fetch/$s_!C_Ae!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C_Ae!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png" width="788" height="694" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/d3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":694,"width":788,"resizeWidth":null,"bytes":104873,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":false,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/167474449?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" 
alt="" srcset="https://substackcdn.com/image/fetch/$s_!C_Ae!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png 424w, https://substackcdn.com/image/fetch/$s_!C_Ae!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png 848w, https://substackcdn.com/image/fetch/$s_!C_Ae!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png 1272w, https://substackcdn.com/image/fetch/$s_!C_Ae!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7349f-6cf4-42b1-bea0-e808351d28cd_788x694.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>And here is one thing that you can bet with 100% certitude that those agencies will do with that money: Purchase high-tech “AI-powered” systems for surveilling, monitoring, and tracking the nation’s population, in the name of ferreting out non-citizens. </p><p>The betting part is cheating a little bit, because ICE and CBP et al already have large contracts with tech companies like Palantir, Anduril, and Cellebrite to do exactly that. But this budget bill has opened the floodgates in a way that the tech companies willing to operate in this space have only dreamed of. With tens of billions of dollars to spend by 2029, when much of the funding is to be cut off—and the Trump administration closer than ever to Silicon Valley—I think we can expect to see a good deal of it invested in AI and surveillance systems whose manufacturers promise to expedite and accelerate Trump’s deportation program. </p><p>In fact, that’s already what’s happening. 
On the same day that news broke of the budget bill's passage, WIRED's Caroline Haskins published a story about how the CBP "is asking tech companies to pitch digital forensics tools that are designed to process and analyze text messages, pictures, videos, and contacts from seized phones, laptops, and other devices at the United States border."

There's more:

> The agency said in a federal registry listing that the tools it's seeking must have very specific capabilities, such as the ability to find a "hidden language" in a person's text messages; identify specific objects, "like a red tricycle," across different videos; access chats in encrypted messaging apps; and "find patterns" in large datasets for "intel generation." The listing was first posted on June 20 and updated on July 1.
>
> CBP has been using Cellebrite to extract and analyze data from devices since 2008. But the agency said that it wants to "expand" and modernize its digital forensics program. Last year, CBP claims, it did searches on more than 47,000 electronic devices—which is slightly higher than the approximately 41,500 devices it searched in 2023 but a dramatic rise from 2015, when it searched just more than 8,500 devices.

With $66 billion in funding, expect the CBP to be able to scan a lot more devices than 50,000 a year.

</p><div class="bluesky-wrap outer" style="height: auto; display: flex; margin-bottom: 24px;" data-attrs="{"postId":"3lt3fdogz322u","authorDid":"did:plc:2vtbmhmrwzbqcfv4we4uxzzt","authorName":"Aaron Reichlin-Melnick","authorHandle":"reichlinmelnick.bsky.social","authorAvatarUrl":"https://cdn.bsky.app/img/avatar/plain/did:plc:2vtbmhmrwzbqcfv4we4uxzzt/bafkreiegkrp7aglu36764u6432m2hcgotmplgq5y34vsf6y5sh5txmtghq@jpeg","text":"Here is the funding for immigration enforcement in the bill, to be spent through September 30, 2029, which includes:\n\n- $74.9 billion for ICE detention and removal\n- $65.6 billion for CBP infrastructure, hiring, tech\n- $10 billion DHS slush fund\n- $3.5 billion for state enforcement\n\nAnd more!","createdAt":"2025-07-03T18:48:37.326Z","uri":"at://did:plc:2vtbmhmrwzbqcfv4we4uxzzt/app.bsky.feed.post/3lt3fdogz322u","imageUrls":["https://cdn.bsky.app/img/feed_thumbnail/plain/did:plc:2vtbmhmrwzbqcfv4we4uxzzt/bafkreidypfofmttnpopvxp2ygef7a3a5ozgix52ok54dzcnsgq6zsoruua@jpeg"]}" data-component-name="BlueskyCreateBlueskyEmbed"><iframe id="bluesky-3lt3fdogz322u" data-bluesky-id="46131421535461636" src="https://embed.bsky.app/embed/did:plc:2vtbmhmrwzbqcfv4we4uxzzt/app.bsky.feed.post/3lt3fdogz322u?id=46131421535461636" width="100%" style="display: block; flex-grow: 1;" frameborder="0" scrolling="no"></iframe></div><p>Over at <a href="https://www.techpolicy.press/how-trumps-budget-bill-sells-out-the-future-to-big-tech/">Tech Policy Press</a>, Jai Dulani highlights how the bill is a gift to big tech: </p><blockquote><p>This budget reveals the intersection at which the anti-immigrant Right and the tech Right converge. Under the version of the bill headed to the president’s desk, US Customs and Border Protection's (CBP) 2024 budget of $23 billion would nearly <a href="https://theimmigrationhub.org/article/how-the-trump-administration-is-priming-its-case-to-congress-for-more-money-and-why-we-cant-fall-for-it/">triple</a>. Some portion of this supplemental funding, which is estimated to exceed $60 billion, would result in bigger contracts to surveillance technology corporations. An additional $2.8 billion is <a href="https://www.cbo.gov/publication/61384">allocated</a> for “other surveillance technologies” along the southwest, northern, and maritime borders. This may include an expansion of the “surveillance towers” <a href="https://www.cbp.gov/document/foia-record/anduril-surveillance-towers">operated</a> by Anduril Industries. Anduril was founded by Trump supporter Palmer Luckey, who has raked in <a href="https://www.cnbc.com/2025/02/11/anduril-to-take-over-microsofts-22-billion-us-army-headset-program.html">billions</a> of dollars worth of contracts since Trump took office.</p></blockquote><p>Some $700 million is earmarked for ICE’s “information technology investments to support enforcement and removal operations,” Dulani notes—the kind of contracts that have been held by Peter Thiel and Joe Lonsdale’s (both ardent Trump supporters) Palantir through the last two administrations. Dulani continues:</p><blockquote><p>The bill provides<a href="https://homeland.house.gov/wp-content/uploads/2025/04/DHS_reconciliation.xml_.pdf"> $6.2 billion</a> for border technology and screening, which includes the deployment of biometric technology. 
> Information derived from facial recognition, iris scans, and DNA will be collected and fed into DHS' centralized biometric repository, the Automated Biometric Identification System (IDENT). IDENT is accessed not only by ICE, but also the Department of Justice (DOJ) and state and local law enforcement. This data will be used for tracking, profiling, and policing, despite an abundance of documentation of the ways in which this technology does not work; it infamously misidentifies people, especially people of color, leading to wrongful arrests and detentions.

These investments will necessarily expand the surveillance of *everyone*, not just migrants, and immigration surveillance is just one piece of the pie. The Department of Defense stands to receive its biggest budget in history, and may ultimately wind up with more than $1 trillion. Dulani concludes that "this bill makes one thing crystal clear: corporate-state collusion is accelerating, threatening to fully convert the US into an authoritarian state."

[Embedded post: "Don't forget what Silicon Valley tried to do" ("The 10-year ban on state AI laws may be dead, but let's not forget that OpenAI and big tech tried to subvert democracy.")]

And Silicon Valley companies are playing a central role. Google provides AI and cloud services to the US military. OpenAI inked a $200 million defense contract just weeks ago. Executives at Meta, OpenAI, and Palantir have officially enlisted in the US Army as part of its innovation initiative.
Anthropic, Palantir, and Amazon partnered to sell AI tools to the federal government for intelligence and defense operations. The list goes on.

Ten years ago, any of those items would have caused some blowback and even outrage—Google faced protests and open revolt among its employees when it announced that it was becoming a defense contractor in 2018, during the first Trump term. Now, big tech is open about collaborating with an administration that is building out state capacity for surveillance, policing, and mass deportation. Most gave money to Trump's inauguration fund, paid personal visits to Mar-a-Lago, and campaigned to pass a bill that would have banned states from legislating or regulating their AI products.

We can only begin to imagine the new scope of state-sponsored AI surveillance tools and inter-agency databases and facial recognition technologies that this new budgetary largesse, and a Silicon Valley unafraid of capitalizing on it, will beget. One slim consolation is that a lot of them will be useless. Many new contracts will go to over-promising opportunists and grifters who spin up companies and divisions to get in on the budgetary gold rush. Many projects will suffer rampant dysfunction and mismanagement. Many won't get off the ground. Many of the AI tools simply won't work as advertised, in the same ways that many AI tools do not work as advertised in many other fields and capacities.

But this is the smallest consolation. As readers of this newsletter know, it often will not matter whether AI or automation tools work well or at all—they nonetheless can be used to justify more expansive dragnets to scoop up, administer, and weaponize our data, and more aggressive intrusions into our daily lives. They will offer their administrators an avenue through which to dodge accountability: 'it was not the state that accidentally flagged someone for deportation, it was the AI.' And so on.

I think it's not hyperbole to say that what we're witnessing is a radical restructuring of the American state. DOGE's hollowing out of nonpartisan civil servants, the administration's attacks on universities, and the wealth transfer are severe enough, but the formation of a new and extraordinarily financed paramilitary force, the erection of detention centers for migrants on US soil, the drive to "de-naturalize" US citizens: these are new authoritarian frontiers never so vigorously explored in my lifetime. And the Silicon Valley giants, which, I might add, have spent the last decades collecting and storing data on every user in the nation, are demonstrating enthusiasm for abetting the project. The floodgates are open.

</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>Extra Bloody bits:</h2><p>I’m a talking head in this short More Perfect Union documentary about how Silicon Valley is using AI to concentrate power, along with my colleague at AI Now, Sarah Myers West, and some other very smart folks. It’s a good doc, and very much worth watching in full: </p><div id="youtube2-DUfSl2fZ_E8" class="youtube-wrap" data-attrs="{"videoId":"DUfSl2fZ_E8","startTime":null,"endTime":null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/DUfSl2fZ_E8?rel=0&autoplay=0&showinfo=0&enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Thanks to everyone who has sent in <a href="https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39">more AI Killed My Jobs stories</a>—so many good ones. I’m narrowing down which industry or profession(s) to do next, and it’s tough, for a lot of reasons. But seriously, really appreciate everyone’s candor, wisdom, and perspective. Keep them coming; address is AIKilledMyJob@pm.me. </p><p>Expect the next installment in a couple weeks. Oh and keep an eye out for an abridged version of the last one that the Guardian is aiming to run next week. They’re co-publishing the piece, which should help get more eyes on everyone’s stories, and further spread the word about what management, executives, and their AI initiatives are doing to workers in the tech world—and to the industry in general.</p><p>Okay okay, That’s it for now—see you next week, or the week after, if I run out of gas. Hang in there everyone, and hammers up. </p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I would like to note that I originally intended to add “he’s the best” here, but he came in and saw me writing this and said that part was too embarrassing and I had to delete it so I’m hiding it down here.</p></div></div>Don't forget what Silicon Valley tried to do - Blood in the Machinehttps://www.bloodinthemachine.com/p/dont-forget-what-silicon-valley-tried2025-07-02T03:18:56.000Z<p>Greetings all, </p><p>Hope everyone’s hanging in. Going to try to keep this on the shorter side, given that it’s a holiday week and all. 
But I was seeing a number of celebrations (very justified) over what looks like a victory (and it is), but one that comes with an enormous caveat or three.

So the Senate passed Trump's gargantuan "big beautiful bill," and if it makes it through the House in roughly the same shape, it will, as many have noted, be a historic transfer of wealth from the poorest Americans to the richest. It's the single biggest cut ($1 trillion) to Medicaid since the program's inception, it cements ~$4 trillion worth of tax cuts for the wealthy, and it grants $45 billion in funding to ICE, making ICE the largest federal law enforcement agency. It cuts food assistance to the poor and scraps incentives for clean energy and electric vehicles. It adds $3 trillion to the US deficit. It's a travesty.

If there's a silver lining at all to be found here, it's that the 10-year ban on state-level lawmaking on AI was stripped from the budget bill mere hours before it passed. "Early Tuesday morning, the United States Senate voted 99-1 to pass an amendment to the budget bill removing the proposed 10-year moratorium on the enforcement of state laws on artificial intelligence," as the Tech Policy Press, which has been doggedly following the bill, noted.

Big tech stocks tumbled when the news broke.

data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/ac187a65-9023-4a67-a7fa-3d6bb66e2170_811x482.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":482,"width":811,"resizeWidth":null,"bytes":368459,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":false,"topImage":true,"internalRedirect":"https://www.bloodinthemachine.com/i/167306315?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac187a65-9023-4a67-a7fa-3d6bb66e2170_811x482.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WWub!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac187a65-9023-4a67-a7fa-3d6bb66e2170_811x482.png 424w, https://substackcdn.com/image/fetch/$s_!WWub!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac187a65-9023-4a67-a7fa-3d6bb66e2170_811x482.png 848w, https://substackcdn.com/image/fetch/$s_!WWub!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac187a65-9023-4a67-a7fa-3d6bb66e2170_811x482.png 1272w, https://substackcdn.com/image/fetch/$s_!WWub!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac187a65-9023-4a67-a7fa-3d6bb66e2170_811x482.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption"><a href="https://www.techpolicy.press/us-senate-drops-proposed-moratorium-on-state-ai-laws-in-budget-vote/">Tech Policy Press</a>.</figcaption></figure></div><p><em>A quick note that, as always, 100% of this work is made possible by paid subscribers—if you find value in this analysis and reporting, and you’re able, please consider subscribing or upgrading. Infinite thanks to all who already do. 
Now, onwards.

I reported on this project at length in May, and spoke to California state rep Isaac Bryan, who'd written one of the state AI laws that the GOP's amendment aimed to kill. It is unambiguously a good thing that it's dead, and that Silicon Valley's power to halt states' capacity to do democracy with regard to its products is not yet absolute. But it's close. The state AI law ban really was as draconian and anti-democratic as it sounded. In its original form, it read "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act."

But now that it's dead, and we're all getting swept up in the swirl of well-justified outrage over the amendments that *did* pass, and the spotlight turns to the House, I worry many will forget what almost happened here: that Silicon Valley elites almost won a battle to stop states from passing AI laws, period.

Ultimately, they were one vote short. But just one.

I would suggest that we do not forget that OpenAI, Meta, Google, and some of the top VC firms in Silicon Valley were behind this bill and driving for its passage. Or that, had these companies not sent teams of lobbyists to DC and sold the idea that AI is a zero-sum arms race with China, it would not have existed at all. That the biggest AI companies—Anthropic excepted, to its credit, even if its opposition was performative—were directly campaigning to exempt their products and practices from the democratic process. They were actively pushing to radically de-democratize a technology they are selling us all as a great force for democratization.

As OpenAI and Meta et al quietly move on to building and selling their next generation of AI products, don't forget what they tried to do. Which, essentially, can be boiled down to "stop California from passing laws to regulate AI that might take a little time and money to comply with."

Here's Politico to remind us of that campaign:

> As California and other states move to regulate AI, companies like OpenAI, Meta, Google and IBM are all urging Washington to pass national AI rules that would rein in state laws they don't like.
> So is Andreessen Horowitz, a Silicon Valley-based venture capitalist firm closely tied to President Donald Trump.
>
> Tech lobbyists are pushing the White House to oppose state AI laws, sitting down with key lawmakers to emphasize the threat and — perhaps most crucially — huddling with fellow lobbyists to see if they can unite behind a national strategy.
>
> Their effort puts the companies in the unusual position of supporting federal regulation on the fast-moving technology, which they have so far managed to stave off almost entirely.

[Image: Politico, 5/12/25.]

The tech industry has recognized that because federal regulation of AI is all but off the table now, the only real risk that its products will be regulated comes from states like California, New York, and Colorado, where more progressive legislatures are able to pass meaningful laws that can impact the market for tech products. This was an effort to shut that down wholesale.

And it looked like it was going to pass, right up until the last minute, when Marsha Blackburn withdrew her support for the second time in short succession. The ban had already been sanded down to 5 years, and would have functioned by tying it to states' eligibility to receive broadband funding. If a state passed any AI laws, the federal government would cut off broadband funds. But Blackburn is a senator from Tennessee, where country music artists in Nashville had pushed for protections from AI, and the state responded by passing ELVIS—the Ensuring Likeness Voice and Image Security Act. Blackburn worried the AI ban would interfere with its implementation, and with lawmakers' ability to write laws to protect children from digital products and services.

Opposition from civil society groups, Democrats, and other Republicans had already muddied the waters, and there was opposition in other corners—but when Ted Cruz, one of the most vocal backers of the AI law ban, cut a deal with Blackburn exempting her state's law, it looked like the moratorium would pass. Eleventh-hour calls from Steve Bannon, of all people, caused her to switch *again*, and when she announced she was out, everyone else abandoned the measure too.

That's why the measure officially failed 99-1, as once-supportive senators switched their votes to save face once it became clear the amendment would fail, but it really was just one vote shy of passing. Let's not forget that. Because yes, sure, any industry coalition that sees a shot at exempting itself from costly regulation is probably going to try to take it—but most industries aren't promising wide-scale social transformation or that they are going to rewrite the social contract. We must take note of what has happened here, and henceforth regard all of Sam Altman's talk of the need for careful regulation or for democracies to adapt to AI as the pablum it has always been. We must understand that these companies have made a direct play for—and will continue to seek—absolute control over their automation and surveillance technologies, no matter the social cost.

Cruz has floated the idea of passing a state AI law ban as a standalone bill, so there's little doubt that we haven't seen the last of such efforts. Silicon Valley continues to hold sway over the Trump administration, and vice versa. But for now, state-level lawmaking can persist. One of California's bills, "No Robo Bosses," which seeks to prevent algorithmic discrimination in the hiring process, among other things, just cleared a hurdle in the state legislature—it's one step closer to becoming law.

The states are the laboratory of democracy, as the GOP likes to remind us. The prime movers in AI were willing to shut it down.

### More bloody bits

Peter Lewis, writing in the Guardian and citing my book (cheers Peter), calls on Australians to embrace their inner Luddite—many convicted machine breakers were "transported" to Australia as punishment for their luddism in the 1810s—when considering AI.

From the piece, which is well worth a read:

> If I hear another well-intentioned person justifying their support for the regulation of AI with the qualifier "I'm no luddite, but …" I'm going to start breaking my own machine.
>
> From ministers to union leaders to progressives watching from the cheap seats, there is growing recognition that untrammelled development of this technology carries significant risks.
>
> But there is also a reticence to be seen as being anti-technology lest we are perceived as standing in the way of the productivity boom and consequent bounty of abundance that the boosters of these tools promise is just around the corner. After all, we aren't luddites.
>
> The problem with being forced into this defensive mindset is that we misread the challenge at hand, which is not so much about the nature of the technology but the power dynamics driving this change.

Read the whole thing in the Guardian.

In a thoughtful and far-ranging conversation with New Yorker writer Jia Tolentino on his Why Is This Happening? podcast, about what socially mediated AI products are doing to our ability to discern reality, Chris Hayes gave a surprise shout-out to this very newsletter. Thanks for the nod, and hello to any readers who found me there. Welcome aboard—and hammers up.

That's it for today. More soon; stay frosty out there.

## Is AI Art Net Art? - Cybernetic Forests (2025-06-29)

[Video: full recording of the presentations and discussion among Vladan Joler, Valentina Tanni, and Eryk Salvaggio, 26 June 2025, at the Bibliotheca Hertziana in Rome (Vimeo).]

Below: Eryk Salvaggio's remarks on parasitic resistance and creative misuse across net art and AI. Much (not all) of the talk is based on our paper in Critical AI, "Cultural Red Teaming: ARRG! and Creative Misuse of AI Systems," published this week.

### Is AI Art Net Art?

[Image: Eryk Salvaggio opening the event at the Bibliotheca Hertziana, "Is AI Art Net Art?"]

As someone who writes critically about AI, I often get asked why I engage with it at all. In response, I would say that "AI" — as apparatus, as politics, as data extraction, as epistemology — is engaging with me already. My practice is about making distance by making space.

That's the stance at the heart of an adversarial AI practice. It isn't about collaboration, because I don't have a choice in the matter. My data will be analyzed, my picture taken at the stoplight, the tools integrated into the university or the workplace.

I don't *collaborate* with AI in my artistic practice. I try to antagonize – and analyze – back. It's been at the heart of my technology-based practice since I started out, as a net artist, as a teenager. I wasn't *coding* per se, as in, I wasn't developing websites or building platforms. I was *making-with* the materials of html and javascript, ASCII characters and browsers, while resisting and inverting their logic.

In the friction, I cultivated a way of understanding the web. I saw it as a medium of communication. But to communicate, you had to do certain things according to certain protocols. The browser constrained you, but you could find unusual parts of the network.

You could use the source code to make additional layers of the work. You could reimagine autoloading html pages as a slow-motion projector, or imagine ASCII characters as a means of transmitting video, as in this 2002 piece, *RGB*.

[Video: Eryk Salvaggio, *RGB* (2002), which used auto-refreshing HTML and ASCII art to create an interactive video stream. Originally a browser-based artwork.]

Looking at the basic building blocks of the web as code and text, I was drawn to ASCII art — in which text characters operated as shaded pixels.

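For readers who never made this kind of work, the mechanism is worth a quick sketch. What follows is a minimal, hypothetical example; the filenames, timing, and glyphs are my own illustration, not the code of *RGB*. The basic trick is that each page holds a single "frame" of monospaced text, and a meta refresh tells the browser to load the next page, so the browser itself becomes a slow-motion projector and the characters act as shaded pixels.

```html
<!-- frame01.html: one illustrative "frame" of an ASCII animation (sketch only, not the original artwork's code) -->
<!DOCTYPE html>
<html>
  <head>
    <!-- after 1 second, the browser automatically loads the next frame -->
    <meta http-equiv="refresh" content="1; url=frame02.html">
  </head>
  <body style="background: #000; color: #0f0; margin: 0;">
    <!-- a monospaced block of text: each character reads as a shaded pixel -->
    <pre>
. . : : ; ; # # ; ; : : . .
. : ; # # @ @ @ @ # # ; : .
. . : : ; ; # # ; ; : : . .
    </pre>
  </body>
</html>
```
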
This style of work is challenging to archive. The glitching browser behaviors that animated the work (which owes a lot to the glitch-net-art pioneers JODI and to ASCII artist Vuk Ćosić) don't work anymore, so even these videos are not quite right.

[Video: Eryk Salvaggio, *Lambs in Ascension* (200X), which used a browser-based glitch and ASCII art to create an animation. Originally a browser-based artwork.]

In other works, such as *Lambs in Ascension*, these animations took over the browser. The idea was twofold: to reveal, immediately, that the user's sense of control over the browser was an illusion steered by the interface, and to get into that protocol and disrupt it. The corporate takeover of the web, the control of web hosts and servers, was not a foregone conclusion. There were other ways of arranging the net, and other ways to visualize and imagine it.

Companies grabbed larger and larger portions of audience share, and the rest is the story of the Web we have today. The platform hospitality of the Web became its primary illusion, and its protocols became even more complex and inaccessible.

#### Auction Stand for Personal Hate

[Image: Eryk Salvaggio, "Auction Stand for Personal Hate," 2001.]

Part of any successful Net Art practice was being kicked off a platform for unexpected behavior.

For example, in 2001 I used eBay to auction off a profound personal hatred toward whoever paid the most for it. I got up to $50,000 before eBay canceled the auction and introduced a clause that objects sold had to have a material form. So I tried to sell a drawing of my hate, which you see here; they canceled that too, with no explanation.

In a recent publication, "Cultural Red Teaming: ARRG! and the Creative Misuse of AI Systems," published in Critical AI and co-authored with Caroline Sinders and Steph Maj Swanson, we discuss the role of *creative misuse* in AI art, an idea that comes, by a few paths, through the legacy of net art.

Anna Watkins-Fisher's 2020 book, "*The Play in the System: The Art of Parasitical Resistance*," put into words something that I had struggled to identify in my own practice.
It examined internet art acts, like mine with eBay, as a new form of aesthetic resistance that aims to navigate our own entanglement with capitalist logics of power. Not by resisting or refusing them, but by using them incorrectly, to do things they were not intended to do.</p><p>Jon Ippolito described a strategy among net artists back in a 2002 article in Leonardo, "<a href="https://www.jstor.org/stable/pdf/1577255.pdf?ref=mail.cyberneticforests.com" rel="noreferrer"><em>10 Myths of Internet Art</em></a>," in which he distinguishes "innovation" from art-making in net.art:</p><blockquote>"What sets art apart from other technological endeavors is not the innovative use of technology, but a creative misuse of it. To use a tool as it was intended, whether a screwdriver or spreadsheet, is simply to fulfill its potential. By misusing that tool–that is, by peeling off its ideological wrapper and applying it to a purpose or effect that was not its maker's intention–artists can exploit a technology's hidden potential in an intelligent and revelatory way."</blockquote><p>But this creative misuse can also backfire, as Watkins-Fisher writes: </p><blockquote>"What are the meaning and value of a politics of disruption when artworks that are critical of corporations and government institutions can be said to help them—however inadvertently—close their loopholes? When hackers actually help states and corporations improve the security of their information systems?"</blockquote><p><em>Parasitical resistance</em> relies on this hospitality, siphoning resources from the host – the platforms – in ways that create beneficial outcomes for the world beyond the host. It means coming up to the borders of that hospitality in ways that reveal its limits, showing the degree of control and the expectations of normal behavior that are demanded in return. </p><p>The spaces for creating glitches online, either through bad code or weird behaviors, were understood on these platforms as <em>errors</em>. When engineers had the time and motivation, the spaces artists had found and exploited would be fixed. The noise in the channel would be eliminated, and the system would be "better," in the sense of usability and purpose – typically, data collection and marketing – in ways that ultimately limited the scope of actions afforded to the user. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/06/netart1small.png" class="kg-image" alt="Is AI Art Net Art?" loading="lazy" width="1200" height="825" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/netart1small.png 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/06/netart1small.png 1000w, https://mail.cyberneticforests.com/content/images/2025/06/netart1small.png 1200w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Eryk Salvaggio speaking at the "Is AI Art Net Art?" discussion at the Bibliotheca Hertziana in Rome.</span></figcaption></figure><h3 id="fixing-the-ai-glitch">Fixing the AI Glitch</h3><p>As we turn to AI, this relationship is as timely as ever. The glitch in AI reveals something just as radical as the glitch in the browser and just as vulnerable to recuperation. 
It shows us that the system is choreographed toward certain sets of illusions.</p><p>AI is an industry driven by a proliferation of illusions: sustaining myths that depend upon loose definitions of intelligence, decision-making, reason, and creativity, while paradoxically emphasizing new forms of control over the people who use them. AI is simultaneously a thinking being and nothing more than a tool: a personal, exploitable employee to propel all of us into new levels of wealth and disconnection.</p><p>In this sense, I am an artist who is trying to strip the imagination out of AI, rather than expand our imagination of what it is. We can and should imagine AI differently, but we need to identify the ideology within that imagination first. The AI we have emerged within the context of the world we have, and we can't change AI unless we change that world: its structures, incentives, legacies and logics.</p><p>That is a tall order for any single artist. So I should be clear: I don't think my art changes the world. But it can do work, as Ippolito suggests, by "peeling off its ideological wrapper."</p><h3 id="algorithmic-resistance-research-group">Algorithmic Resistance Research Group</h3><p>In 2023 I was invited to attend the largest hacker convention in the world, DEFCON, for its AI Village. That year, the event was sponsored by the White House, which created some leverage for participation from AI companies. The event invited hackers in attendance — 25,000 of them, if you can believe it — to come into the ballroom for set time intervals and see if they could hack a series of Large Language Models into doing things they were not meant to do. The information was gathered up as a "red teaming" exercise; that is, the data was passed on to those companies to analyze and potentially fix. </p><blockquote class="kg-blockquote-alt">Even critical AI art could be said to have some complicity with the AI industry. </blockquote><p>In that context, even critical AI art could be said to have some complicity with the AI industry: we were their guests, and we were guests at an event generating excitement about AI, under the pretense that the AI industry was working toward a more just society. But fixing the models' outputs also let the industry put on a good face for the Biden administration, by showing a commitment to "transparency" and community engagement — some of which was true. </p><p>At DEFCON, I worked with two other artists — <a href="https://carolinesinders.com/?ref=mail.cyberneticforests.com" rel="noreferrer">Caroline Sinders</a> and <a href="https://supercompo.site/?ref=mail.cyberneticforests.com" rel="noreferrer">Steph Maj Swanson</a> — to create an ad hoc exhibition of critical AI artworks as the <a href="https://www.cyberneticforests.com/defcon31?ref=mail.cyberneticforests.com" rel="noreferrer">Algorithmic Resistance Research Group, or <em>ARRG!</em> </a>We presented work made with glitches we'd found in these AI systems. They were used as materials to confront the broader social context of AI as an organizing logic – tackling, across the three of us, concerns about AI that transcend the ethics-washing of inviting a community to solve, unpaid, the bias problems in corporate models. </p><p>We were invited, before the White House was involved, as a group of outspoken artists, but after the White House came in, it was implied that we should not be "explicitly political." 
Steph Maj Swanson was commissioned to make a piece, <em>Suicide III</em>, which was stationed on a screen at the entrance to the hackerspace where volunteers waited before being guided to their seats for the Red Teaming exercise. The screen at first looks like an announcement taking place at the site of the convention, as if Joe Biden is about to address the audience. </p><p>The film presents a deepfake Joe Biden explaining that there is so much hype on the floor in Las Vegas — <em>hyperstition</em>, specifically, a kind of hype that becomes true through its own assertion — that Biden has to deploy a Department of Counter-Augury to protect the sovereignty of America's future from tech companies building AI.</p><p>It was parasitical resistance not just to <em>platform</em> hospitality, which is how I often approach things, but to the hype of the event – itself a platform – that we were participating in. I think we succeeded in testing the limits of an event that had transformed into a kind of spectacle about ethical AI. The organizers were never outright hostile and many of the other volunteers were extremely enthusiastic. But the top organizers of the conference never said a single word to us – and we were not invited back. </p><h3 id="ai-beyond-theory">AI Beyond Theory</h3><p>Much of the conversation about AI and art focuses on epistemological aspects of creativity, human thought and expression, or the financial impact on creative industries. I am less interested in these questions because none of that has ever been what my art is meant to do. </p><p>Fisher writes of these parasitical works:</p><blockquote>"The digital is not necessarily the medium or site of exhibition of these artworks; it is the informing condition of their emergence. The digital constitutes a favorable milieu for the consolidation of power structures that predate it, for technologies, sold as empowering, draw us ever more tightly into their ideological mechanisms through apparatuses of capture and economies of dependency. This study reconceives resistance under what Gilles Deleuze famously termed the regime of control, where power has moved outside disciplinary spaces of enclosure and made openness its constitutive promise."</blockquote><p>She goes on to say: </p><blockquote>"Parasitical works use art as a means to wedge open—to redirect or subtly re-incline—the mechanisms used to justify and legitimize the privatization of resources and access. Parasitism responds to a contemporary political economy in which less powerful players are increasingly constrained and made dependent by the terms of their relationships to more powerful players."</blockquote><p>The terms of this relationship, in the AI industry, are what we could call <em>platform hospitality</em>: a certain understanding of the rules you are meant to play by when you use the system. When I use Midjourney, I am a guest. We have to be careful about triggering platform violations lest we get kicked out. This rigidity of behavior constrains us as artists and researchers, who are increasingly turning to misuse to gather data about these systems.</p><blockquote class="kg-blockquote-alt"><em>Platform hospitality</em>: a certain understanding of the rules you are meant to play by when you use the system.</blockquote><p>As a critical AI practitioner, my art is a way of raising questions that reframe the user's position to technology. In the net art era, the glitch was specific to the browser. 
Its interface, what browsers allowed us to do and what behaviors they restrained, was enforced as computer code as well as codes of conduct. </p><p>Thinking about the world presented to us by the Web helped us to focus on the politics of mediation that interfaces created. Alex Galloway, one of the founders of the net art community Rhizome, wrote in 2015 that <em>"The world no longer indicates to us what it is. We indicate ourselves to it, and in so doing the world materializes in our image."</em> He's writing about the world of the screen, the world presented through our browser windows and media. It is difficult not to look at that passage and immediately think of the manifestation of AI-generated text and images: <em>"Today all media are a question of synecdoche (scaling a part for the whole), not indexicality (pointing from here to there)."</em></p><p>In other words, there are properties of AI-generated media that are simply an acceleration of what we artists and theorists had seen before on the Web, rather than a novel phenomenon. As <a href="https://publicseminar.org/2015/06/on-galloway/?ref=mail.cyberneticforests.com">McKenzie Wark</a> explains, what Galloway was presenting "is a theory not of media but of mediation, which is to say not a theory of a new class of objects but of a new class of relations: mediation, allegory, interface. Instead of beginning and ending from technical media, we are dealing instead with their <em>actions</em>: storing, transmitting, processing. ... [Galloway] is careful not to seek <em>essences</em> for either <em>objects</em> or <em>subjects</em>."</p><p>AI art, in its critical strand, understands this relationship too: what we see from machines is all simulation and plausibility. Perhaps this whole AI revolution is just an extension of the past 30 years of category mistakes around technology, a distinction in name only, the innovation limited to shifts in power and user behavior. </p><blockquote class="kg-blockquote-alt">Interfaces, hardware, and content are things, but they are combined through ideologies that structure the relationship between them. </blockquote><p>Interfaces, hardware, and content are things, but they are combined through ideologies that structure the relationship between them. Net artists worked within these systems, always conscious that the work existed not merely on a machine or in the server room but in the linking of one computer user to the other through the network: after all, net art was <em>networked art</em>.</p><p>Artists get to turn theory into practice, creating ways of living-with or against the technical imposition of political theory. The abstract question that isolates infrastructure from its politics collapses. We get into the technical to challenge the influence the theory has upon it, because that is how we make work.</p><p>Being an artist creates a sense of perspective in which no technology is ever taken at face value, and in which all technology is a surface waiting to be scratched. The honey is inside it, and the trick is trying to get inside the hive without getting stung. </p><hr><h2 id="upcoming-events">Upcoming Events</h2><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg" class="kg-image" alt="Is AI Art Net Art?" 
loading="lazy" width="1000" height="562" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg 600w, https://mail.cyberneticforests.com/content/images/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg 1000w" sizes="(min-width: 720px) 720px"></figure><h3 id="melbourne-july-3-human-movie-performance">Melbourne, July 3: <em>Human Movie</em> (Performance!)</h3><p><strong>w/ JODI (NL, BE) & Debris Facility Pty Ltd (AUS)</strong><br><em>@ Club Miscellania, Melbourne</em></p><p>I'll perform <em>Human Movie</em> as part of a series of performances including the net.art legends JODI and the Australian "para-corporate and parasitic entity," Debris Facility Pty Ltd. Open to the public, details below!</p><div class="kg-card kg-button-card kg-align-center"><a href="https://events.humanitix.com/network-anarchy?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Get Tickets</a></div><hr><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/06/Network-anarchy-and-unstable-diffusions.png" class="kg-image" alt="Is AI Art Net Art?" loading="lazy" width="1280" height="720" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/Network-anarchy-and-unstable-diffusions.png 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/06/Network-anarchy-and-unstable-diffusions.png 1000w, https://mail.cyberneticforests.com/content/images/2025/06/Network-anarchy-and-unstable-diffusions.png 1280w" sizes="(min-width: 720px) 720px"></figure><h3 id="melbourne-7-8-july-noisy-joints-embodying-the-ai-glitch">Melbourne, 7-8 July: Noisy Joints: Embodying the AI Glitch</h3><p><strong>w/ Camila Galaz</strong><br><em>@ RMIT Media Portal, Deakin Downtown, Melbourne</em></p><p>The entire conference is going to be great. Here's our part:</p><p>Artists and researchers Eryk Salvaggio and <a href="https://www.camilagalaz.com/?ref=mail.cyberneticforests.com" rel="noreferrer">Camila Galaz</a> present a participatory workshop on interrupting and reframing the outputs of generative AI systems. Drawing from a critical AI puppetry workshop originally developed at the Mercury Store in Brooklyn, New York, <em>Noisy Joints</em> invites participants to think through the body—its categorisation, misrecognition, and noise—within AI image-generation systems. How do our physical movements interact with machine perception? How can choreographies of shadow, gesture, and failure unsettle the logic of automated categorisation?</p><p>Across the session, participants will explore these questions through short talks, collaborative video-making, glitch-puppetry exercises, and experimental use of tools like Runway’s GEN3 model. Using shadows, projections, and improvised movement, the workshop will trace a playful and critical path through the interfaces and assumptions that shape AI perception. 
No technical experience is required.</p><p><em>Convened by Joel Stern (RMIT), Thao Phan (ANU), and Christopher O’Neill (Deakin).</em></p><div class="kg-card kg-button-card kg-align-center"><a href="https://www.admscentre.org.au/event/network-anarchy-and-unstable-diffusions/?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">More Details</a></div>Hundreds of workers mobilize to 'Stop Gen AI' and help each other survive AI automation - Blood in the Machinehttps://www.bloodinthemachine.com/p/hundreds-of-workers-mobilize-to-stop2025-06-27T18:19:00.000Z<p>Greetings critics, coders, friends and luddites — </p><p>How are we holding up out there? Anyone else feeling <a href="https://www.forever-wars.com/this-is-the-beginning-of-the-end-of-the-9-11-era/">a few fleeting glimmers of hope</a> breaking through the dark polluted skies? Increasingly rare stuff these days, gotta take it whilst we can. But we have a lot to get to, much of it concerning the ongoing fights against tech companies’ exploitation of creative workers, in this week’s Critical AI Report. </p><p>First up, we’ll meet a new mutual aid group that’s formed to help creatives and knowledge workers and anyone else impacted by generative AI—titled, aptly, Stop Gen AI—and its founder, cybersecurity expert Kim Crawley. We’ll look at the flurry of legal action around generative AI and what it means for writers, creatives, and workers. In two major legal cases, the judges ruled in favor of the tech companies against writers who’d brought lawsuits against them; but in a twist, those “losses” might actually be good news for humans. Plus a new report on algorithmic inequality in gig work, Tesla’s disastrous robotaxi launch, and more.</p><p>A quick note that a chunk of this report is for paying subscribers only, so if you can, chip in and help support this work—a massive thanks to everyone who already does! Because of you, I can:</p><p>-Complete <a href="https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39">in-depth reports on the impact of AI on workers</a><br>-Report on how Silicon Valley is <a href="https://www.bloodinthemachine.com/p/de-democratizing-ai">successfully pushing lawmakers to ban AI regulation</a><br>-Pay rent<br>-Send my kids to summer camp<br>-Report on the ground <a href="https://www.bloodinthemachine.com/p/the-weaponization-of-waymo">on the destruction of Waymos in Los Angeles</a><br>-Focus reportage and analysis on workers, creators, artists, students, rather than big tech’s bottom line<br>-Read more news and studies and testimonials about AI than is probably healthy for any human to do</p><p>If any of that sounds worthwhile, please consider becoming a paid subscriber, so I can do more of it, and better. Thanks, and onwards!</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h2>Part 1: Stop Gen AI</h2><p>This week, I published <a href="https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39">the first installment of my ‘AI Killed My Job’ series</a>, focusing on tech workers. Since starting the project, I’ve heard dozens of harrowing, dark, and absurd stories from engineers, product managers, marketers, coders, and beyond, each of whom has seen their livelihoods transformed, degraded, or replaced outright by AI. 
</p><p>One person who wrote in with an account was Kim Crawley. Crawley is a cybersecurity expert, author and college instructor who’s been hit hard by generative AI. Beyond that, she, like many knowledge workers, finds the whole commercial enterprise—and the Silicon Valley companies behind it—to be an ethical blight. She reached out to tell me that she and some peers were starting a project to do something about it. They were organizing a mutual aid group called <a href="https://stopgenai.com/">Stop Gen AI</a>. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EYEF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EYEF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png 424w, https://substackcdn.com/image/fetch/$s_!EYEF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png 848w, https://substackcdn.com/image/fetch/$s_!EYEF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png 1272w, https://substackcdn.com/image/fetch/$s_!EYEF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EYEF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png" width="1219" height="840" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":840,"width":1219,"resizeWidth":null,"bytes":1020599,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/165059006?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EYEF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png 424w, https://substackcdn.com/image/fetch/$s_!EYEF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png 848w, https://substackcdn.com/image/fetch/$s_!EYEF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png 1272w, 
https://substackcdn.com/image/fetch/$s_!EYEF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2157c966-b3a6-4a5d-aa5e-083012fb31f0_1219x840.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">Screenshot of Stop Gen AI’s <a href="https://stopgenai.com/get-your-stopgenai-com-email-account/">website</a>.</figcaption></figure></div><p>It is, as far as they could tell, the first mutual aid group intended both to help workers survive as generative AI companies erode their livelihoods and to organize a resistance among those who want to push back. </p><p>“My initial idea was to start a mutual aid fund to financially support creative and cognitive workers made poor because our overlords think we can be replaced,” Crawley tells me on Signal. “You know and I know that it doesn't matter how shitty the autocomplete torment nexus is, our overlords don't care.”</p><p>They booked a domain, StopGenAI.com, began holding meetings, wrote a manifesto, and organized a joint PayPal account. In addition to soliciting funds from the public, they’d offer incentives—anyone who donates $10 a year gets an .@stopgenai.com email address of their choosing. And the group, which is rife with creatives, plans on offering a bundle of art, games, and other works on Itch.io, the platform known for hosting indie games. Donations will be distributed to both members and nonmembers, with people in need—those who are housing insecure, for instance—receiving priority. </p><p>Stop Gen AI soon ballooned to 500 members. Artists, academics, folks with mutual aid leadership experience, and an organizer with <a href="https://connecting-humanity.org/">eSims for Gaza</a> all joined up. Donations to the group are open now, and it’s planning to launch the bundle soon.</p><p>For Crawley, all this is personal. Before Stop Gen AI, she says, she was feeling depressed. </p><p>“I made a pretty good income from researching and writing about cybersecurity, especially in 2021, 2022, 2023,” she told me. “I certainly wasn't a millionaire, but I was making enough money that I didn't worry about my finances. Starting in 2024, my income took a real hit.”</p><p>Freelance writing gigs dried up, and her working teaching cybersecurity at a university became jeopardized. “I teach a Master's level enterprise cybersecurity course,” she says. “But for how much longer, who knows? 
Because I am very outspoken publicly about how much I deeply hate Gen AI. Meanwhile the rest of the faculty and my bosses have drunk the Gen AI kool-aid.”</p><p>Crawley worries about the environmental impacts of AI as well as the labor ones. She felt that civil society was not doing enough to resist the rise of generative AI.</p><p>“I approached a bunch of unions and guilds,” Crawley says. “They didn't seem to be doing much to counter the Gen AI menace. That's when I realized I had to take matters into my own hands and not wait for someone else to take the initiative.” Stop Gen AI has already yielded dividends for Crawley, though they’re eager to grow and desperate to bring the group to public attention—there is power in organizing, and in solidarity, she finds. </p><p>“Starting the group had greatly improved my mental health,” she says. “Because we're doing something. We're not just watching Gen AI make all of us poor and destroy the planet.”</p><p>It’s not enough just to raise awareness anymore, she says. “I want us to actually organize and fight.”</p><p>Crawley asks that any interested parties visit the group’s website, <a href="https://stopgenai.com/">StopGenAI.com</a>, where there are instructions both for joining and for <a href="https://www.paypal.com/paypalme/StopGenAI">donating</a>. Info about the StopGenAI email address <a href="https://stopgenai.com/get-your-stopgenai-com-email-account/">is here</a>. I’m also including the Q+A with Crawley at the end of the post, for those interested in reading our whole conversation.</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>The BITM News Brief:</h2><ul><li><p>Some good tech news for a change — the modular, easily repairable Fairphone, which also attempts to ethically source its supply chain, has <a href="https://www.theverge.com/news/692248/fairphone-6-smartphone-repairable-modular-accessories-murena">announced a long-awaited upgrade</a>. 
</p></li><li><p>If you’re in LA, go see Annie Dorsen’s Prometheus Firebringer, an acclaimed play that, as <a href="https://www.vulture.com/2023/09/theater-review-prometheus-firebringer-annie-dorsen.html">New York Mag put it</a>, offers “looping interrogations into how the omnipresence of AI is already rewiring our brains.” It’s playing at the Redcat, <a href="https://www.redcat.org/events/2025/annie-dorsen">tickets are here</a>.</p></li><li><p>New research finds that, as the University of Pennsylvania’s PR <a href="https://www.asc.upenn.edu/news-events/news/ai-perceived-more-negatively-climate-science-or-science-general">team puts it</a>, AI is “Perceived More Negatively than Climate Science or Science in General.” The paper is <a href="https://academic.oup.com/pnasnexus/article/4/6/pgaf163/8159304?login=false">here</a>.</p></li><li><p>“Tesla robotaxi incidents caught on camera in Austin draw regulators’ attention,” <a href="https://www.cnbc.com/2025/06/23/tesla-robotaxi-incidents-caught-on-camera-in-austin-get-nhtsa-concern.html">a great CNBC story</a> by friend of BITM Lora Kolodny, with a headline that undersells the absolute menace Tesla robotaxis are to the people of Austin.</p></li><li><p>Power Switch Action and Gig Workers Rising have <a href="https://www.powerswitchaction.org/resources/ubers-inequality-machine">a new report out</a> examining algorithmic manipulation, inequality, and working conditions for Uber drivers based on the responses of 2,500 workers. The report finds that the app’s algorithm works to prevent them from reaching bonus goals, penalizes workers for holding out for better fares, and more. Per the report: “vast majority of surveyed drivers report getting squeezed and manipulated by Uber’s pay algorithm, and commonly report serious financial hardship and psychological distress as a result of their unpredictable pay on the app.” It’s worth spending some time with the whole thing.</p></li><li><p>Scale AI, the company that just got a $14 billion acquihire investment from Meta, has a massive spam and security problem, as <a href="https://www.inc.com/sam-blum/exclusive-scale-ais-spam-security-woes-while-serving-google/91205895">Sam Blum reports in Inc.</a></p></li><li><p>Friend of BITM Charley Johnson asked if I’d share his new course with Blood readers, and I am happy to oblige. He sent me this one liner about it: “Tech hype hides power. 
Reclaim it in <em><a href="https://www.charley-johnson.com/sociotechnicalsystemschange">Systems Change for Tech & Society Leaders</a></em>—a live course with systems change expert Charley Johnson.” </p></li><li><p>Unfortunate headline of the week, presented without comment:</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C5-k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C5-k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png 424w, https://substackcdn.com/image/fetch/$s_!C5-k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png 848w, https://substackcdn.com/image/fetch/$s_!C5-k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png 1272w, https://substackcdn.com/image/fetch/$s_!C5-k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C5-k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png" width="661" height="236" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/c29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":236,"width":661,"resizeWidth":null,"bytes":37933,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/165059006?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!C5-k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png 424w, https://substackcdn.com/image/fetch/$s_!C5-k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png 848w, https://substackcdn.com/image/fetch/$s_!C5-k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png 1272w, https://substackcdn.com/image/fetch/$s_!C5-k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29208f7-c883-4d04-a9d2-747d6f849fab_661x236.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div><hr></div><h2>Part 2: AI lawsuits galore</h2><p>Two high-profile class 
action lawsuits brought by creators against AI companies were decided in court this week, and while both were initially reported as wins for the tech industry, the reality is much, much more complex—and even, in certain ways, promising for writers and creatives taking on the AI giants. </p>
<p>
<a href="https://www.bloodinthemachine.com/p/hundreds-of-workers-mobilize-to-stop">
Read more
</a>
</p>
In Copyright ‘Wins’ for AI, Judges Leave Room for Future Defeats - Cybernetic Forests685e86c85840eb00015552422025-06-27T12:02:25.000Z<img src="https://mail.cyberneticforests.com/content/images/2025/06/claude.png" alt="In Copyright ‘Wins’ for AI, Judges Leave Room for Future Defeats"><p>Two legal rulings in favor of the AI industry this week seem like victories for the companies, suggesting training is transformative and therefore fair use. But hidden in the rationale of both cases are indications of deep-seated legal problems around training – it's just up to authors and their legal teams to raise them. </p><p>My latest piece in Tech Policy Press examines the rulings and the legal rationales that will likely be setting precedent beyond the two short-term legal victories. </p><div class="kg-card kg-button-card kg-align-center"><a href="https://www.techpolicy.press/in-copyright-wins-for-anthropic-and-meta-judges-leave-ample-room-for-future-defeats/?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Read More</a></div>How AI is killing jobs in the tech industry - Blood in the Machinehttps://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f392025-06-25T15:00:46.000Z<p>“What will AI mean for jobs?” may be the single most-asked question about the technology category that dominates Silicon Valley, pop culture, and our politics. Fears that AI will put us out of work routinely top <a href="https://www.pewresearch.org/social-trends/2025/02/25/u-s-workers-are-more-worried-than-hopeful-about-future-ai-use-in-the-workplace/">opinion</a> <a href="https://poll.qu.edu/poll-release?releaseid=3923">polls</a>. Bosses are citing AI as the reason they’re slashing human staff. Firms like <a href="https://www.theverge.com/news/657594/duolingo-ai-first-replace-contract-workers">Duolingo</a> and <a href="https://www.cnbc.com/2025/05/14/klarna-ceo-says-ai-helped-company-shrink-workforce-by-40percent.html">Klarna</a> have laid off workers in <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now">loudly touted shifts to AI</a>, and DOGE used <a href="https://www.bloodinthemachine.com/p/doges-ai-first-strategist-is-now">its “AI-first” strategy</a> to <a href="https://www.bloodinthemachine.com/p/whats-really-behind-elon-musk-and">justify firing federal workers</a>. </p><p>Meanwhile, tech executives are pouring fuel on the flames. Dario Amodei, the CEO of Anthropic, claims that AI products like his will soon <a href="https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic">eliminate half of entry level white collar jobs</a>, and replace up to 20% of all jobs, period. OpenAI’s Sam Altman <a href="https://www.msn.com/en-us/news/technology/openai-ceo-sam-altman-says-ai-is-ready-for-entry-level-jobs-but-unbothered-gen-z-have-made-it-their-new-work-friend/ar-AA1G9Y7T?ocid=msedgntp&pc=U531&cvid=8f4eeddbcf404ac086d362e1e015f247&ei=68">says that</a> AI systems can replace entry level workers, and will soon be able to code “like an experienced software engineer.” Elsewhere, he’s been <a href="https://futurism.com/the-byte/sam-altman-ai-destroying-jobs">blunter</a>, claiming "Jobs are definitely going to go away, full stop."</p><p>But the question remains: What’s actually happening on the ground, right now? There’s no doubt that lots of firms are investing heavily in AI and <em>trying </em>to use it to improve productivity and cut labor costs. 
And it’s clear that in certain industries, especially creative ones, the rise of cheap AI-generated content is hitting workers hard. Yet broader economic data on AI impacts <a href="https://www.nber.org/papers/w33777">suggests a more limited disruption</a>. Two and a half years after the rise of ChatGPT, after a torrent of promises, CEO talk, and think pieces, how is—or isn’t—AI <em>really</em> reshaping work? </p><p>About a month ago, I <a href="https://www.bloodinthemachine.com/p/did-ai-kill-your-job">put out a call in hopes of finding some answers</a>. I had a vague idea for a project I’d call AI Killed My Job, that would seek to examine the many ways that management has used AI to impact, transform, degrade, or, yes, try to replace workers outright. It’s premised on the notion that we’ve heard lots of speculation and plenty of sales pitches for AI automation—but we have not heard nearly enough from the workers experiencing the phenomenon themselves.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rWji!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rWji!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rWji!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rWji!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rWji!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rWji!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg" width="1456" height="434" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/be2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":434,"width":1456,"resizeWidth":null,"bytes":317369,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":false,"topImage":true,"internalRedirect":"https://www.bloodinthemachine.com/i/165134533?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!rWji!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rWji!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rWji!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rWji!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2e5d82-ee48-4ec6-b2cb-3b886003c09c_2080x620.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>So, I wrote a post here on the newsletter <a href="https://www.bloodinthemachine.com/p/did-ai-kill-your-job">explaining the idea</a>, posted <a href="https://bsky.app/profile/bcmerchant.bsky.social/post/3loowy27zr22y">a call to social media</a>, and asked for testimonials on various <a href="https://www.youtube.com/watch?v=JNj2jm_Iy54">news shows</a> and <a href="https://systemcrash.info/">podcasts</a>. I was floored by the response. The stories came rolling in. I heard from lots of folks I expected to—artists, illustrators, copywriters, translators—and many I didn’t—senior engineers, managers of marketing departments, construction industry consultants. And just about everyone in between. I got so many responses, and so many good ones, that I resolved to structure the project as a series of pieces that center the workers’ voices and testimonies themselves, and share their experiences in their own words. </p><p>Because I got so many accounts, I decided to break down the articles by field and background. Starting, today, with an industry that’s at once the source of the automating technology and feeling some of its most immediate impacts. Today, we’ll begin by looking at how AI is killing jobs in the tech industry. </p><p>I heard from workers who recounted how managers used AI to justify laying them off, to speed up their work, and to make them take over the workload of recently terminated peers. 
I heard from workers at the biggest tech giants and the smallest startups—from workers at Google, TikTok, Adobe, Dropbox, and CrowdStrike, to those at startups with just a handful of employees. I heard stories of scheming corporate climbers using AI to consolidate power inside the organization. I heard tales of AI being openly scorned in company forums by revolting workers. And yes, I heard lots of sad stories of workers getting let go so management could make room for AI. I received a message from one worker who wrote to say they were concerned for their job—and a follow-up note just weeks later to say that they’d lost it.</p><p>Of the scores of responses I received, I’ve selected 15 that represent these trends; some are short and offer a snapshot of various AI impacts or a quick look at the future of employment. Others are longer accounts with many insights into what it means to work in tech in the time of AI—and what it might mean to work, period. The humor, grace, and candor in many of these testimonials often amazed me. I cannot thank those who wrote them enough. Some of these workers took great risks to share their stories at a time when it is, in tech, a legitimate threat to one’s job to speak up about AI. For this reason, I’ve agreed to keep these testimonies anonymous, to protect the identities of the workers who shared them.</p><p>Generative AI is the most hyped, most well-capitalized technology of our generation, and its key promise, that it will automate jobs, desperately needs to be examined. This is the start of that examination.</p><p><strong>Three very quick notes before we move on. First, </strong>this newsletter, and projects like AI Killed My Job, require a lot of work to produce. If you find this valuable, please consider becoming a paid subscriber. With enough support, I can expand such projects with human editors, researchers, and even artists—like Koren Shadmi, who I was able to pay a small fee for the 100% human-generated art above, thanks to subscribers like you. <strong>Second</strong>, if <em>your</em> job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>. I would love to hear your account and will keep it confidential, as I would for any source. <strong>Third, </strong>some news: I'm partnering with the good folks at <a href="https://perfectunion.us/">More Perfect Union</a> to produce a video edition of AI Killed My Job. If you're interested in participating, or are willing to sit for an on-camera interview to discuss how AI has impacted your livelihood, <a href="mailto:AIkilledmyjob@perfectunion.us">please reach out</a>. Thanks for reading, human, and an extra thanks to all those whose support makes this work possible. Tech is just the first industry I plan on covering; I have countless more stories in fields from law to media to customer service to art to share. Stay tuned, and onwards.</p><p><em>This post was edited by Mike Pearl. </em></p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h1><strong>“AI Generated Trainers”</strong></h1><p><strong>Content moderator at TikTok.</strong></p><p>I have a story. 
I worked for TikTok as a content moderator from August 2022 to April 2024, and though I was not replaced by AI, I couldn't help noticing that some of the trainers were. </p><p>In my first year, I would be assigned training videos that featured real people either reading or acting. These trainings would be viewed internally only, not available to the public. Topics could be things like learning about biases, avoiding workplace harassment, policy refreshers, and so on. In the early months of my time there, the trainings were usually recorded slideshows with humans reading and elaborating on the topics. Sometimes they were videos that included people acting out scenarios. Over time, the human trainers were replaced with AI by way of generated voices or even people going over the materials in the videos.</p><p>It was honestly scary to me. I don't know how to explain it. I remember that they had added embellishments to make them seem more human. I distinctly remember a woman with an obscure black tattoo on her bicep. The speech and movement wasn't as clean as what I see in videos now, but it was close enough to leave me with an eerie sensation.</p><p>As far as content moderation goes, much of that is already done by AI across all major social media platforms. There has historically been a need for human moderators to differentiate grey areas that technology doesn't understand. (Example: someone being very aggressive in a video and using profanity, but it not being directed at an individual. AI might think the video involves bullying another user and ban the video, but a moderator can review it and see that there's no problem/no targeted individual.)</p><div class="pullquote"><p>It was honestly scary to me. I don't know how to explain it. I remember that they had added embellishments to make them seem more human.</p></div><p> I think as AI models continue to learn, however, moderators will be replaced completely. That's just a theory, but I'm already seeing the number of these job postings dwindling and hearing murmurs from former coworkers on LinkedIn about widespread layoffs.</p><h1><strong>“AI is killing the software engineer discipline”</strong></h1><p><strong>Software engineer at Google.</strong></p><p>I have been a software engineer at Google for several years. With the recent introduction of generative AI-based coding assistance tools, we are already seeing a decline in open source code quality <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> (defined as "code churn" - how often a piece of code is written only to be deleted or fixed within a short time). I am also starting to see a downward trend of (a) new engineers' readiness in doing the work, (b) engineers' willingness to learn new things, and (c) engineers' effort to put in serious thoughts in their work.</p><p>Specifically, I have recently observed first hand some of my colleagues at the start of their career heavily relying on AI-based coding assistance tools. Their "code writing" consists of iteratively and alternatingly hitting the Tab key (to accept AI-generated code) and watching for warning underlines <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> indicating there could be an error (which have been typically based on static analysis, but recently increasingly including AI-generated warnings). 
These young engineers - squandering their opportunities to learn how things actually work - would briefly glance at the AI-generated code and/or explanation messages and continue producing more code when "it looks okay."</p><p>I also saw experienced engineers in senior positions when faced with an important data modeling task decided to generate the database schema with generative AI. I originally thought it was merely a joke but recently found out that they basically just used the generated schema in actual (internal) services essentially without modification, even if there are some obvious glaring issues. Now those issues have propagated to other code that needs to interact with that database and it will be more costly to fix, so chances are people will just carry on, pretending everything is working as intended.</p><p>All of these will result in poorer software quality. "Anyone can write code" sounds good on paper, but when bad code is massively produced, it hurts everyone including those who did not ask for it and have been trusting the software industry.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7RwP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7RwP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png 424w, https://substackcdn.com/image/fetch/$s_!7RwP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png 848w, https://substackcdn.com/image/fetch/$s_!7RwP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png 1272w, https://substackcdn.com/image/fetch/$s_!7RwP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7RwP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png" width="2140" height="784" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/c6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":784,"width":2140,"resizeWidth":null,"bytes":173662,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/165134533?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79c401a3-c19c-4e09-92be-c6d6a416a715_2140x784.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7RwP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png 424w, 
https://substackcdn.com/image/fetch/$s_!7RwP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png 848w, https://substackcdn.com/image/fetch/$s_!7RwP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png 1272w, https://substackcdn.com/image/fetch/$s_!7RwP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6f04f4b-2f13-48c8-83d0-0ae64e3d5f8c_2140x784.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">Story: <a href="https://www.cnbc.com/2024/10/30/dropbox-slashes-20percent-of-global-workforce-eliminating-500-roles.html">CNBC</a>.</figcaption></figure></div><h1><strong>“How AI eliminated my job at Dropbox”</strong></h1><p><strong>Former staff engineer at Dropbox.</strong></p><p>I was part of the 20% RIF at Dropbox at the end of October.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The stated reason for this was to focus on <a href="https://dash.dropbox.com/">Dash</a>, their AI big-bet. The 10% RIF in 2023 was also to focus more on Dash.</p><p>How did this eliminate my job? Internal reprioritization, that's how. I was moving into an area that was scheduled to focus on improving Dropbox's reliability stance in 2025 and beyond, intended to be a whole year initiative. It's tricky to go into details, but the aim was to take a holistic view of disaster preparedness beyond the long standing disaster scenarios we had been using and were already well prepared for. Projects like this are common in well established product-lines like Dropbox's file-sync offering, as they take a comprehensive overview of both audit compliance (common criteria change every year) and market expectations.</p><p>This initiative was canned as part of the RIF, and the staffing allocated to it largely let go. Such a move is consistent with prioritizing Dash, a brand new product that does not have dominant market-share. Startups rarely prioritize availability to the extent Dropbox's file-sync product does because the big business problem faced by a startup is obtaining market-share, not staying available for your customers. 
As products (and companies) mature, stability starts gaining priority as part of customer <em>retention</em> engineering. Once a product becomes dominant in its sector, stability engineering is often prioritized <em>over</em> feature development. Dropbox file-sync has been at this point for several years.</p><p>With Dash being a new product, and company messaging being that Dash is the future of Dropbox, a reliability initiative of the type I was gearing up for was not in line with being a new product scrapping for market share. Thus, this project and the people assigned to it were let go.</p><p><em><strong>Blood in the Machine: What are you planning next?</strong></em></p><p>This job market is absolutely punishing. I had a .gov job for the .com crash, a publicly funded .edu job for the 2008 crash, and a safe place inside a Dropbox division making money hand over fist during the COVID crash (Dropbox Sign more than doubled document throughput over 2020). This is my first tech winter on the bench, and I'm getting zero traction. 37 job apps in the months I've been looking; 4 got me talking to a human (2 of which were referrals); all bounced me after either the recruiter or technical screens. I never made it to a virtual onsite.</p><p>This has to do with me being at the Staff Engineer level, and getting there through non-traditional means. The impact is that when I go through the <em>traditional</em> screens for a high-level engineer, I flame out, because that wasn't my job. The little feedback I've gotten from my hunt is a mix of 'over-qualified for this position' and 'failed the technical screen.' Attempts to branch out to other positions, like Product Manager or Technical Writer, have failed due to a lack of resume support and everyone hiring for Senior titles.</p><p>I may be retired now. I'm 50, but my money guy says I've already made retirement money; any work I do now is to increase lifestyle, build contingency funds, or fund charitable initiatives. The industry is absolutely toxic right now, as cost-cutting is dominating everything but the most recently funded startups. We haven't hit an actual recession in stock prices due to aggressive cost and stock-price engineering everywhere, and cost-engineering typically tanks internal worker satisfaction. I've been on the bench for six months, and money isn't a problem. Do I want to stick my head back into the cortisol amplifier?</p><p>Not really.</p><p class="button-wrapper" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h1><strong>“It's no longer only an issue of higher-ups: colleagues are using chatgpt to undermine each other.”</strong></h1><p><strong>Tech worker, marketing department.</strong></p><p>I used to work at a mid-sized Silicon Valley startup that does hardware. The overall project is super demanding, and reliant on skilled, hands-on work. Our marketing team was tiny but committed. My manager, the CMO<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>, was one of the rare ones: deeply experienced and seasoned in the big ones, thoughtful, and someone who genuinely loved his craft.</p><p>Last year, a new hire came in to lead another department. I genuinely believe she is a product of the "LinkedIn hustler / thought-leadership / bullshit titles" culture. 
Super performative.</p><p>Recently, during a cross-functional meeting with a lot of people present, she casually referred to a ChatGPT model she was fine-tuning as our "Chief Marketing Officer"—in front of my manager. She claimed it was outperforming us. It wasn't—it was producing garbage. But the real harm was watching someone who'd given decades to his field get <strong>humiliated</strong>, not by a machine, but by a colleague weaponizing it.</p><p>Today, in the name of "AI efficiency," a lot of people saw the exit door and my CMO got PIPd.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>The irony here is two-fold: one, it does not seem that the people who left were victims of a turn to "vibe coding," and I suspect that "AI efficiency" was used as an excuse to make us seem innovative even during this crisis. Two, this is a company whose product desperately needs real human care.</p><p><em>[If your job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>.]</em></p><h1><strong>“AI killed two of my jobs”</strong></h1><p><strong>Veteran tech worker at Adobe and in the public sector.</strong></p><p>AI killed my previous job, and it's killing my current one.</p><p>I used to work at Adobe... Despite constantly telling my team and manager I strongly disliked GenAI and didn't want to be involved in GenAI projects, when AI hype really started picking up, my team was disbanded and I was separated from my teammates to be put on a team specifically working on GenAI. I quit, because I didn't see any way for me to organize, obstruct, or resist without being part of building something that went against my values. I often wonder whether I should have let myself get fired instead. This was before we learned that Adobe trained Firefly on Stock contributions without contributors' opt-in, and before the Terms of Service debacle, so I'm glad I wasn't there for that at least.</p><p>Now I work in the public sector. It's better in most ways, but I have to spend ridiculous amounts of time explaining to colleagues and bosses why no, we can't just "use AI" to complete the task at hand. It feels like every week there's a new sales pitch from a company claiming that their AI tool will solve all our problems—companies are desperate to claw back their AI investment, and they're hoping to find easy marks in the public sector.</p><p>I don't want to be a curmudgeon! I like tech and I just want to do tech stuff without constantly having to call bullshit on AI nonsense. I'd rather be doing my actual job, and organizing with my colleagues. It's exhausting to deal with credulous magical thinking from decision-makers who should know better.</p><p><strong>*My work at Adobe*</strong> </p><p>When I was at Adobe, I worked in Document Cloud. So like Acrobat, not Photoshop. For most of my time there, my job was evaluating machine learning models to see if they were good enough to put in a product. The vast majority of the time, Document Cloud leadership killed machine learning projects before they ended up in a product. That was either because the quality wasn't good enough, or because of a lack of "go-to-market.” In other words, middle and upper management generally did not accept that machine learning is only appropriate for solving a small subset of problems, which need to be rigorously scoped and well-defined. 
They were looking for "everything machines" (these are derogatory air quotes, not a direct quote) that would be useful for huge numbers of users. </p><p>By the time AI hype really started to pick up, I had moved to a team working on internal tools. I wasn't building or evaluating machine learning models and I was outspoken about not wanting to do that. When LLM hype got really big, senior leadership started describing it as an "existential threat" (that is a direct quote as far as I remember), and re-organizing teams to get LLMs into Document Cloud as soon as possible. Adobe did not do *anything* quickly, so this was a huge change. A big red flag for me was that rather than building our own LLMs, Adobe used OpenAI's chatbots. When I asked about all of OpenAI's ethical and environmental issues, management made generic gestures towards being concerned but never actually said or did anything substantive about it. At that point I quit, because I had specifically been saying I didn't want to be involved in GenAI, and given the rushed and sloppy nature of the rollout, I didn't want my name anywhere near it. </p><p><strong>*Colleagues' reactions*</strong> </p><p>Definitely I knew some colleagues who didn't like what Adobe was doing. There were a lot of people who privately agreed with me but publicly went along with the plan. Generally because they were worried about job security, but also there's a belief at Adobe that the company's approach to AI isn't perfect but it's more ethical than the competition. Despite being a huge company, teams were mostly isolated from each other, and as far as I know there wasn't a Slack channel for talking about AI concerns or anything like that. When I asked critical questions during department meetings or expressed frustration with leadership for ignoring concerns, people told me to go through the chain of command and not to be too confrontational. </p><p>Looking back, I wish my goal hadn't been to persuade managers but instead to organize fellow workers. I was probably too timid in my attempts to organize. I do regret that I didn't try having more explicit 1-on-1's about this, even though it would have been risky. Obviously I was very lucky/privileged to have enough savings to even consider quitting or letting myself get fired in this shitty job market, and I often wonder if I could have done more to combine strategies and resources with other colleagues so that fighting back would be less risky for everyone. 
</p>
<figure><figcaption class="image-caption">[Image] Screenshot, Adobe’s blog. <a href="https://blog.adobe.com/en/publish/2024/10/14/adobes-new-skilling-initiative-aims-bring-ai-literacy-content-creation-digital-marketing-skills-30-million-worldwide">Source</a>.</figcaption></figure><p><strong>*Impact of AI on work*</strong> </p><p>When the GenAI push started, a lot more of my colleagues started working nights and weekends, which was rare (and even discouraged) before then. Managers paid lip service to Adobe's continuing commitment to work-life balance, but in practice that didn't match up with the sense of urgency and the unrealistic deadlines. I'm not aware of anyone who got fired or laid off specifically because of getting replaced by AI, and it looks like teams are still hiring. Although for what it's worth, in general Adobe does not do layoffs these days; instead they pressure people into quitting by taking work away from them, putting them on PIPs, that kind of thing. </p><p>I found out that a colleague who had been struggling with a simple programming task for over a month—and refusing frequent offers for help—was struggling because they were trying to prompt an LLM for the solution and then make sense of the LLM's irrelevant and poorly organized output. They could have finished the work in a day or two if they had just asked for help clarifying what they needed to do. I and their other teammates would gladly have provided non-judgmental support if they had asked. </p><p>Our team found out that a software vendor (I can't say which one, but it was one of the big companies pushing Agentic AI) was using AI to route our service request tickets. As a result, our tickets were being misclassified, which meant that the vendor was failing to resolve high-priority service disruptions that we had reported. We wasted days on this, if not weeks.</p><p><strong>*My current job*</strong> </p><p>At my current job, I'm basically a combination of programmer and database administrator. I like the work way more than what I did at Adobe. Much like in the corporate world, there are a lot of middle and upper managers who want to "extract actionable insights" from data, but lack the information literacy and technical knowledge to understand what they can (or should) ask for. And the people below them are often unwilling to push back on unreasonable expectations. It's very frustrating to explain to executives that the marketing pitches they hear about AI are not reflective of reality. It makes us seem like we're afraid of change, or trying to prevent "progress" and "efficiency." 
</p><p>So I would say the private and public sector have this in common: the higher up you go in the organization, the more enthusiastic people are about "AI,” and the less they understand about the software, and (not coincidentally) the less they understand what their department actually does. And to the extent that workers are opposed to "AI,” they're afraid of organizing, because it feels like executives are looking for reasons to cut staff.</p><h1><strong>“No crypto, no AI”</strong></h1><p><strong>Tech worker.</strong></p><p>So this is sort of an anecdote in the opposite direction of AI taking jobs—in a recent interview process at a mature startup in the travel tech space, part of the offer negotiations was essentially me stating “yeah, I don’t want to work here if you expect me to use or produce LLM-based features or products” (this is relevant as the role is staff data scientist, so ostensibly on the supply side of AI tooling), and them responding with “yeah, if you want to do LLM work this isn’t the place for you.” </p><p>Though my network isn’t extensive, I feel like this is a growing sentiment in the small- and medium-tech space - my primary social media is on a tech-centric instance of the fediverse (<a href="http://hachyderm.io/">hachyderm.io</a>) and more often than not, when I see the #GetFediHired hashtag, it’s accompanied by something akin to “no crypto, no AI” (also no Microsoft Teams, but I digress).</p><h1><strong>“Gradual addition of AI to the workplace”</strong></h1><p><strong>Computer programmer.</strong></p><p>Our department has now brought in Copilot, and we are being encouraged to use it for writing and reviewing code. Obviously we are told that we need to review the AI outputs, but it is starting to kill my enjoyment of my work; I love the creative problem-solving aspect of programming, and now the majority of that work is being passed off to AI, with me as the reviewer of the AI's work. 
This isn't why I joined this career, and it may be why I leave it if it continues to get worse.</p><h1><strong>“my experience with AI at work and how I just want to make it do what I don't want to do myself”</strong></h1><p><strong>Software engineer at a large tech firm.</strong></p><p>All my life, I’ve wanted to be an artist. Any kind of artist. 
I still daydream of a future where I spend my time frolicking in my own creativity while my own work brings me uninterrupted prosperity.</p><p>Yet this has not come to pass, and despite graduate level art degrees, the only income I can find is the result of a second-class coding job for a wildly capitalist company. It’s forty hours a week of the dullest work imaginable, but it means I have time to indulge in wishful thinking and occasionally, a new guitar.</p><div class="pullquote"><p>Real use cases where AI can be used to do work that regular old programming could not are so rare that when I discovered one two weeks ago, I asked for a raise in the same breath as the pitch.</p></div><p>I am experiencing exactly what you describe. There’s been layoffs recently, and my company is investing heavily in AI, even though they’re not sure yet how best to make it do anything that our corporate overlords imagine it should do.</p><p>From the c-level, they push around ideas about how we could <em>code AI to do work</em>, but in reality, those on the ground are only using AI <em>to help write code that does the work</em>, as the code always has. Real use cases where AI can be used to do work that regular old programming could not are so rare that when I discovered one two weeks ago, I asked for a raise in the same breath as the pitch.</p><p>And here I am, five hundred words into this little essay, and I’ve barely touched on AI! Nor have I even touched any of the AI tools that are so proudly thrust into my face to produce this. I’ve played around with AI tools for creative writing, and while they’re good at fixing my most embarrassing grammar errors, none of them have helped me in my effort to bridge the gap between my humble talent as a creative and my aspirations for my effort.</p><p>There’s a meme going on Pinterest that I believe sums up this moment: “We wanted robots to clean the dishes and do our laundry, so we could draw pictures and write stories. Instead they gave us robots to draw pictures and write stories, so we could clean dishes and do laundry.” This feels very true in the sense that human talent is getting valued not for the time it took to gain it and the ingenuity it proves, but for how well it feeds the greed of those who can afford to invest in bulk. But art in capitalism has always been this way, hasn’t it? If we don’t have a patron, we might as well eat our paint, and AI only tightens that grip that the privileged have held us in for centuries.</p><p>I’ve never been so fortunate to consider the work that funds my DoorDash addiction to be my passion’s output, and perhaps that’s why I’m not afraid of what I’ll lose. But it’s that same work that has me sharing notes with fellow programmers, and many of them will say with blunt honesty that they’re worried they’ll be replaced by AI. This is a vulnerability I rarely see from the group of people who often elevated their work as <em>valuable</em> and <em>practical</em>, as opposed to my efforts to learn how to make music and poetry, which were <em>wasteful</em> and <em>useless</em>. But I am like a plant that learned how to grow on rocks and eat insects. In a meeting soon, I’m going to level with them:</p><p><em>Don’t you understand? This work, what we do day in and day out for a soulless organization that drives profit from stealing our essence, </em>this is the laundry!<em> And if they think I’ll just throw that work into a machine and let it do all the work for me, they’re right. 
But it’s a machine that automates the work of running machines that automate the work that people used to do by hand, while constantly stealing glances at the clock, just waiting for the moment when they could be out from under the gaze of some righteous egomaniac. </em></p><p>Maybe this is just the perspective of someone who’s seen her work, of almost any type, get devalued with such regularity that it’s hard to imagine the robots making it any more difficult than it already is. No one’s ever really cared about my Instagram posts. No one pretends that my code will change the world. Perhaps, someday, I’ll make more money while babysitting on the weekends. I spend a lot of time thinking about things that haven’t worked out for me, and for us, as a society, and I think some of our worst failures come from moments when we can’t differentiate between the ability to use machines and our abilities as machines.</p><p>Last week I made a pie for my family, and I obviously didn’t get paid for it. Somewhere off in the offices of the illuminati, an accountant will calculate the value of the oven that baked the crust, the refrigerator that cooled the filling, the bougie pie dish that made my effort look food-blog ready. But there’s no monetary value in the work I did that literally put food on the table, and I rarely, if ever, get paid to perform the music I love, or receive more than pocket change for the short stories I publish. I keep thinking that the solution to both problems exists in some future innovation, but I can’t imagine what that invention would be, and I can’t find proof of a real connection between the two.</p><p>Maybe ChatGPT knows the answer to this riddle? I can throw a penny into our new philosophy vending machine, but I might come up with a better answer myself if I think about it while I unload the dishwasher.</p><p>PS: I didn’t use AI to write this; I also didn’t even bother to push it through an AI extruder to check the grammar. I guess I’m just feeling too lazy today to push that button! Have a nice weekend.</p><p><em>[If your job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>.]</em></p><p class="button-wrapper" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h1><strong>“AI-native high school interns”</strong></h1><p><strong>Fintech worker.</strong></p><p>Hello! I am a tech worker at a fintech. 
My workplace has been pushing AI really hard this year.</p><p>Here's the latest thing <a href="https://www.linkedin.com/posts/activity-7333887709163225095-nw-W/">they thought up</a>:</p><figure><figcaption class="image-caption">[Image] Screenshot of the linked LinkedIn post.</figcaption></figure>
<p>It 100% feels like testing the waters for just how unqualified and underpaid your workforce can be. As long as they can work the LLM shovel, they're good, right?</p><p>The children yearn for the (LLM) mines!</p><h1><strong>“CrowdStrike”</strong></h1><p><strong>Current CrowdStrike employee.</strong></p><p>I work for CrowdStrike, the leading cybersecurity company in the United States. As a current employee, I can't reveal specific details about myself.</p><p>As you may have heard, CrowdStrike laid off 500 employees on May 7th, 2025. These were not underperformers. Many of them were relatively new hires. This action was presented as a strategic realignment with a special focus on "doubling down on our highest-impact opportunities," to quote CEO George Kurtz.</p><p>In the internal email, he states further:</p><blockquote><p><strong>AI investments accelerate execution and efficiency: AI has always been foundational to how we operate. AI flattens our hiring curve, and helps us innovate from idea to product faster. It streamlines go-to-market, improves customer outcomes, and drives efficiencies across both the front and back office. AI is a force multiplier throughout the business.</strong></p></blockquote><p>So, AI has literally killed many jobs at CrowdStrike this week. I'm fortunate to be among the survivors, but I don't know for how long.</p><p>Generative AI, particularly LLMs, is permeating every aspect of the company. It's in our internal chats. It's integrated into our note-taking tools. It's being used in triage, analysis, engineering, and customer communications. Every week, I'm pinged with an announcement that some new AI capability has been rolled out to me and that I am expected to make use of it. Customers who are paying for live human service packages from us are increasingly getting the output of an LLM instead. Quality Assurance reviewers have started criticizing reviewees for failing to run their work through AI tools for things as trivial as spelling and grammar. Check out the <a href="https://www.crowdstrike.com/en-us/">front page</a> and count the number of times "AI" is mentioned. It didn't use to be like this.</p><p>CrowdStrike is currently achieving record financials. At the time I write this, CRWD is trading at $428.63, within striking range of the stock's 52-week high. The efforts of my colleagues and me to rebuild from the incident of July 19, 2024 have been rewarded with shareholder approval and 500 layoffs. 
Some of the impacted individuals were recent graduates of four-year programs who, in addition to student loans, have moving expenses because they physically relocated to Texas shortly before this RIF occurred.</p><p>Many lower-level employees at CrowdStrike are big fans of generative AI; as techy people in a techy job, they fit the bill for that. Even so, many of them have become wary… of what increased AI adoption means for them and their colleagues. Some of the enthusiastic among them are beginning to realize that they're training the means of additional layoffs—perhaps their own.</p><p>CrowdStrikers have been encouraged to handle the additional per capita workload by simply working harder and sometimes working longer, with no additional compensation on either count. While our Machine Learning systems continue to perform with excellence, I have yet to be convinced that our usage of genAI has been productive, given the proofreading, troubleshooting, and general babysitting it requires. Some of the genAI tools we have available to us are just completely useless. Several of the LLMs have produced inaccuracies which have been uncritically communicated to our customers by CrowdStrikers who failed to exhibit due diligence. Those errors were caught by said customers, and they were embarrassing to us all.</p>
<figure><figcaption class="image-caption">[Image] CNBC, screenshot. <a href="https://www.cnbc.com/2025/05/07/crowdstrike-announces-5percent-job-cuts-says-ai-reshaping-every-industry.html">Source</a>.</figcaption></figure><p>I would stop short of saying that the existence of genAI tools within the company is directly increasing the per capita workload, but an argument could be made that it is indirectly accomplishing that. The net result is not a lightening of the load, as has been so often promised.</p><p>Morale is at an all-time low. Many survivors have already started investigating their options to leave, either on their own terms or whenever the executives inevitably decide an LLM is adequate to approximately replace us.</p><p>The company is very proud of its <a href="https://www.crowdstrike.com/en-us/careers/#:~:text=Our%20workplace%20recognitions">recognitions</a> as an employer. As CrowdStrikers, we used to be proud of it too. Now we just feel betrayed.</p><h1><strong>“Coding assistants push”</strong></h1><p><strong>Software engineer, health tech startup.</strong></p><p>I work as a software engineer, and we've been getting a push to adopt AI coding assistants in the last few months. I tried it, mostly to be able to critique it, and found it super annoying, so I just stopped using it. But I'm starting to get worried. Our CEO just posted this in an internal AI-dedicated Slack channel. The second message is particularly concerning.</p><p>[It’s a screenshot of a message containing a comment from another developer. 
It reads:]</p><blockquote><p>"I am sufficiently AI-pilled to think that if you aren't using agentic coding tools, then you are the problem. They are good enough now that it's a skills issue. Almost everyone not using them will be unemployed in 2 years and won't know why (since they're the ones on Hacker News saying "these tools never work for me!" and it turns out they are using very bad prompts and are super defensive about it)."</p></blockquote><p>We had some layoffs long before this AI wave, and the company has not picked up the pace of hiring since. I'm sure now they're thinking twice before hiring anyone, though. The biggest change was in how enthusiastically management is incentivizing us to start using AI. First they offered coding assistants for everyone to use; then the hackdays we had every semester turned into a week-long hackathon specifically focused on AI projects.</p><p>Now we have an engineer, if you can call him that, working on a project that will introduce more than 30k lines of AI-generated code into our codebase, without a single unit test. It will be impossible to do a proper code review on this much code, and it will become a maintenance nightmare and possibly a security hazard. I don't need to tell you how much management is cheering that on.</p><h1><strong>“My job hasn't been killed, yet”</strong></h1><p><strong>Front end software engineer at a major software company.</strong></p><p>My job hasn't been killed yet, but there's definitely a possibility that it could be soon. I work for a major software company as a front end software engineer. I believe that there's been AI-related development for about a year and a half. It's a little hard to nail down exactly because I'm one of the few remaining US-based developers and the majority of our engineering department is in India. The teams are pretty siloed, and the day-to-day of who's on what teams and what they're doing is pretty opaque. There's been a pretty steady increase in the desire and pressure to start using AI tools for a while now. As a result, timelines have been getting shorter and shorter, and likewise the patience of upper management. They've tried to create tools that would help with some of the day-to-day repeatable UI pieces that I work on, but the results were unusable from my end, and I found that I could create them on my own in the same amount of time.</p><div class="pullquote"><p>The agents themselves had names and AI-generated profile pictures of minorities that aren't actually represented in the upper levels of the company, which I find kind of gross.</p></div><p>Around October/November of last year, the CEO and President (who's the former head of Product) decided to go all-in on AI development and integrate it into all aspects of our business. Not just engineering, but all departments (Sales, Customer Operations, People Operations, etc). I don't get a ton of insight into other departments, other than I've heard that Customer Ops is hemorrhaging people and that People Ops sent an email touting that we could now use AI to write recognition messages to each other celebrating workplace successes (insulting and somewhat dystopian). 
On the engineering side, I think initially there was a push to be an AI leader in supply chain, so there were a lot of training courses, hackathons, and (for India) AI-focused off-sites where they wanted to get broad adoption of AI tools and ideas for products that we could use AI in.</p><p>Then in February, the CEO declared that what we had been doing was no longer a growth business and that we were introducing an AI control tower and agents, effectively making us an AI-first company. The agents themselves had names and AI-generated profile pictures of minorities that aren't actually represented in the upper levels of the company, which I find kind of gross. Since then, the CEO has been pretty insistent about AI in every communication, and therefore there's an increased downward pressure to use it everywhere. He has never been as involved in the day-to-day workings of the company as he has been about AI. Most consequential is that somewhere he has gotten the idea that because code can now be generated in a matter of minutes, whole SAS<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> applications, like the ones we've been developing for years, can be built in a matter of days. He's read all these hype articles declaring a 60-75% increase in engineering productivity. I guess there was a competitor in one of our verticals that has just come on the scene and done basically what our app can do, but with more functionality. A number of things could explain this, but the conclusion has been that they used AI and built the equivalent of our app in a month. So ever since then, it's been a relentless stream of pressure to fully use AI everywhere to "improve efficiency" and get things out as fast as possible. They've started mandating tracking AI usage in our JIRA stories<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>, the CEO has led Engineering all-hands<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> (he has no engineering background), and now he is mandating that we go from idea to release in a single sprint (2 weeks) or be able to explain why we're not able to meet that goal.</p><p>I've been working under increasingly compressed deadlines for about a year and am pretty burned out right now, and we haven't even started the AI warp-speed churn that they've proposed recently. It's been pretty well documented how inaccurate and insecure these LLMs are, and, for me, it seems like we're on a pretty self-destructive path here. We ostensibly do have a company AI code of conduct, but I don't know how this proposed shift in engineering priority doesn't break every guideline. I'm not the greatest developer in the world, but I try to write solid code that works, so I've been very resistant to using LLMs in code. I want my work to be reliable and understandable in case it does need to be fixed. I don't have time to mess around and go down the rabbit holes that the code chatbots would inevitably send me down. So I foresee the major bugs and outages just skyrocketing under this new status quo. How they pitch it to us is that we can generate the code fast and have plenty of time to think about architecture, keep a good work/life balance, etc. 
</p><p>But in practice, we will be under the gun of an endless stream of two-week deadlines and management that won't be happy with how long everything takes or with the quality of the output. The people making these decisions love the speed of code generation but never consider accuracy, or how big a problem even small errors become when perpetuated at scale. No one else is speaking up about these dangers, but I feel like if I do (well, more loudly than just to immediate low-level managers), I'll be let go. It's pretty disheartening, and I would love to leave, but of course it's hard to find another job competing with all the other talented folks who have been let go through all this. Working in software development for so long and seeing so many colleagues accept that we are just prompt generators banging out substandard products has been rough. I'm imagining this must be kind of what it feels like to be in a zombie movie. I'm not sure how this all turns out, but it doesn't look great at the moment.</p><p>The one funny anecdote during all this AI insanity is that they had someone from GitHub do a live demo presentation on Copilot and the agents. Not only was everything he demoed either unreliable or underwhelming, but he <em>could not stop</em> <em>yawning</em> loudly during his own presentation. Even the AI champions are tired.</p><p><em><strong>Less than a month later, the engineer emailed me a followup.</strong></em></p><p>And I just got laid off yesterday. The reason cited was that they need full-stack developers and want engineers based in India, not my performance. My front-end-focused position was rendered obsolete. Very plausible, since they definitely prefer hiring young and less expensive developers abroad. So AI is not technically the direct cause, but it is definitely a factor in the background. They'll hire a bunch of new graduates to churn out whatever AI solutions they think they can hype. Annoyingly, they did announce two new AI agents yesterday, again with the faces and names of women. The positive is that they did give me a decent severance, so in the short term I'm fine financially, and I also don't have to deal with the pressure of ridiculous deadlines.</p><h1><strong>“AI experience”</strong></h1><p><strong>Edtech worker.</strong></p><p>I work for a small edtech startup and do all of our marketing, communications, and social media. I've always enjoyed doing our ed policy newsletter and other writing-related projects. My boss absolutely loves AI, but until recently I'd been able to avoid it. A few weeks ago, my boss let me know that all of my content writing would now be done with ChatGPT so I would have more time to work on other projects. He also wants me to use AI to generate images of students, which I've luckily been able to push back on. </p><p>Although he says it's a time saver, I don't actually have other projects, so not only am I creating complete slop, but I'm also left with large amounts of time to do nothing. Being forced to use AI has turned a job I liked into something I dread. As someone with a journalism background, it feels insulting to use AI instead of creating quality blog posts about education policy. 
Unfortunately, as a recent grad, I haven't had much luck finding another job despite applying to hundreds, so for now I have to make do with the situation, but I will say that having to use AI is making me reconsider where I'm working.</p><h1><strong>“AI makes everything worse”</strong></h1><p><strong>Senior developer at a cloud company.</strong></p><p>I work for a cloud service provider (who will retaliate if you don't post this anonymously, unfortunately), and they're absolutely desperate for the current AI fad to be useful for something. </p><p>They're completely ignoring the environmental costs (insane power requirements, draining lakes of freshwater for cooling, burning untold CPU and GPU hours that could be dedicated to something useful instead) because there's a buck to be made. They hope. But they're still greenwashing the company of course. </p><p>For cloud companies, AI is a gold rush; until the bubble bursts, they can sell ridiculous amounts of expensive server time (lots and lots of CPU/GPU/memory/storage) and tons of traffic to and from the models. They're selling shovels to the gold miners, and are in a great position to charge rent if someone strikes a vein of usefulness. </p><div class="pullquote"><p>I can see a scenario coming fast that's going to set back software development by years</p></div><p>But they're desperate for this to keep going. They're demanding we use AI for literally everything in our jobs. Our managers want to know what we're using AI for and what AI "innovations" we've come up with. If we're not using AI for everything, they want to know why not. I don't think we're quite at the point of this being part of our performance evaluations, but the company is famously opaque about that, so who knows. It's certainly something the employees worry about. </p><p>My work involves standards compliance. Using AI for any part of it will literally double our work load because we'll have to get it to do the thing, and then carefully review and edit the output for accuracy. You can't do compliance work with vibes. What's the point of burning resources to summarize things when you need to review the original and then the output for accuracy anyway? </p><p>I can see a scenario coming fast that's going to set back software development by years (decades? who knows!): </p><ul><li><p>C-suite: we don't need these expensive senior developers, interns can code with AI </p></li><li><p>C-suite: we don't need these expensive security developers, AI can find the problems </p></li><li><p>senior developers are laid off, or quit due to terrible working conditions (we're already seeing this) </p></li><li><p>they're replaced with junior developers, fresh out of school... 
cheap, with no sense of work-life balance, and no families to distract them </p></li><li><p>all the vibe coding goes straight to production because, obviously, we trust the AI and don't know any better; also we've been told to use AI for everything </p></li><li><p>at some point, all the bugs and security vulnerabilities make everything so bad it actually starts impacting the bottom line </p></li><li><p>uh oh, the vibe coders never progressed beyond junior skill levels, so nobody can do the code reviews, nobody can find and fix the security problems </p></li><li><p>if all the fired senior developers haven't retired or found other jobs (a lot of these people want to get out of tech, because big tech has made everything terrible), they'll need to be hired back, hopefully at massive premiums due to demand</p></li></ul><p>If these tools were generally useful, they wouldn't need to force them on us; we'd be picking them up and running with them.</p><div class="pullquote"><p>At first, the exec’s AI speech was greeted by the typical heart-eyes and confetti emojis, but then I saw there were a few thumbs-down emojis thrown into the mix. This was shocking enough on its own, but then the thumbs-downs multiplied, tens and hundreds of them appearing on the screen, making those few little confettis seem weak and pathetic. I was already floored at this point, and then someone posted the first tomato…</p></div><h1><strong>“then came the tomatoes”</strong></h1><p><strong>Tech worker at a well-known tech company.</strong></p><p>I work at a fairly well-known tech company currently trying to transform itself from a respected, healthy brand to a win-at-all-costs hyperscaler. The result has mostly been a lot of bullshit marketing promises pegged to vaporware, abrupt shifts in strategy that are never explained, and most of all, the rapid degradation of a once healthy, candid corporate culture into one that is intolerant of dissent, enforces constant positivity, and, just this week, ominously announced that we are “shifting to a high-performance culture.”</p><p>The company leadership also recently (belatedly) declared that “we are going all in on AI.” </p><p>I don’t use AI. I morally object to it, for reasons I hardly need to explain to you. And now I feel like I’m hiding in plain sight, terrified someone will notice I’m actually doing all my own work.</p><p>We’re hiring for new roles and have been explicitly told that no candidate will be considered for *any* job unless they’re on board with AI. Every department has to show how they’re “incorporating AI into their workflows.” I heard through the grapevine that anyone so much as expressing skepticism “does not have a future with the company.”</p><p>It is pretty bleak. I’d leave, but I keep hearing it’s the same everywhere.</p><p>But then something insane happened.</p><p>At the most recent company all-hands, typically the site of the most painful sycophancy, one of our executives gave a speech formally announcing our big AI gambit. The meeting is so big that there is no Zoom chat, so people can only directly react via emojis. At first, the exec’s AI speech was greeted by the typical heart-eyes and confetti emojis, but then I saw there were a few thumbs-down emojis thrown into the mix. This was shocking enough on its own, but then the thumbs-downs multiplied, tens and hundreds of them appearing on the screen, making those few little confettis seem weak and pathetic. I was already floored at this point, and then someone posted the first tomato. 
It caught on like wildfire until wave after wave of virtual tomatoes was being thrown at the executive’s head—a mass outcry against being forced to embrace AI at gunpoint. He tried to keep going, but his eyes kept darting, in increasing panic, to the corner of his screen where the emojis appeared.</p><p>It was goddamn inspiring. And while the executives didn’t immediately abandon all their AI plans, they are definitely shaken by what happened, and nervous about mass dissent. As they should be.</p><div><hr></div><p><em>Thanks again to every tech worker who shared their story with me, whether it was included here or not—and to </em>every <em>worker who has written in to <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>, period. I intend to produce the next installment in coming weeks, so subscribe below if that’s of interest. And if you’d like to support this work, and receive the paywalled Critical AI reports and special commentary, please consider becoming a paid subscriber. My wonderful paid subscribers are the only reason I am able to do any of this. A million thanks. </em></p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p><em>Finally, one more time with feeling: If your job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>. If you’re willing to participate in an on-camera interview, contact us at <a href="mailto:aikilledmyjob@perfectunion.us">AIkilledmyjob@perfectunion.us</a>. Thanks everyone—until next time. 
</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The title is somewhat tongue-in-cheek; we recognize that <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-apocalypse-is-for-the">AI is not sentient</a>, that it’s <a href="https://gizmodo.com/robots-are-not-coming-for-your-job-management-is-1835127820">management, not AI, that fires people</a>, but also that there are many ways that AI can “kill” a job, by sapping the pleasure one derives from work, draining it of skill and expertise, or otherwise subjecting it to degradation.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The two footnotes in this account were provided by the worker, and are presented exactly as shared. The first is this link: https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>like how Google Docs signals typos, an example: <a href="https://superuser.com/questions/1796376/getting-rid-of-the-red-squiggly-underline-on-errors">https://superuser.com/questions/1796376/getting-rid-of-the-red-squiggly-underline-on-errors</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Editor’s note: RIF is “reduction in force.”</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Ed: Chief Marketing Officer.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Ed: PIP stands for Performance Improvement Plan — in tech, getting a PIP, or PIPd, is like getting an official warning that you’re underperforming and thus more likely to get terminated. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Ed: Software as a service.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Ed: <a href="https://www.atlassian.com/software/jira">Jira</a> is project management and bug-tracking software.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Ed: All-hands are meetings where everyone from the entire company or department are required to attend.</p></div></div>How AI is killing jobs in the tech industry - Blood in the Machinehttps://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech2025-06-25T11:15:26.000Z<p>“What will AI mean for jobs?” may be the single most-asked question about the technology category that dominates Silicon Valley, pop culture, and our politics. Fears that AI will put us out of work routinely top <a href="https://www.pewresearch.org/social-trends/2025/02/25/u-s-workers-are-more-worried-than-hopeful-about-future-ai-use-in-the-workplace/">opinion</a> <a href="https://poll.qu.edu/poll-release?releaseid=3923">polls</a>. Bosses are citing AI as the reason they’re slashing human staff. Firms like <a href="https://www.theverge.com/news/657594/duolingo-ai-first-replace-contract-workers">Duolingo</a> and <a href="https://www.cnbc.com/2025/05/14/klarna-ceo-says-ai-helped-company-shrink-workforce-by-40percent.html">Klarna</a> have laid off workers in <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now">loudly touted shifts to AI</a>, and DOGE used <a href="https://www.bloodinthemachine.com/p/doges-ai-first-strategist-is-now">its “AI-first” strategy</a> to <a href="https://www.bloodinthemachine.com/p/whats-really-behind-elon-musk-and">justify firing federal workers</a>. </p><p>Meanwhile, tech executives are pouring fuel on the flames. Dario Amodei, the CEO of Anthropic, claims that AI products like his will soon <a href="https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic">eliminate half of entry level white collar jobs</a>, and replace up to 20% of all jobs, period. OpenAI’s Sam Altman <a href="https://www.msn.com/en-us/news/technology/openai-ceo-sam-altman-says-ai-is-ready-for-entry-level-jobs-but-unbothered-gen-z-have-made-it-their-new-work-friend/ar-AA1G9Y7T?ocid=msedgntp&pc=U531&cvid=8f4eeddbcf404ac086d362e1e015f247&ei=68">says that</a> AI systems can replace entry level workers, and will soon be able to code “like an experienced software engineer.” Elsewhere, he’s been <a href="https://futurism.com/the-byte/sam-altman-ai-destroying-jobs">blunter</a>, claiming "Jobs are definitely going to go away, full stop."</p><p>But the question remains: What’s actually happening on the ground, right now? There’s no doubt that lots of firms are investing heavily in AI and <em>trying </em>to use it to improve productivity and cut labor costs. And it’s clear that in certain industries, especially creative ones, the rise of cheap AI-generated content is hitting workers hard. 
Yet broader economic data on AI impacts <a href="https://www.nber.org/papers/w33777">suggests a more limited disruption</a>. Two and a half years after the rise of ChatGPT, after a torrent of promises, CEO talk, and think pieces, how is—or isn’t—AI <em>really</em> reshaping work? </p><p>About a month ago, I <a href="https://www.bloodinthemachine.com/p/did-ai-kill-your-job">put out a call in hopes of finding some answers</a>. I had a vague idea for a project I’d call AI Killed My Job, that would seek to examine the many ways that management has used AI to impact, transform, degrade, or, yes, try to replace workers outright. It’s premised on the notion that we’ve heard lots of speculation and plenty of sales pitches for AI automation—but we have not heard nearly enough from the workers experiencing the phenomenon themselves.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p><em>[Header art by Koren Shadmi.]</em></p><p>So, I wrote a post here on the newsletter <a href="https://www.bloodinthemachine.com/p/did-ai-kill-your-job">explaining the idea</a>, posted <a href="https://bsky.app/profile/bcmerchant.bsky.social/post/3loowy27zr22y">a call to social media</a>, and asked for testimonials on various <a href="https://www.youtube.com/watch?v=JNj2jm_Iy54">news shows</a> and <a href="https://systemcrash.info/">podcasts</a>. I was floored by the response. The stories came rolling in. I heard from lots of folks I expected to—artists, illustrators, copywriters, translators—and many I didn’t—senior engineers, managers of marketing departments, construction industry consultants. And just about everyone in between. I got so many responses, and so many good ones, that I resolved to structure the project as a series of pieces that center the workers’ voices and testimonies themselves, and share their experiences in their own words. </p><p>Because I got so many accounts, I decided to break down the articles by field and background. Starting, today, with an industry that’s at once the source of the automating technology and feeling some of its most immediate impacts. Today, we’ll begin by looking at how AI is killing jobs in the tech industry. </p><p>I heard from workers who recounted how managers used AI to justify laying them off, to speed up their work, and to make them take over the workload of recently terminated peers. I heard from workers at the biggest tech giants and the smallest startups—from workers at Google, TikTok, Adobe, Dropbox, and CrowdStrike, to those at startups with just a handful of employees. I heard stories of scheming corporate climbers using AI to consolidate power inside the organization. 
I heard tales of AI being openly scorned in company forums by revolting workers. And yes, I heard lots of sad stories of workers getting let go so management could make room for AI. I received a message from one worker who wrote to say they were concerned for their job—and a follow-up note just weeks later to say that they’d lost it.</p><p>Of the scores of responses I received, I’ve selected 15 that represent these trends; some are short and offer a snapshot of various AI impacts or a quick look at the future of employment. Others are longer accounts with many insights into what it means to work in tech in the time of AI—and what it might mean to work, period. The humor, grace, and candor in many of these testimonials often amazed me. I cannot thank those who wrote them enough. Some of these workers took great risks to share their stories at a time when it is, in tech, a legitimate threat to one’s job to speak up about AI. For this reason, I’ve agreed to keep these testimonies anonymous, to protect the identities of the workers who shared them.</p><p>Generative AI is the most hyped, most well-capitalized technology of our generation, and its key promise, that it will automate jobs, desperately needs to be examined. This is the start of that examination.</p><p><strong>Three very quick notes before we move on. First, </strong>this newsletter, and projects like AI Killed My Job, require a lot of work to produce. If you find this valuable, please consider becoming a paid subscriber. With enough support, I can expand such projects with human editors, researchers, and even artists—like Koren Shadmi, who I was able to pay a small fee for the 100% human-generated art above, thanks to subscribers like you. <strong>Second</strong>, if <em>your</em> job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>. I would love to hear your account and will keep it confidential, as I would any source. <strong>Third, </strong>some news: I'm partnering with the good folks at <a href="https://perfectunion.us/">More Perfect Union</a> to produce a video edition of AI Killed My Job. If you're interested in participating, or are willing to sit for an on-camera interview to discuss how AI has impacted your livelihood, <a href="mailto:AIkilledmyjob@perfectunion.us">please reach out</a>. Thanks for reading, human, and an extra thanks to all those whose support makes this work possible. Tech is just the first industry I plan on covering; I have countless more stories in fields from law to media to customer service to art to share. Stay tuned, and onwards.</p><p><em>This post was edited by Mike Pearl. </em></p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h1><strong>“AI Generated Trainers”</strong></h1><p><strong>Content moderator at TikTok.</strong></p><p>I have a story. I worked for TikTok as a content moderator from August 2022 to April 2024, and though I was not replaced by AI, I couldn't help noticing that some of the trainers were. </p><p>In my first year, I would be assigned training videos that featured real people either reading or acting. These trainings would be viewed internally only, not available to the public. 
Topics could be things like learning about biases, avoiding workplace harassment, policy refreshers, and so on. In the early months of my time there, the trainings were usually recorded slideshows with humans reading and elaborating on the topics. Sometimes they were videos that included people acting out scenarios. Over time, the human trainers were replaced with AI by way of generated voices or even people going over the materials in the videos.</p><p>It was honestly scary to me. I don't know how to explain it. I remember that they had added embellishments to make them seem more human. I distinctly remember a woman with an obscure black tattoo on her bicep. The speech and movement wasn't as clean as what I see in videos now, but it was close enough to leave me with an eerie sensation.</p><p>As far as content moderation goes, much of that is already done by AI across all major social media platforms. There has historically been a need for human moderators to differentiate grey areas that technology doesn't understand. (Example: someone being very aggressive in a video and using profanity, but it not being directed at an individual. AI might think the video involves bullying another user and ban the video, but a moderator can review it and see that there's no problem/no targeted individual.)</p><div class="pullquote"><p>It was honestly scary to me. I don't know how to explain it. I remember that they had added embellishments to make them seem more human.</p></div><p> I think as AI models continue to learn, however, moderators will be replaced completely. That's just a theory, but I'm already seeing the number of these job postings dwindling and hearing murmurs from former coworkers on LinkedIn about widespread layoffs.</p><h1><strong>“AI is killing the software engineer discipline”</strong></h1><p><strong>Software engineer at Google.</strong></p><p>I have been a software engineer at Google for several years. With the recent introduction of generative AI-based coding assistance tools, we are already seeing a decline in open source code quality <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> (defined as "code churn" - how often a piece of code is written only to be deleted or fixed within a short time). I am also starting to see a downward trend of (a) new engineers' readiness in doing the work, (b) engineers' willingness to learn new things, and (c) engineers' effort to put in serious thoughts in their work.</p><p>Specifically, I have recently observed first hand some of my colleagues at the start of their career heavily relying on AI-based coding assistance tools. Their "code writing" consists of iteratively and alternatingly hitting the Tab key (to accept AI-generated code) and watching for warning underlines <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> indicating there could be an error (which have been typically based on static analysis, but recently increasingly including AI-generated warnings). These young engineers - squandering their opportunities to learn how things actually work - would briefly glance at the AI-generated code and/or explanation messages and continue producing more code when "it looks okay."</p><p>I also saw experienced engineers in senior positions when faced with an important data modeling task decided to generate the database schema with generative AI. 
I originally thought it was merely a joke but recently found out that they basically just used the generated schema in actual (internal) services essentially without modification, even though there are some obvious glaring issues. Now those issues have propagated to other code that needs to interact with that database and it will be more costly to fix, so chances are people will just carry on, pretending everything is working as intended.</p><p>All of these will result in poorer software quality. "Anyone can write code" sounds good on paper, but when bad code is massively produced, it hurts everyone including those who did not ask for it and have been trusting the software industry.</p><p><em>Story: <a href="https://www.cnbc.com/2024/10/30/dropbox-slashes-20percent-of-global-workforce-eliminating-500-roles.html">CNBC</a>.</em></p><h1><strong>“How AI eliminated my job at Dropbox”</strong></h1><p><strong>Former staff engineer at Dropbox.</strong></p><p>I was part of the 20% RIF at Dropbox at the end of October.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The stated reason for this was to focus on <a href="https://dash.dropbox.com/">Dash</a>, their AI big-bet. The 10% RIF in 2023 was also to focus more on Dash.</p><p>How did this eliminate my job? Internal reprioritization, that's how. I was moving into an area that was scheduled to focus on improving Dropbox's reliability stance in 2025 and beyond, intended to be a whole-year initiative. It's tricky to go into details, but the aim was to take a holistic view of disaster preparedness beyond the long-standing disaster scenarios we had been using and were already well prepared for. Projects like this are common in well-established product lines like Dropbox's file-sync offering, as they take a comprehensive overview of both audit compliance (common criteria change every year) and market expectations.</p><p>This initiative was canned as part of the RIF, and the staffing allocated to it largely let go. Such a move is consistent with prioritizing Dash, a brand new product that does not have dominant market-share. Startups rarely prioritize availability to the extent Dropbox's file-sync product does because the big business problem faced by a startup is obtaining market-share, not staying available for your customers. As products (and companies) mature, stability starts gaining priority as part of customer <em>retention</em> engineering. Once a product becomes dominant in the sector, stability engineering often is prioritized <em>over</em> feature development. 
Dropbox file-sync has been at this point for several years.</p><p>With Dash being a new product, and company messaging being that Dash is the future of Dropbox, a reliability initiative of the type I was gearing up for was not in line with being a new product scrapping for market-share. Thus, this project and the people assigned to it were let go.</p><p><em><strong>Blood in the Machine: What are you planning next?</strong></em></p><p>This job market is absolutely punishing. I had a .gov job for the .com crash, a publicly funded .edu job for the 2008 crash, and a safe place inside a Dropbox division making money hand over fist during the COVID crash (Dropbox Sign more than doubled document throughput over 2020). This is my first tech winter on the bench, and I'm getting zero traction. 37 job apps in the months I've been looking, 4 got me talking to a human (2 of which were referrals), all bounced me after either the recruiter or technical screens. Never made it to a virtual onsite.</p><p>This has to do with me being at the Staff Engineer level, and getting there through non-traditional means. The impact is that when I go through the <em>traditional</em> screens for a high-level engineer, I flame out, because that wasn't my job. The little feedback I've gotten from my hunt is a mix of 'over-qualified for this position' and 'failed the technical screen.' Attempts to branch out to other positions like Product Manager or Technical Writer have failed due to lack of resume support and everyone hiring for Senior titles.</p><p>I may be retired now. I'm 50, but my money guy says I've already made retirement-money; any work I do now is to increase lifestyle, build contingency funds, or fund charitable initiatives. The industry is absolutely toxic right now as cost-cutting is dominating everything but the most recently funded startups. We haven't hit an actual recession in stock-prices due to aggressive cost and stock-price engineering everywhere, and cost-engineering typically tanks internal worker satisfaction. I've been on the bench for six months, money isn't a problem. Do I want to stick my head back into the cortisol amplifier?</p><p>Not really.</p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h1><strong>“It's no longer only an issue of higher-ups: colleagues are using chatgpt to undermine each other.”</strong></h1><p><strong>Tech worker, marketing department.</strong></p><p>I used to work at a mid-sized Silicon Valley startup that does hardware. The overall project is super demanding, and reliant on skilled, hands-on work. Our marketing team was tiny but committed. My manager, the CMO<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>, was one of the rare ones: deeply experienced and seasoned in the big ones, thoughtful, and someone who genuinely loved his craft.</p><p>Last year, a new hire came in to lead another department. I genuinely believe she is a product of the "LinkedIn hustler / thought-leadership / bullshit titles" culture. Super performative.</p><p>Recently, during a cross-functional meeting with a lot of people present, she casually referred to a ChatGPT model she was fine-tuning as our "Chief Marketing Officer"—in front of my manager. 
She claimed it was outperforming us. It wasn’t—it was producing garbage. But the real harm was watching someone who’d given decades to his field get <strong>humiliated</strong>, not by a machine, but by a colleague weaponizing it.</p><p>Today, in the name of “AI efficiency,” a lot of people saw the exit door and my CMO got PIPd.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>The irony here is two-fold: one, it does not seem that the people who left were victims of a turn to "vibe coding" and I suspect that the "AI efficiency" was used as an excuse to make us seem innovative even during this crisis. Two, this is a company whose product desperately needs real human care.</p><p><em>[If your job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>.]</em></p><h1><strong>“AI killed two of my jobs”</strong></h1><p><strong>Veteran tech worker at Adobe and in the public sector.</strong></p><p>AI killed my previous job, and it's killing my current one.</p><p>I used to work at Adobe... Despite constantly telling my team and manager I strongly disliked GenAI and didn't want to be involved in GenAI projects, when AI hype really started picking up, my team was disbanded and I was separated from my teammates to be put on a team specifically working on GenAI. I quit, because I didn't see any way for me to organize, obstruct, or resist without being part of building something that went against my values. I often wonder whether I should have let myself get fired instead. This was before we learned that Adobe trained Firefly on Stock contributions without contributors' opt-in, and before the Terms of Service debacle, so I'm glad I wasn't there for that at least.</p><p>Now I work in the public sector. It's better in most ways, but I have to spend ridiculous amounts of time explaining to colleagues and bosses why no, we can't just "use AI" to complete the task at hand. It feels like every week there's a new sales pitch from a company claiming that their AI tool will solve all our problems—companies are desperate to claw back their AI investment, and they're hoping to find easy marks in the public sector.</p><p>I don't want to be a curmudgeon! I like tech and I just want to do tech stuff without constantly having to call bullshit on AI nonsense. I'd rather be doing my actual job, and organizing with my colleagues. It's exhausting to deal with credulous magical thinking from decision-makers who should know better.</p><p><strong>*My work at Adobe*</strong> </p><p>When I was at Adobe, I worked in Document Cloud. So like Acrobat, not Photoshop. For most of my time there, my job was evaluating machine learning models to see if they were good enough to put in a product. The vast majority of the time, Document Cloud leadership killed machine learning projects before they ended up in a product. That was either because the quality wasn't good enough, or because of a lack of "go-to-market.” In other words, middle and upper management generally did not accept that machine learning is only appropriate for solving a small subset of problems, which need to be rigorously-scoped and well-defined. They were looking for "everything machines" (these are derogatory air quotes, not a direct quote) that would be useful for huge numbers of users. 
</p><p>By the time AI hype really started to pick up, I had moved to a team working on internal tools. I wasn't building or evaluating machine learning models and I was outspoken about not wanting to do that. When LLM hype got really big, senior leadership started describing it as an "existential threat" (that is a direct quote as far as I remember), and re-organizing teams to get LLMs into Document Cloud as soon as possible. Adobe did not do *anything* quickly, so this was a huge change. A big red flag for me was that rather than building our own LLMs, Adobe used OpenAI's chatbots. When I asked about all of OpenAI's ethical and environmental issues, management made generic gestures towards being concerned but never actually said or did anything substantive about it. At that point I quit, because I had specifically been saying I didn't want to be involved in GenAI, and given the rushed and sloppy nature of the rollout, I didn't want my name anywhere near it. </p><p><strong>*Colleagues' reactions*</strong> </p><p>Definitely I knew some colleagues who didn't like what Adobe was doing. There were a lot of people who privately agreed with me but publicly went along with the plan. Generally because they were worried about job security, but also there's a belief at Adobe that the company's approach to AI isn't perfect but it's more ethical than the competition. Despite being a huge company, teams were mostly isolated from each other, and as far as I know there wasn't a Slack channel for talking about AI concerns or anything like that. When I asked critical questions during department meetings or expressed frustration with leadership for ignoring concerns, people told me to go through the chain of command and not to be too confrontational. </p><p>Looking back, I wish my goal hadn't been to persuade managers but instead to organize fellow workers. I was probably too timid in my attempts to organize. I do regret that I didn't try having more explicit 1-on-1's about this, even though it would have been risky. Obviously I was very lucky/privileged to have enough savings to even consider quitting or letting myself get fired in this shitty job market, and I often wonder if I could have done more to combine strategies and resources with other colleagues so that fighting back would be less risky for everyone. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2HSW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2HSW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png 424w, https://substackcdn.com/image/fetch/$s_!2HSW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png 848w, https://substackcdn.com/image/fetch/$s_!2HSW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png 1272w, https://substackcdn.com/image/fetch/$s_!2HSW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2HSW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png" width="1118" height="790" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":790,"width":1118,"resizeWidth":null,"bytes":234536,"alt":"","title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/165134533?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f289bd-de1d-4b21-9c88-8a66d2f11ac0_1118x790.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!2HSW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png 424w, https://substackcdn.com/image/fetch/$s_!2HSW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png 848w, https://substackcdn.com/image/fetch/$s_!2HSW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png 1272w, https://substackcdn.com/image/fetch/$s_!2HSW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F735a03c9-b829-4b36-a385-c038c826bee3_1118x790.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide 
lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">Screenshot, Adobe’s blog. <a href="https://blog.adobe.com/en/publish/2024/10/14/adobes-new-skilling-initiative-aims-bring-ai-literacy-content-creation-digital-marketing-skills-30-million-worldwide">Source</a>.</figcaption></figure></div><p><strong>*Impact of AI on work*</strong> </p><p>When the GenAI push started, a lot more of my colleagues started working nights and weekends, which was rare (and even discouraged) before then. Managers paid lip service to Adobe's continuing commitment to work-life balance, but in practice that didn't match up with the sense of urgency and the unrealistic deadlines. I'm not aware of anyone who got fired or laid off specifically because of getting replaced by AI, and it looks like teams are still hiring. Although for what it's worth, in general Adobe does not do layoffs these days, but instead they pressure people into quitting by taking work away from them, putting them on PIPs, that kind of thing. </p><p>I found out that a colleague who had been struggling with a simple programming task for over a month—and refusing frequent offers for help—was struggling because they were trying to prompt an LLM for the solution and trying to understand the LLM's irrelevant and poorly-organized output. They could have finished the work in a day or two if they had just asked for help clarifying what they needed to do. I and their other teammates would gladly have provided non-judgmental support if they had asked. </p><p>Our team found out that a software vendor (I can't say which one but it was one of the big companies pushing Agentic AI) was using AI to route our service request tickets. As a result, our tickets were being misclassified, which meant that they were failing to resolve high-priority service disruptions that we had reported. We wasted days on this, if not weeks.</p><p><strong>*My current job*</strong> </p><p>At my current job, I'm basically a combination of programmer and database administrator. I like the work way more than what I did at Adobe. Much like the corporate world, there are a lot of middle and upper managers who want to "extract actionable insights" from data, but lack the information literacy and technical knowledge to understand what they can (or should) ask for. And the people below them are often unwilling to push back on unreasonable expectations. It's very frustrating to explain to executives that the marketing pitches they hear about AI are not reflective of reality. It makes us seem like we're afraid of change, or trying to prevent "progress" and "efficiency." 
</p><p>So I would say the private and public sector have this in common: the higher up you go in the organization, the more enthusiastic people are about "AI,” and the less they understand about the software, and (not coincidentally) the less they understand what their department actually does. And to the extent that workers are opposed to "AI,” they're afraid of organizing, because it feels like executives are looking for reasons to cut staff.</p><h1><strong>“No crypto, no AI”</strong></h1><p><strong>Tech worker.</strong></p><p>So this is sort of an anecdote in the opposite direction of AI taking jobs—in a recent interview process at a mature startup in the travel tech space, part of the offer negotiations were essentially me stating “yeah I don’t want to work here if you expect me to use or produce LLM-based features or products” (this is relevant as the role is staff data scientist, so ostensibly on supply side of AI tooling), and them responding with “yeah if you want to do LLM work this isn’t the place for you.” </p><p>Though my network isn’t extensive, I feel like this is a growing sentiment in the small- and medium-tech space - my primary social media is on a tech-centric instance of the fediverse (<a href="http://hachyderm.io/">hachyderm.io</a>) and more often than not when I see the #GetFediHired hashtag, it’s accompanied by something akin to “no crypto, no AI” (also no Microsoft Teams, but I digress).</p><h1><strong>“Gradual addition of AI to the workplace”</strong></h1><p><strong>Computer programmer.</strong></p><p>Our department has now brought in copilot, and we are being encouraged to use it for writing and reviewing code. Obviously we are told that we need to review the AI outputs, but it is starting to kill my enjoyment for my work; I love the creative problem solving aspect to programming, and now the majority of that work is trying to be passed onto AI, with me as the reviewer of the AI's work. 
This isn't why I joined this career, and it may be why I leave it if it continues to get worse.</p><h1><strong>“my experience with AI at work and how I just want to make it do what I don't want to do myself”</strong></h1><p><strong>Software engineer at a large tech firm.</strong></p><p>All my life, I’ve wanted to be an artist. Any kind of artist. 
I still daydream of a future where I spend my time frolicking in my own creativity while my own work brings me uninterrupted prosperity.</p><p>Yet this has not come to pass, and despite graduate level art degrees, the only income I can find is the result of a second-class coding job for a wildly capitalist company. It’s forty hours a week of the dullest work imaginable, but it means I have time to indulge in wishful thinking and occasionally, a new guitar.</p><div class="pullquote"><p>Real use cases where AI can be used to do work that regular old programming could not are so rare that when I discovered one two weeks ago, I asked for a raise in the same breath as the pitch.</p></div><p>I am experiencing exactly what you describe. There’s been layoffs recently, and my company is investing heavily in AI, even though they’re not sure yet how best to make it do anything that our corporate overlords imagine it should do.</p><p>From the c-level, they push around ideas about how we could <em>code AI to do work</em>, but in reality, those on the ground are only using AI <em>to help write code that does the work</em>, as the code always has. Real use cases where AI can be used to do work that regular old programming could not are so rare that when I discovered one two weeks ago, I asked for a raise in the same breath as the pitch.</p><p>And here I am, five hundred words into this little essay, and I’ve barely touched on AI! Nor have I even touched any of the AI tools that are so proudly thrust into my face to produce this. I’ve played around with AI tools for creative writing, and while they’re good at fixing my most embarrassing grammar errors, none of them have helped me in my effort to bridge the gap between my humble talent as a creative and my aspirations for my effort.</p><p>There’s a meme going on Pinterest that I believe sums up this moment: “We wanted robots to clean the dishes and do our laundry, so we could draw pictures and write stories. Instead they gave us robots to draw pictures and write stories, so we could clean dishes and do laundry.” This feels very true in the sense that human talent is getting valued not for the time it took to gain it and the ingenuity it proves, but for how well it feeds the greed of those who can afford to invest in bulk. But art in capitalism has always been this way, hasn’t it? If we don’t have a patron, we might as well eat our paint, and AI only tightens that grip that the privileged have held us in for centuries.</p><p>I’ve never been so fortunate to consider the work that funds my DoorDash addiction to be my passion’s output, and perhaps that’s why I’m not afraid of what I’ll lose. But it’s that same work that has me sharing notes with fellow programmers, and many of them will say with blunt honesty that they’re worried they’ll be replaced by AI. This is a vulnerability I rarely see from the group of people who often elevated their work as <em>valuable</em> and <em>practical</em>, as opposed to my efforts to learn how to make music and poetry, which were <em>wasteful</em> and <em>useless</em>. But I am like a plant that learned how to grow on rocks and eat insects. In a meeting soon, I’m going to level with them:</p><p><em>Don’t you understand? This work, what we do day in and day out for a soulless organization that drives profit from stealing our essence, </em>this is the laundry!<em> And if they think I’ll just throw that work into a machine and let it do all the work for me, they’re right. 
But it’s a machine that automates the work of running machines that automate the work that people used to do by hand, while constantly stealing glances at the clock, just waiting for the moment when they could be out from under the gaze of some righteous egomaniac. </em></p><p>Maybe this is just the perspective of someone who’s seen her work, of almost any type, get devalued with such regularity that it’s hard to imagine the robots making it any more difficult than it already is. No one’s ever really cared about my Instagram posts. No one pretends that my code will change the world. Perhaps, someday, I’ll make more money while babysitting on the weekends. I spend a lot of time thinking about things that haven’t worked out for me, and for us, as a society, and I think some of our worst failures come from moments when we can’t differentiate between the ability to use machines and our abilities as machines.</p><p>Last week I made a pie for my family, and I obviously didn’t get paid for it. Somewhere off in the offices of the illuminati, an accountant will calculate the value of the oven that baked the crust, the refrigerator that cooled the filling, the bougie pie dish that made my effort look food-blog ready. But there’s no monetary value in the work I did that literally put food on the table, and I rarely, if ever, get paid to perform the music I love, or receive more than pocket change for the short stories I publish. I keep thinking that the solution to both problems exists in some future innovation, but I can’t imagine what that invention would be, and I can’t find proof of a real connection between the two.</p><p>Maybe ChatGPT knows the answer to this riddle? I can throw a penny into our new philosophy vending machine, but I might come up with a better answer myself if I think about it while I unload the dishwasher.</p><p>PS: I didn’t use AI to write this, and didn’t even bother to push it through an AI extruder to check the grammar. I guess I’m just feeling too lazy today to push that button! Have a nice weekend.</p><p><em>[If your job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>.]</em></p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h1><strong>“AI-native high school interns”</strong></h1><p><strong>Fintech worker.</strong></p><p>Hello! I am a tech worker at a fintech.
My workplace has been pushing AI really hard this year.</p><p>Here's the latest thing <a href="https://www.linkedin.com/posts/activity-7333887709163225095-nw-W/">they thought up</a>:</p><p>[Screenshot of the LinkedIn post.]</p>
height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>It 100% feels like testing the waters for just how unqualified and underpaid your workforce can be. Just as long as they can work the shovel of LLM they're good right?</p><p>The children yearn for the (LLM) mines!</p><h1><strong>“CrowdStrike”</strong></h1><p><strong>Current CrowdStrike employee.</strong></p><p>I work for CrowdStrike, the leading cybersecurity company in the United States. As a current employee, I can't reveal specific details about myself.</p><p>As you may have heard, CrowdStrike laid off 500 employees on May 7th, 2025. These were not underperformers. Many of them were relatively new hires. This action was presented as a strategic realignment with a special focus in "doubling down on our highest-impact opportunities," to quote CEO George Kurtz.</p><p>In the internal email, he states further:</p><blockquote><p><strong>AI investments accelerate execution and efficiency: AI has always been foundational to how we operate. AI flattens our hiring curve, and helps us innovate from idea to product faster. It streamlines go-to-market, improves customer outcomes, and drives efficiencies across both the front and back office. AI is a force multiplier throughout the business.</strong></p></blockquote><p>So, AI has literally killed many jobs at CrowdStrike this week. I'm fortunate to be among the survivors, but I don't know for how long.</p><p>Generative AI, particularly LLMs, is permeating every aspect of the company. It's in our internal chats. It's integrated into our note-taking tools. It's being used in triage, analysis, engineering, and customer communications. Every week, I'm pinged in an announcement that some new AI capability has been rolled out to me and that I am expected to make use of it. Customers who are paying for live human service packages from us are increasingly getting the output of an LLM instead. Quality Assurance reviewers have started criticizing reviewees for failing to run things through AI tools for things as trivial as spelling and grammar. Check out the <a href="https://www.crowdstrike.com/en-us/">front page</a> and count the number of times "AI" is mentioned. It didn't used to be like this.</p><p>CrowdStrike is currently achieving record financials. At the time I write this, CRWD is trading at $428.63 in striking range of the stock's 52-week high. The efforts of my colleagues and I to rebuild from the incident of July 19, 2024 have been rewarded with shareholder approval and 500 layoffs. 
Some of the impacted individuals were recent graduates of four-year programs who, in addition to student loans, have moving expenses because they physically relocated to Texas shortly before this RIF occurred.</p><p>Many lower-level employees at CrowdStrike are big fans of generative AI; as techy people in a techy job, they fit the bill for that. Even so, many of them have become wary… of what increased AI adoption means for them and their colleagues. Some of the enthusiastic among them are beginning to realize that they're training the means of additional layoffs—perhaps their own.</p><p>CrowdStrikers have been encouraged to handle the additional per capita workload by simply working harder and sometimes working longer for no additional compensation on either count. While our Machine Learning systems continue to perform with excellence, I have yet to be convinced that our usage of genAI has been productive, given the proofreading, troubleshooting, and general babysitting it requires. Some of the genAI tools we have available to us are just completely useless. Several of the LLMs have produced inaccuracies that have been uncritically communicated to our customers by CrowdStrikers who failed to exhibit due diligence. Those errors were caught by said customers, and they were embarrassing to us all.</p>
srcset="https://substackcdn.com/image/fetch/$s_!i_OY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dce8950-7270-43bd-a8d9-d655074c5417_1054x514.png 424w, https://substackcdn.com/image/fetch/$s_!i_OY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dce8950-7270-43bd-a8d9-d655074c5417_1054x514.png 848w, https://substackcdn.com/image/fetch/$s_!i_OY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dce8950-7270-43bd-a8d9-d655074c5417_1054x514.png 1272w, https://substackcdn.com/image/fetch/$s_!i_OY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dce8950-7270-43bd-a8d9-d655074c5417_1054x514.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">CNBC, screenshot. <a href="https://www.cnbc.com/2025/05/07/crowdstrike-announces-5percent-job-cuts-says-ai-reshaping-every-industry.html">Source</a>.</figcaption></figure></div><p>I would stop short of saying that the existence of genAI tools within the company is directly increasing the per capita workload, but an argument could be made of it indirectly accomplishing that. The net result is not a lightening of the load as has been so often promised.</p><p>Morale is at an all-time low. Many survivors have already started investigating their options to leave either on their own terms or whenever the executives inevitably decide an LLM is adequate enough to approximately replace us.</p><p>The company is very proud of its <a href="https://www.crowdstrike.com/en-us/careers/#:~:text=Our%20workplace%20recognitions">recognitions</a> as an employer. As CrowdStrikers, we used to be proud of it too. Now we just feel betrayed.</p><h1><strong>“Coding assistants push”</strong></h1><p><strong>Software engineer, health tech startup.</strong></p><p>I work as a software engineer and we've been getting a push to adopt AI coding assistants in the last few months. I tried it, mostly to be able to critique, and found it super annoying, so I just stopped using it. But I'm starting to get worried. Our CEO just posted this in an internal AI dedicated Slack channel. The second message is particularly concerning.</p><p>[It’s a screenshot of a message containing a comment from another developer. 
It reads:]</p><blockquote><p>"I am sufficiently AI-pilled to think that if you aren't using agentic coding tools, then you are the problem. They are good enough now that it's a skills issue. Almost everyone not using them will be unemployed in 2 years and won't know why (since they're the ones on Hacker News saying "these tools never work for me!" and it turns out they are using very bad prompts and are super defensive about it)."</p></blockquote><p>We had some layoffs long before this AI wave and the company has not picked up the pace in terms of hiring since. I'm sure now they're thinking twice before hiring anyone though. The biggest change was in how management is enthusiastically incentivizing us to start using AI. First they offered coding assistants for everyone to use, then the hackdays we had every semester turned into a week-long hackathon specifically focused on AI projects.</p><p>Now we have an engineer, if you can call him that, working on a project that will introduce more than 30k lines of AI-generated code into our codebase, without a single unit test. It will be impossible to do a proper code review on this much code and it will become a maintenance nightmare and possibly a security hazard. I don't need to tell you how much management is cheering that on.</p><h1><strong>“My job hasn't been killed, yet”</strong></h1><p><strong>Front-end software engineer at a major software company.</strong></p><p>My job hasn't been killed yet, but there's definitely a possibility that it could be soon. I work for a major software company as a front-end software engineer. I believe that there's been AI-related development for about a year and a half. It's a little hard to nail down exactly because I'm one of the few remaining US-based developers and the majority of our engineering department is in India. The teams are pretty siloed and the day-to-day of who's on what teams and what they're doing is pretty opaque. There's been a pretty steady increase in desire and pressure to start using AI tools for a while now. As a result, timelines have been getting increasingly short, as has the patience of upper management. They've tried to create tools that would help with some of the day-to-day repeatable UI pieces that I work on, but the results were unusable from my end and I found that I could create them on my own in the same amount of time.</p><div class="pullquote"><p>The agents themselves had names and AI-generated profile pictures of minorities that aren't actually represented in the upper levels of the company, which I find kind of gross.</p></div><p>Around October/November of last year, the CEO and President (who's the former head of Product) decided to go all-in on AI development and integrate it into all aspects of our business. Not just engineering, but all departments (Sales, Customer Operations, People Operations, etc.). I don't get a ton of insight into other departments, other than I've heard that Customer Ops is hemorrhaging people and that People Ops sent an email touting that we could now use AI to write recognition messages to each other celebrating workplace successes (insulting and somewhat dystopian).
On the engineering side, I think initially there was a push to be an AI leader in supply chain, so there were a lot of training courses, hackathons and (for India) AI-focused off-sites where they wanted to get broad adoption of AI tools and ideas for products that we could use AI in.</p><p>Then in February, the CEO declared that what we have been doing is no longer a growth business and that we were introducing an AI control tower and agents, effectively making us an AI-first company. The agents themselves had names and AI-generated profile pictures of minorities that aren't actually represented in the upper levels of the company, which I find kind of gross. Since then, the CEO has been pretty insistent about AI in every communication and therefore there's increased downward pressure to use it everywhere. He has never been as involved in the day-to-day workings of the company as he has been with AI. Most consequentially, somewhere he has gotten the idea that because code can now be generated in a matter of minutes, whole SaaS<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> applications, like the ones we've been developing for years, can be built in a matter of days. He's read all these hype articles declaring a 60-75% increase in engineering productivity. I guess there's a competitor in one of our verticals that has just come on the scene and done basically what our app can do, but with more functionality. A number of things could explain this, but the conclusion has been that they used AI and made our app in a month. So ever since then, it's been a relentless stream of pressure to fully use AI everywhere to "improve efficiency" and get things out as fast as possible. They've started mandating that we track AI usage in our JIRA stories<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>, the CEO has led Engineering all-hands<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> (he has no engineering background), and now he is mandating that we go from idea to release in a single sprint (2 weeks) or be able to explain why we're not able to meet that goal.</p><p>I've been working under increasingly compressed deadlines for about a year and am pretty burned out right now, and we haven't even started pushing the AI warp speed churn that they've proposed recently. It's been pretty well documented how inaccurate and insecure these LLMs are and, for me, it seems like we're on a pretty self-destructive path here. We ostensibly do have a company AI code of conduct, but I don't know how this proposed shift in engineering priority doesn't break every guideline. I'm not the greatest developer in the world, but I try to write solid code that works, so I've been very resistant to using LLMs in code. I want my work to be reliable and understandable in case it does need to be fixed. I don't have time to mess around and go down rabbit holes that the code chatbots would inevitably send me down. So I foresee the major bugs and outages just skyrocketing under this new status quo. How they pitch it to us is that we can generate the code fast and have plenty of time to think about architecture, keep a good work/life balance, etc.
</p><p>But in practice, we will be under the gun of an endless stream of two-week deadlines and management that won't be happy with how long everything takes or the quality of the output. The people making these decisions love the speed of code generation but never consider accuracy, or how big a problem even small errors become when perpetuated at scale. No one else is speaking up about these dangers, but I feel like if I do (well, more loudly than just to immediate low-level managers), I'll be let go. It's pretty disheartening and I would love to leave, but of course it's hard to find another job competing with all the other talented folks who have been let go through all this. Working in software development for so long and seeing so many colleagues accept that we are just prompt generators banging out substandard products has been rough. I'm imagining this must be kind of what it feels like to be in a zombie movie. I'm not sure how this all turns out, but it doesn't look great at the moment.</p><p>The one funny anecdote during all this AI insanity is that they had someone from GitHub do a live demo presentation on Copilot and the agents. Not only was everything he demoed either unreliable or underwhelming, but he <em>could not stop</em> <em>yawning</em> loudly during his own presentation. Even the AI champions are tired.</p><p><em><strong>Less than a month later, the engineer emailed me a follow-up.</strong></em></p><p>And I just got laid off yesterday. The reason cited was that they need full-stack developers and want engineers based in India, not performance. My front-end-focused position was rendered obsolete. Very plausible since they definitely prefer hiring young and less expensive developers abroad. So AI is not technically the direct cause, but definitely a factor in the background. They'll hire a bunch of new graduates to churn out whatever AI solutions they think they can hype. Annoyingly, they did announce two new AI agents yesterday, again with faces and names of women. The positive is that they did give me a decent severance, so in the short term I'm fine financially, and I also don't have to deal with the pressure of ridiculous deadlines.</p><h1><strong>“AI experience”</strong></h1><p><strong>Edtech worker.</strong></p><p>I work for a small edtech startup and do all of our marketing, communications, and social media. I've always enjoyed doing our ed policy newsletter and other writing-related projects. My boss absolutely loves AI, but until recently I'd been able to avoid it. A few weeks ago, my boss let me know that all of my content writing would now be done on ChatGPT so I would have more time to work on other projects. He also wants me to use AI to generate images of students, which I've luckily been able to push back on. </p><p>Although he says it's a time saver, I don't actually have other projects, so not only am I creating complete slop, but I'm also left with large amounts of time to do nothing. Being forced to use AI has turned a job I liked into something I dread. As someone with a journalism background, I find it insulting to use AI instead of creating quality blog posts about education policy.
Unfortunately, as a recent grad, I haven't had much luck finding another job despite applying to hundreds, so for now I have to make do with the situation, but I will say that having to use AI is making me reconsider where I'm working.</p><h1><strong>“AI makes everything worse”</strong></h1><p><strong>Senior developer at a cloud company.</strong></p><p>I work for a cloud service provider (who will retaliate if you don't post this anonymously, unfortunately), and they're absolutely desperate for the current AI fad to be useful for something. </p><p>They're completely ignoring the environmental costs (insane power requirements, draining lakes of freshwater for cooling, burning untold CPU and GPU hours that could be dedicated to something useful instead) because there's a buck to be made. They hope. But they're still greenwashing the company, of course. </p><p>For cloud companies, AI is a gold rush; until the bubble bursts, they can sell ridiculous amounts of expensive server time (lots and lots of CPU/GPU/memory/storage) and tons of traffic to and from the models. They're selling shovels to the gold miners, and are in a great position to charge rent if someone strikes a vein of usefulness. </p><div class="pullquote"><p>I can see a scenario coming fast that's going to set back software development by years</p></div><p>But they're desperate for this to keep going. They're demanding we use AI for literally everything in our jobs. Our managers want to know what we're using AI for and what AI "innovations" we've come up with. If we're not using AI for everything, they want to know why not. I don't think we're quite at the point of this being part of our performance evaluations, but the company is famously opaque about that, so who knows. It's certainly something the employees worry about. </p><p>My work involves standards compliance. Using AI for any part of it will literally double our workload because we'll have to get it to do the thing, and then carefully review and edit the output for accuracy. You can't do compliance work with vibes. What's the point of burning resources to summarize things when you need to review the original and then the output for accuracy anyway? </p><p>I can see a scenario coming fast that's going to set back software development by years (decades? who knows!): </p><ul><li><p>C-suite: we don't need these expensive senior developers, interns can code with AI </p></li><li><p>C-suite: we don't need these expensive security developers, AI can find the problems </p></li><li><p>senior developers are laid off, or quit due to terrible working conditions (we're already seeing this) </p></li><li><p>they're replaced with junior developers, fresh out of school... 
cheap, with no sense of work-life balance, and no families to distract them </p></li><li><p>all the vibe coding goes straight to production because, obviously, we trust the AI and don't know any better; also we've been told to use AI for everything </p></li><li><p>at some point, all the bugs and security vulnerabilities make everything so bad it actually starts impacting the bottom line </p></li><li><p>uh oh, the vibe coders never progressed beyond junior skill levels, so nobody can do the code reviews, nobody can find and fix the security problems </p></li><li><p>if all the fired senior developers haven't retired or found other jobs (a lot of these people want to get out of tech, because big tech has made everything terrible), they'll need to be hired back, hopefully at massive premiums due to demand</p></li></ul><p>If these tools were generally useful, they wouldn't need to force them on us; we'd be picking them up and running with them.</p><div class="pullquote"><p>At first, the exec’s AI speech was greeted by the typical heart-eyes and confetti emojis, but then I saw there were a few thumbs-down emojis thrown into the mix. This was shocking enough on its own, but then the thumbs-downs multiplied, tens and hundreds of them appearing on the screen, making those few little confettis seem weak and pathetic. I was already floored at this point, and then someone posted the first tomato…</p></div><h1><strong>“then came the tomatoes”</strong></h1><p><strong>Tech worker at a well-known tech company.</strong></p><p>I work at a fairly well-known tech company currently trying to transform itself from a respected, healthy brand to a win-at-all-costs hyperscaler. The result has mostly been a lot of bullshit marketing promises pegged to vaporware, abrupt shifts in strategy that are never explained, and most of all, the rapid degradation of a once healthy, candid corporate culture into one that is intolerant of dissent, enforces constant positivity, and just this week, ominously announced that we are “shifting to a high-performance culture.”</p><p>The company leadership also recently (belatedly) declared that “we are going all in on AI.” </p><p>I don’t use AI. I morally object to it, for reasons I hardly need to explain to you. And now I feel like I’m hiding in plain sight, terrified someone will notice I’m actually doing all my own work.</p><p>We’re hiring for new roles and have been explicitly told that no candidate will be considered for *any* job unless they’re on board with AI. Every department has to show how they’re “incorporating AI into their workflows.” I heard through the grapevine that anyone so much as expressing skepticism “does not have a future with the company.”</p><p>It is pretty bleak. I’d leave, but I keep hearing it’s the same everywhere.</p><p>But then something insane happened.</p><p>At the most recent company all-hands, typically the site of the most painful sycophancy, one of our executives gave a speech formally announcing our big AI gambit. The meeting is so big that there is no Zoom chat, so people can only directly react via emojis. At first, the exec’s AI speech was greeted by the typical heart-eyes and confetti emojis, but then I saw there were a few thumbs-down emojis thrown into the mix. This was shocking enough on its own, but then the thumbs-downs multiplied, tens and hundreds of them appearing on the screen, making those few little confettis seem weak and pathetic. I was already floored at this point, and then someone posted the first tomato. 
It caught on like wildfire until there was wave after wave of virtual tomatoes being thrown at the executive’s head—a mass outcry against being forced to embrace AI at gunpoint. He tried to keep going, but his eyes kept darting, in increasing panic, to the corner of his screen where the emojis appeared.</p><p>It was goddamn inspiring. And while the executives didn’t immediately abandon all their AI plans, they are definitely shaken by what happened, and nervous about mass dissent. As they should be.</p>
<div><hr></div><p><em>Thanks again to every tech worker who shared their story with me, whether it was included here or not—and to </em>every <em>worker who has written in to <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>, period. I intend to produce the next installment in the coming weeks, so subscribe below if that’s of interest. And if you’d like to support this work, and receive the paywalled Critical AI reports and special commentary, please consider becoming a paid subscriber. My wonderful paid subscribers are the only reason I am able to do any of this. A million thanks. </em></p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p><em>Finally, one more time with feeling: If your job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>. If you’re willing to participate in an on-camera interview, contact us at <a href="mailto:aikilledmyjob@perfectunion.us">AIkilledmyjob@perfectunion.us</a>. Thanks everyone—until next time. 
</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The title is somewhat tongue-in-cheek; we recognize that <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-apocalypse-is-for-the">AI is not sentient</a>, that it’s <a href="https://gizmodo.com/robots-are-not-coming-for-your-job-management-is-1835127820">management, not AI, that fires people</a>, but also that there are many ways that AI can “kill” a job, by sapping the pleasure one derives from work, draining it of skill and expertise, or otherwise subjecting it to degradation.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The two footnotes in this account were provided by the worker, and are presented exactly as shared. The first is this link: https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>like how Google Docs signals typos, an example: <a href="https://superuser.com/questions/1796376/getting-rid-of-the-red-squiggly-underline-on-errors">https://superuser.com/questions/1796376/getting-rid-of-the-red-squiggly-underline-on-errors</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Editor’s note: RIF is “reduction in force.”</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Ed: Chief Marketing Officer.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Ed: PIP stands for Performance Improvement Plan — in tech, getting a PIP, or PIPd, is like getting an official warning that you’re underperforming and thus more likely to get terminated. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Ed: Software as a service.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Ed: <a href="https://www.atlassian.com/software/jira">Jira</a> is project management and bug-tracking software.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Ed: All-hands are meetings where everyone from the entire company or department are required to attend.</p></div></div>The Black Box Myth - Cybernetic Forests6853eb260bdcf000019fed912025-06-22T11:10:33.000Z<img src="https://mail.cyberneticforests.com/content/images/2025/06/black-box.jpg" alt="The Black Box Myth"><p>If you missed it, my piece in Tech Policy Press tackles the relationship the AI industry has with uncertainty: using what they don't know about AI as evidence that they're right, or that it should be feared, or that it can't be properly regulated. </p><p>The "Black Box" myth is often used to convince us that the model is "deciding" things through mysterious logics or reasoning – when the "black box" refers specifically to the cascading effects of flipping numbers in a neural net. We know why they write the text they do: it's the data. We just don't know what the precise mechanism is. While that's valid for research, the industry smudges the mystery into self-mythology. 
</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.techpolicy.press/the-black-box-myth-what-the-industry-pretends-not-to-know-about-ai/?ref=mail.cyberneticforests.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">The Black Box Myth: What the Industry Pretends Not to Know About AI | TechPolicy.Press</div><div class="kg-bookmark-description">Tech Policy Press fellow Eryk Salvaggio says it’s a problem is that those of us outside of the AI industry don’t know what rules they are following.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://mail.cyberneticforests.com/content/images/icon/9a2224d300c1699fc1b87235aac36318e2c76cec-867x867-4.png" alt="The Black Box Myth"><span class="kg-bookmark-author">Tech Policy Press</span><span class="kg-bookmark-publisher">Eryk Salvaggio</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://mail.cyberneticforests.com/content/images/thumbnail/2a004af276d04d2a9e9362396b64064e4bc0db3c-1200x675.png" alt="The Black Box Myth" onerror="this.style.display = 'none'"></div></a></figure><div class="kg-card kg-button-card kg-align-center"><a href="https://www.techpolicy.press/the-black-box-myth-what-the-industry-pretends-not-to-know-about-ai/?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Read the Whole Piece</a></div><hr><h2 id="upcoming-in-person-events-june-july-rome-melbourne">Upcoming In-Person Events, <br>June & July :: Rome & Melbourne </h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/06/networks.png" class="kg-image" alt="The Black Box Myth" loading="lazy" width="2000" height="1722" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/networks.png 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/06/networks.png 1000w, https://mail.cyberneticforests.com/content/images/size/w1600/2025/06/networks.png 1600w, https://mail.cyberneticforests.com/content/images/size/w2400/2025/06/networks.png 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A composite image combining diagrams from the 1960 Mark I Perceptron operator's manual and a 1969 Arpanet map of the Internet.</span></figcaption></figure><h3 id="rome-26-june-is-ai-art-net-art"><strong>Rome, 26 June: Is AI Art Net Art?</strong></h3><p><strong>w/ Vladan Joler & Valentina Tanni</strong><br><em>@ Bibliotheca Hertziana, Rome</em></p><p>Gen AI’s images are a distillation of the internet, inheriting the text categories assigned to images alongside the images themselves. How do artists work with, or resist, these competing systems of powers, logics, and communication?</p><p>In this exploratory discussion, <a href="https://anatomyof.ai/?ref=mail.cyberneticforests.com" rel="noreferrer">Vladan Joler</a>, <a href="https://www.valentinatanni.com/?ref=mail.cyberneticforests.com" rel="noreferrer">Valentina Tanni</a> and Eryk Salvaggio will examine how AI has changed the way we imagine and frame these systems today. We look to Internet Art, a movement anchored in critique and resistance, to find paths relevant to critical artistic engagement with AI and uncover what has been elided from the movement from net.art to AI art. 
Free & open to the public!</p><div class="kg-card kg-button-card kg-align-center"><a href="https://www.biblhertz.it/events/42198/2206?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Details</a></div><hr><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg" class="kg-image" alt="The Black Box Myth" loading="lazy" width="1000" height="562" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg 600w, https://mail.cyberneticforests.com/content/images/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg 1000w" sizes="(min-width: 720px) 720px"></figure><h3 id="melbourne-july-3-human-movie-performance"><strong>Melbourne, July 3: <em>Human Movie</em> (Performance!)</strong></h3><p><strong>w/ JODI (NL, BE) & Debris Facility Pty Ltd (AUS)</strong><br><em>@ Club Miscellania, Melbourne</em></p><p>I'll perform <em>Human Movie</em> as part of a series of performances including the net.art legends JODI and the Australian "para-corporate and parasitic entity," Debris Facility Pty Ltd. Open to the public, details below!</p><div class="kg-card kg-button-card kg-align-center"><a href="https://events.humanitix.com/network-anarchy?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Get Tickets</a></div><hr><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/06/Network-anarchy-and-unstable-diffusions.png" class="kg-image" alt="The Black Box Myth" loading="lazy" width="1280" height="720" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/Network-anarchy-and-unstable-diffusions.png 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/06/Network-anarchy-and-unstable-diffusions.png 1000w, https://mail.cyberneticforests.com/content/images/2025/06/Network-anarchy-and-unstable-diffusions.png 1280w" sizes="(min-width: 720px) 720px"></figure><h3 id="melbourne-7-8-july-noisy-joints-embodying-the-ai-glitch"><strong>Melbourne, 7-8 July: Noisy Joints: Embodying the AI Glitch</strong></h3><p><strong>w/ Camila Galaz</strong><br><em>@ RMIT Media Portal, Deakin Downtown, Melbourne</em></p><p>The entire conference is going to be great. Here's our part: </p><p>Artists and researchers Eryk Salvaggio and <a href="https://www.camilagalaz.com/?ref=mail.cyberneticforests.com" rel="noreferrer">Camila Galaz</a> present a participatory workshop on interrupting and reframing the outputs of generative AI systems. Drawing from a critical AI puppetry workshop originally developed at the Mercury Store in Brooklyn, New York, <em>Noisy Joints</em> invites participants to think through the body—its categorisation, misrecognition, and noise—within AI image-generation systems. How do our physical movements interact with machine perception? How can choreographies of shadow, gesture, and failure unsettle the logic of automated categorisation?</p><p>Across the session, participants will explore these questions through short talks, collaborative video-making, glitch-puppetry exercises, and experimental use of tools like Runway’s GEN3 model. Using shadows, projections, and improvised movement, the workshop will trace a playful and critical path through the interfaces and assumptions that shape AI perception. 
No technical experience is required.</p><p><em>Convened by Joel Stern (RMIT), Thao Phan (ANU), and Christopher O’Neill (Deakin).</em></p><div class="kg-card kg-button-card kg-align-center"><a href="https://www.admscentre.org.au/event/network-anarchy-and-unstable-diffusions/?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">More Details</a></div>A moral crusade against AI takes shape - Blood in the Machinehttps://www.bloodinthemachine.com/p/a-moral-crusade-against-ai-takes2025-06-22T10:02:24.000Z<p>Greetings all, </p><p>Hope everyone’s hanging in there during [whatever fresh horror has most recently arisen as you are reading this]. Seeing as how we now horrifically teeter on the brink of all-out war, I almost gave up on publishing this edition altogether. But there’s just so much going on in the AI world, and serious new currents that bear examination. </p><p>To wit: The New Pope has come out swinging against AI, just as more disturbing and detailed stories of chatbots exacerbating mental health crises are surfacing, and a new scientific paper has emerged that, if it checks out, confirms some of our worst fears about AI’s impacts on human cognition. Is Leo beginning a moral crusade against AI—should he? We’ll get to it all in this week’s Critical AI report. </p><p>A quick note: The full post here is for paying subscribers. I’ve been light on the paid-only stuff lately, mostly because I’ve been focusing on longer-form reportage, AI Killed My Job interviews, and some other stuff in the works. Thanks to everyone for bearing with, and many, <em>many</em> thanks to all who chip in to support this work; I’m truly grateful to all of you. If you find value in it, and you read regularly—and you’re able—consider upgrading, too. Simply put, paid subscribers make all this work possible.</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h2>How the new pope came to take on AI</h2><p>The first-ever American pope dubbed himself Leo, after the previous Leo, XIII, who got himself nicknamed “Pope of the Workers” for advocating for labor unions and critiquing industrial capitalism back in the 19th century. The American Pope Leo XIV revealed he’d claimed the mantle to declare his intent to advocate for human and worker rights in the age of AI.</p><p>The <a href="https://www.wsj.com/tech/ai/pope-leo-ai-tech-771cca48?st=cwREcv">Wall Street Journal published a deep dive</a> into how Leo XIV arrived here, how he’s in many ways following a lead staked out by Francis, and why he is, essentially, our first day-one papal tech critic. 
</p>
<p>From the story:</p><blockquote><p>VATICAN CITY—Two days into his reign, the new American pope spoke softly to a hall full of red-capped cardinals and invoked the digital-age challenge to human dignity he intended to address with the power of his 2,000-year-old office: artificial intelligence.</p><p>The princes of the Catholic Church listened intently as Pope Leo XIV laid out his priorities for the first time, revealing that he had chosen his papal name because of the tech revolution. As he explained, his namesake Leo XIII stood up for the rights of factory workers during the Gilded Age, when industrial robber barons presided over rapid change and extreme inequality.</p><p>“Today, the church offers its trove of social teaching to respond to another industrial revolution and to innovations in the field of artificial intelligence that pose challenges to human dignity, justice and labor,” Leo XIV told the College of Cardinals, who stood and cheered for their new pontiff and his unlikely cause.</p></blockquote><p>Interesting details abound here, both about the history of Leo XIII, who really did inveigh against the excesses of industrial capitalism and make crucial strides in driving the church to advocate for suffering workers, and about Francis’s evolution in his thinking about Silicon Valley and modern tech. </p><blockquote><p>In a groundbreaking 1891 encyclical called Rerum Novarum—Latin for “Of New Things”—Leo XIII backed calls for labor unions, living wages and safer working conditions. But he also affirmed the right to private enterprise and property.</p><p>“The hiring of labor and the conduct of trade are concentrated in the hands of comparatively few; so that a small number of very rich men have been able to lay upon the teeming masses of the laboring poor a yoke little better than that of slavery itself,” Leo XIII wrote.</p></blockquote><p>By invoking Leo—about as radical an icon on the labor front as a pope can get, it seems—the new American Leo appears to be preparing to affix his legacy to addressing AI. And that’s at least in part because his predecessor, Pope Francis, had soured so deeply on Silicon Valley and big tech. Francis started out his term sanguine, embracing tech like Twitter and Snapchat and welcoming Valley honchos to the Vatican. 
But as the Valley’s excesses became clearer—the mass slaughter of the Rohingya in Myanmar, spurred on by viral Facebook posts that the company declined to address, seemed to be a tipping point—Francis’s outlook darkened accordingly, and he began calling for governments to step in.</p><blockquote><p>In one meeting, Francis compared AI’s dependency on web content produced in a few languages—largely English—to the story of the Tower of Babel, in which overconfident humans, speaking a single tongue, attempted to build a tower to heaven, prompting God to scatter them and sabotage their creation.</p><p>During another meeting, the pope warned tech leaders against believing that they understood humans just because their data could predict human behavior: “You miss their humanity because you cannot reduce the human being to its data,” Father Salobir recalled…. Later that year, Francis warned of a “technological dictatorship” and called on governments to develop a legally binding international treaty to regulate AI.</p></blockquote><p>So the new pope names himself Leo, after a staunch critic of industrialization and a proponent of workers’ rights. (Influenced indirectly or otherwise, perhaps, by <a href="https://www.bloodinthemachine.com/p/one-year-of-blood-in-the-machine">the legacy of the Luddites</a>.) If Francis built his reputation on advocating for the poor and calling for action to address climate change, it seems Leo is poised to make a similar move with regards to AI. Time will tell. Regardless, tech leaders are now desperately trying to set meetings with the Vatican to plead their case to the pope, in hopes that they can limit the chances that he’ll mount any kind of sustained campaign against them or their most-hyped product category. </p><p>So far, Leo is holding fast. On Friday, June 20th, the <a href="https://www.vaticannews.va/en/pope/news/2025-06/pope-leo-on-ai-exceptional-tool-but-cannot-forget-human-dignity.html">Vatican News published Leo’s comments</a> on AI delivered at the second annual Rome Conference on Artificial Intelligence:</p><blockquote><p>"All of us, I am sure, are concerned for children and young people, and the possible consequences of the use of AI on their intellectual and neurological development."</p><p>"Our youth," he insisted, "must be helped, and not hindered, in their journey towards maturity and true responsibility," underscoring they "are our hope for the future."</p><p>"Society’s well-being," he also underscored, "depends upon their being given the ability to develop their God-given gifts and capabilities, and to respond to the demands of the times and the needs of others with a free and generous spirit."</p></blockquote><p>These comments could not be more timely, given the fresh round of reports of the malign impact AI is having on young people, education, and cognitive development. More on that in a second.</p><p>To me, the pope’s comments throw into relief the relative lack of institutional moral leadership over AI’s risks, harms, and excesses. Our most mainstream news orgs are all but cheerleading the ascent of companies like OpenAI and their products. Higher education is too often content to blithely charge ahead—the California State University system <a href="https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powered-Initiative.aspx">rushed to ink a deal</a> to become the “nation’s first and largest AI-empowered university system.” Colleges like Ohio State have followed suit. 
The US Congress is trying to pass a law that would <em>ban</em> states from regulating or legislating AI at all. The White House wants no guardrails whatsoever, and has vowed to see the American AI industry dominate. </p><p>All of this while the full scale of the moral crisis that AI stands to beget could not be clearer….</p>
<p><a href="https://www.bloodinthemachine.com/p/a-moral-crusade-against-ai-takes">Read more</a></p>
This is the gentle singularity? - Blood in the Machinehttps://www.bloodinthemachine.com/p/this-is-the-gentle-singularity2025-06-18T23:49:48.000Z<p>When Sam Altman published his latest blog post <a href="https://blog.samaltman.com/the-gentle-singularity">“A Gentle Singularity”,</a> my first thought was, ‘ok so how much is OpenAI trying to fundraise this time?’ It was a half-assed joke to myself, not initially intended for public consumption, an occupational hazard of spending too much time observing the AI industry. See, Altman has a habit of making grandiose statements about the transformative power of his company’s technology (which he knows will be picked up by the tech media) whenever there is <a href="https://www.bloodinthemachine.com/p/welcome-to-the-age-of-ai-generated">an express financial incentive for him to do so</a>. </p><p>It’s a pattern stretching back years, one <a href="https://ainowinstitute.org/publications/ai-generated-business">I’ve documented at length before</a>. When OpenAI needs an infusion of cash, or wants to seal a deal, out come the promises of AGI. Just last February, Altman published <a href="https://blog.samaltman.com/three-observations">“Three Observations,”</a> the final of which was “the socioeconomic value of linearly increasing intelligence is super-exponential in nature.”<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> That turned out to be a rather direct entreaty to Softbank, which was at that very time considering leading an enormous investment round in OpenAI, to pull the trigger. It was ultimately a successful one, too: Softbank inked a deal promising to deliver $40 billion for the AI company. But that was just a few months ago. Altman couldn’t be going back to the well so soon, so transparently, could he?</p><p><em>A quick message: BLOOD IN THE MACHINE is 100% reader-supported and made possible by my incomparable paying subscribers. I’m able to keep the vast majority of my work free to read and open to all thanks to that support. If you can, for the cost of a coffee a month, consider helping me keep this thing running. Thanks everyone. Onwards.</em></p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>Of course he could. Proving that you can rarely be too cynical when considering the motives of OpenAI executives, it was soon revealed that the instincts behind my personal in-joke were correct, and Altman was already indeed gunning for more billions from investors. Just two days after Altman published “A Gentle Singularity,” <a href="https://www.theinformation.com/articles/openai-discussed-raising-money-saudi-arabia-indian-investors">the Information reported</a> that OpenAI had <a href="https://www.newarab.com/news/openai-has-discussed-raising-money-saudi-arabia-investors">entered into talks</a> with Saudi Arabia’s Public Investment Fund and the United Arab Emirates’ MGX fund to help fill out OpenAI’s next funding round. There is always, you see, a next funding round to be filled out. </p><p>The gist of Altman’s latest mini-manifesto is that, whether you can feel the AGI or not, we are already in the midst of the early stages of inexorable, AI-made utopia, thanks to OpenAI’s software products. 
“We are past the event horizon; the takeoff has started,” Altman writes. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”</p><p>To the extent that Altman’s manifesto is “about” anything other than showing the Saudis and Emiratis that he <a href="https://techcrunch.com/2025/06/11/sam-altman-thinks-ai-will-have-novel-insights-next-year/">can generate headlines</a> <a href="https://www.forbes.com/sites/lanceeliot/2025/06/11/sam-altman-says-ai-has-already-gone-past-the-event-horizon-but-no-worries-since-agi-and-asi-will-be-a-gentle-singularity/">in the press</a> about his company at will, it’s a rejoinder to his chief competitor, Anthropic CEO and former OpenAI executive Dario Amodei, who made his own splash in the news cycle a couple weeks back when he <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-apocalypse-is-for-the">warned that AI would soon be taking millions of jobs</a>. </p><p>Sam Altman’s position on that matter has been rather fluid—as ChatGPT burst onto the scene two years ago, he <a href="https://www.wsj.com/tech/ai/tech-leaders-say-ai-will-change-what-it-means-to-have-a-job-2dd556fb">warned of mass disruptive job loss</a>, calling for a <a href="https://www.bloomberg.com/news/articles/2024-07-22/ubi-study-backed-by-openai-s-sam-altman-bolsters-support-for-basic-income">universal basic income program</a>; more recently he has been articulating the more palatable position that actually the changes will be less destructive, and might not be all that <em>noticeable</em> really. Some jobs will come and others will go, and the future will look like the good parts of the present just with a new layer of super-productivity grafted on top. This is, essentially, Altman’s “gentle singularity.” (The singularity is a concept derived from mathematics and embraced by science fiction authors and tech industry folks, and is now generally understood to describe the moment when technological progress becomes so rapid that it becomes uncontrollable by humans.) </p><p>The new positioning offers a raft of advantages—it assures early OpenAI investors, clients, and backers that the major technological changes they bought into are here, even if they can’t quite feel them personally or detect them in their companies’ bottom lines yet. It suggests that those changes still stand to be profound, but no one has to really do much to prepare for them, aside from purchasing AI products. </p><p>Now, what Sam Altman <em>actually</em> believes is immaterial—or rather, it’s 100% material, in that it is entirely aligned at any given time with what will maximize investor buy-in. The “gentle” framing seems designed to promote OpenAI as the friendlier neighborhood job automator, signaling to corporations interested in AI enterprise products that OpenAI will treat such matters more diplomatically than the other AI companies in town. “A Gentle Singularity” is supposed to be something all reasonable future-forward people can get behind, our <a href="https://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace">machines of loving grace</a> harmoniously in tow. </p><p>It’s not. It may not be Altman’s <em>worst</em> fabrication yet, but it may be the most insulting. 
It’s not just that Sam Altman’s touched-and-humble reluctant-prophet schtick is wearing impossibly thin; it’s the audacity of declaring a “gentle” singularity in service of soliciting funds from a nation that <a href="https://www.theguardian.com/world/2025/jun/18/saudi-arabia-turki-al-jasser-executed">executes dissident journalists</a>, even as his company grafts itself onto Donald Trump’s Department of Defense on the brink of all-out war. Altman wants us to look out the window and be assured that <em>this</em> is the gentle singularity?</p><p>I’ll elaborate. Let’s take quick stock of what else has happened in just the week since Altman published his article, and bear in mind that this is all happening in a world in which the gentle AI revolution is underway <em>right now</em>: </p><p><strong>-OpenAI has sought out funding from Saudi Arabia and the United Arab Emirates</strong>, two regimes with some of the worst records on human rights in the world. In particular, Saudi Arabia’s Public Investment Fund, from which OpenAI is reportedly soliciting investment, has been linked to human rights violations by orgs like <a href="https://www.hrw.org/report/2024/11/20/man-who-bought-world/rights-abuses-linked-saudi-arabias-public-investment-fund">Human Rights Watch</a>. Not only does PIF fund deadly megaprojects like NEOM, which has <a href="https://www.archpaper.com/2024/10/documentary-reveals-21000-workers-killed-saudi-vision-2030-neom/">claimed the lives of 21,000 workers</a>, but it was used to help facilitate the Saudis’ murder of Jamal Khashoggi. Is this the gentle singularity? The same one on the verge of being financed by regimes that bonesaw dissenting journalists and overwork migrants to death in the desert?</p><p>-<strong>OpenAI’s Chief Product Officer, Kevin Weil, was sworn into the US Army’s newly launched “Detachment 201”</strong> which <a href="https://www.army.mil/article/286317/army_launches_detachment_201_executive_innovation_corps_to_drive_tech_transformation">the military describes</a> as “an effort to recruit senior tech executives to serve part-time in the Army Reserve as senior advisors… By bringing private-sector know-how into uniform, Det. 201 is supercharging efforts like the Army Transformation Initiative, which aims to make the force leaner, smarter, and more lethal.” Three days after that announcement, <strong>OpenAI was awarded <a href="https://www.cnbc.com/2025/06/16/openai-wins-200-million-us-defense-contract.html">a one-year $200 million contract</a> from Donald Trump’s Department of Defense</strong>, to integrate AI products into the US military. I would not have guessed that a gentle singularity would have involved helping render militaries “more lethal,” but then again I’m no gentle singularity expert. </p><p>-<strong>OpenAI is joining <a href="https://www.axios.com/2025/06/16/ceos-huddle-ai-energy-tech">a closed door meeting</a> with the DOE, fossil fuel executives, and the Emiratis to accelerate energy production for hyperscaling AI. </strong>The meeting, dubbed ENACT, has raised fears that ramped up energy production will primarily be fossil fuel-generated, increasing emissions and contributing to climate change. 
The gentle singularity, in other words, seems slated to (<a href="https://www.bloodinthemachine.com/p/ai-will-never-solve-this">continue to</a>) be powered by fossil fuels, exacerbating the climate crisis every step of the way.</p><p>-Speaking of closed door meetings, <strong>OpenAI and the other major AI players <a href="https://www.politico.com/news/2025/06/06/the-ai-lobby-plants-its-flag-in-washington-00389549">have successfully lobbied</a></strong> to keep federal and state regulators away from the technology. As <a href="https://www.bloodinthemachine.com/p/de-democratizing-ai">previously reported in these pages</a>, OpenAI is one of the chief parties behind the 10-year ban on state-level AI lawmaking that has been included in the “One Big Beautiful Bill” being debated in the Senate. The provision’s future is not certain, as a number of Republican <a href="https://thehill.com/policy/technology/5356152-gop-squares-off-over-ai-ban/">senators appear to have come out against it</a>, but the fact that it exists at all, has made it this far (the House of Representatives voted to include it in the budget reconciliation bill), and may yet become law is plenty disturbing. </p><p>There’s a part in Altman’s manifesto in which he expounds at length on the importance of “harnessing the collective will and wisdom of people” and allowing society to decide how to use AI. Yet OpenAI is pushing Republicans and the Trump administration to pass a law that bans states from doing exactly that.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> The gentle singularity is not intended to be subject to democratic input, it seems.</p><p>Finally, at the very moment that Altman was writing his post, it’s likely that the most notable public-facing use of AI came in the form of the torrents of slop used to make protests against state oppression look like war zones, and protestors like thuggish criminals.</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/7a5f40d6-35dd-43ba-9781-4948a3e75e09_1397x834.png" alt=""><figcaption class="image-caption">Image via <a href="https://bsky.app/profile/did:plc:qnd2lqn52ernybdfonq4mc43/post/3lrdqi2cs7k2q?ref_src=embed">Charlie Warzel</a>: i've been trying to see what the right wing platforms (even outside of X) are showing people regarding the protests in LA and i am not sure it can be overstated just how much the AI slop is being used to reinforce the idea that downtown is a post-apocalyptic war zone</figcaption></figure><p>Is this the gentle singularity? </p><p>Or perhaps this is:</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/ee6dfed3-5321-4e2d-9522-10ee423aed37_537x648.png" alt=""></figure><p>Or this?</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/d334e35a-9a58-423f-85bb-842bfb121edd_600x857.jpeg" alt=""></figure><p>Those last two images have been shared directly by Iran or its major national media, and <a href="https://www.404media.co/the-ai-slop-fight-between-iran-and-israel/">they are AI-generated fabrications of course</a>. They depict fantasias of retaliatory violence after Israel launched its military assault there. Whether it’s American rightwing X accounts <a href="https://www.politico.com/newsletters/politico-nightly/2025/06/11/the-la-protests-are-drowning-in-ai-slop-00401401">looking to help stir the pot</a> and to justify a militarized response to protests in LA, or images of machine-generated carnage in the Middle East, AI is being used to ramp up tensions, glorify violence, and to substantiate hate and prejudice. The opposite, one might argue, of a <em>gentle</em> singularity. </p><p>I don’t point all this out merely to be snide, or even just to mock the premise of Altman’s blog post, though it very much does deserve to be mocked. Now more than ever, we need to think about “AI” not merely as consumer technology, but as an idea and a logic that is shaping political economy around the globe. And there’s nothing gentle about it. </p><p>The major players in AI (OpenAI, Meta, Google, Microsoft) are above all bent on concentrating power and capital—again, just take a scroll through the above list of OpenAI’s moves in just the last week or two—as rapidly as they can, and by brute force if necessary. They are doing so by partnering with governments embracing authoritarianism and crushing dissent, signing contracts with a military preparing for—or at the very least abetting—war, and teaming up with fossil fuel companies in the time of climate crisis. And AI-generated art is a pillar, as Gareth Watkins put it, of <a href="https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/">the modern aesthetics of fascism</a>. It can be used to bend depictions of reality to whatever whims one desires; warped as the product may be, it’s fundamentally truth-proof. </p><p>We cannot separate the AI products—text and image generators capable of producing cheap and voluminous content—or the companies building them, from these contexts. 
Or from the fact that they are at root automation products that are not yet close to being profitable, and thus demand new mass markets and exemption from regulation. Thus OpenAI heralding the wisdom of crowds to decide how to use AI out of one corner of its mouth while lobbying to shut down any lawmaking around AI, period, from the other. Thus OpenAI’s enthusiastic partnering up with an administration that uses its image-generating tools <a href="https://www.bloodinthemachine.com/p/openais-studio-ghibli-meme-factory">to mock and degrade the powerless</a>. </p><p>Altman insists that the AI revolution is already here and underway, that it may not feel all that different to you yet, but that we are past the “event horizon.” As such, I feel obligated to ask, one last time: Is this the gentle singularity? </p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/e356b2ac-ec20-4b22-bdad-28a21bee8b0c_1184x1266.webp" alt=""></figure><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>Vernor Vinge and the Technological Singularity</h2><p>Speaking of singularities, I’m a little surprised that we don’t hear more about the late mathematician and science fiction author Vernor Vinge’s <a href="https://ntrs.nasa.gov/citations/19940022856">1993 paper</a>, “The coming technological singularity: How to survive in the post-human era.” The paper predicts “superhuman” intelligence within 35 years, or by 2028, which is pretty firmly in line with what Altman and co. are talking about these days. You’d think it would at the very least make for an X post with some decent viral potential among the blue check set. I read a couple of Vinge’s novels back in the day, including Rainbows End, a depiction of the coming singularity set on a college campus in 2025.</p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>Thanks as always for reading all, and more very soon. Hammers up.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>My favorite line in that post follows directly after: “A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future,” Altman writes. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>That part: “…focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country. Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.”</p></div></div>Complete Accuracy Collapse - Cybernetic Forests68484d49a7d0790001e348692025-06-15T11:03:35.000Z<img src="https://mail.cyberneticforests.com/content/images/2025/06/Screenshot-2025-06-10-at-11.57.17---PM.png" alt="Complete Accuracy Collapse"><p>A new paper from Apple proves what I've discussed <a href="https://mail.cyberneticforests.com/a-critique-of-pure-llm-reason/" rel="noreferrer">before</a> – that so-called "reasoning" models aren't doing much reasoning at all – but now that is clear even by the bizarre industry definitions of the term. </p><p>The researchers created a series of puzzles with similar problem-solving strategies, but at differing layers of complexity. At low complexity, these models do less well than standard LLMs. At medium complexity, they do better. But as complexity rises, they become far less efficient and less able to solve the problems, before dissolving into garbage – "complete accuracy collapse." </p><h3 id="a-history-of-reasoning">A History of Reasoning</h3><p>Reasoning models are part of a shift in strategy for AI companies. Data scaling has become more difficult: obtaining "fresh" data is harder, the compute required to process it keeps growing, and even in the best-case scenario, incrementally scaling both can no longer match the rapid improvements of past scaling. </p><p>Instead, model developers have turned to different architectures or strategies for how the models construct text. This is part of a long pattern: ChatGPT was not much more capable than GPT3, but by making it a chatbot, it <em>felt</em> exponentially different. Reasoning models were a similar kind of development, though they delivered better results in some tasks (more on that later). </p><p>These models worked by generating an outline from your prompt. Then, the model generated text from that outline. As it moved through the outline, it revisited each step of its outline to strengthen attention on that part, allowing more consistency over long text generation periods. It also, some argued, allowed for more complex problem solving. </p><h3 id="why-reasoning-collapsed">Why Reasoning Collapsed</h3><p>On the surface, this makes sense: generating text, revisiting the outcome of that text, and then generating new text, section by section, keeps the model's "attention" where it needs to be. <em>Attention</em> needs clarification here: it's basically just a matter of how many words back the model can reference internally. As it moves beyond that context window, it loses the originating context and starts to drift into exponentially "hallucinatory" patterns of text. 
</p><p>Reasoning models are still exclusively capable of generating text, but they are generating <em>more</em> text before the text they show you. This unseen text adds internal reminders – flags that steer the model's inevitable drift of "attention" back to the start of the process. </p><p>These models also introduce a <em>self-verification</em> process intended to reduce wrong answers or "hallucinations." But OpenAI's own <a href="https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf?ref=mail.cyberneticforests.com" rel="noreferrer">reports</a> show that its o3 model hallucinated 33% of the time and 48% for o4-mini. That's an increase from <em>16% of the time</em> in its simpler models.</p><p>In addition, they found the models delivered this incorrect information with greater confidence because the models could explain "how" they arrived at the wrong answer. But because all LLM-generated text is designed to be <em>plausible</em>, rather than <em>accurate</em>, the models themselves cannot tell the difference. AI reasoning models fool even <em>themselves</em>. </p><p>In simple terms: </p><ul><li>Words generated earlier tend to be more reflective of the prompt than words generated later. </li><li>Reasoning models generate longer text at the start, like a to-do list summarizing the problem-solving process it needs to take on. </li><li>That to-do list is a set of new internal prompts the model returns to as it generates more words from each part of it. </li><li>In theory, this to-do list, paired with giving models more time to process each step, should create something like manually re-writing the prompt over a series of steps, making them better at "reasoning."</li></ul><p><strong>But they aren't.</strong> As the authors of the Apple paper observed, "when problems reach high complexity with longer compositional depth, both [standard LLMs and reasoning] model types experience <em>complete performance collapse</em>." An earlier study found that top-of-the-line “reasoning models” introduced errors on general questions <a href="https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html?ref=mail.cyberneticforests.com#:~:text=Others%2C%20such%20as%20the%20San,OpenAI's%20o3%20climbed%20to%206.8."><u>51% to 79%</u></a> of the time.</p><h3 id="running-errands-in-circles">Running Errands in Circles</h3><p>There is a world of difference between revisiting steps on a to-do list for the sake of reasoning your way through a problem, and revisiting those steps to extend the text based on the words it contains. The latter is just extending language – while reasoning means developing a working memory for solving a problem. </p><p>Here's the proof. Reasoning models operate just as well on simple problems at the outset. The issue is that the to-do list becomes mandatory. As a scaffold, it introduces new opportunities to revisit and revise the correct answer over time. It's like navigating a city: you can walk a straight line to quickly arrive somewhere. But if you have a bunch of errands to run, you create opportunities to get lost. </p><p>"Reasoning" architecture was meant to serve as a simulation of doubt, but because it is regimented, this doubt is forced. That makes them bad at easy tasks – they <em>must</em> explore wrong answers – and more complex ones, because they have more errands to run. They create longer text than they need, threads that end up never being resolved.</p>
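<p>To make that concrete, here is a minimal sketch of the loop described above. It is purely illustrative: nothing in it comes from the Apple paper or any real model API; the <code>generate()</code> function is a stand-in stub, and the context-window size is an assumed number used to mimic limited attention.</p><pre><code># A deliberately tiny sketch of the outline-then-expand "reasoning" loop.
# Nothing here comes from the Apple paper or a real model API:
# generate() is a stub standing in for an LLM call, and CONTEXT_WINDOW
# is an assumed character budget that mimics limited attention.

CONTEXT_WINDOW = 2000  # assumed: how much recent text the model can "see"

def generate(prompt: str) -> str:
    """Stub for a model call; a real system would return model-written text."""
    # Echo the tail of the prompt so the sketch runs end to end.
    return prompt[-120:]

def outline_then_expand(user_prompt: str) -> str:
    # 1. Write the outline (the internal "to-do list") before any visible answer.
    outline = generate("Outline the steps to solve:\n" + user_prompt)

    draft = ""
    for step in outline.splitlines():
        # 2. Re-inject the prompt, the outline, and the step as internal reminders,
        #    truncated to the context window: this is where drift creeps in.
        context = (user_prompt + "\n" + outline + "\n" + draft)[-CONTEXT_WINDOW:]
        section = generate(context + "\n\nExpand this step:\n" + step)

        # 3. Self-verification: the model critiques its own section. The critique
        #    is also just generated text, so it is plausible rather than reliably
        #    accurate, which is the failure mode discussed above.
        draft += "\n" + generate("Check for errors and revise:\n" + section)

    return draft

print(outline_then_expand("How many moves to solve a 5-disk Tower of Hanoi?"))
</code></pre><p>The point of the sketch is structural: each pass back over the to-do list is just more text conditioned on truncated text, not a working memory, which is why every added step is another chance to wander off.</p>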
<p>In the end, the paper found that: </p><blockquote>"state-of-the-art LRMs (e.g., o3-mini, DeepSeek-R1, Claude-3.7-Sonnet-Thinking) still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero beyond certain complexities across different environments." </blockquote><h3 id="anecdotal-evidence">Anecdotal Evidence </h3><p>I keep coming back to a core question about LLMs, which is defining how people use them. I don't think the people who claim to be using them are dupes, or lying, or delusional. But the failure rates assessed on these models are pretty astounding. I think that the way these things are "useful" is different from how they are being tested and what people are doing with them. </p><p>Sometimes, bad material is less important than the speed with which it is produced: cynical examples include <a href="https://mail.cyberneticforests.com/slop-infrastructures-1-2/" rel="noreferrer">AI slop</a>'s role in <a href="https://www.theguardian.com/commentisfree/2025/apr/21/ai-slop-artificial-intelligence-social-media?ref=mail.cyberneticforests.com" rel="noreferrer">spamming your Instagram</a> with weird ideas & products, or public health reports the government can produce when it <a href="https://www.washingtonpost.com/health/2025/05/30/maha-report-ai-white-house/?ref=mail.cyberneticforests.com" rel="noreferrer">doesn't really give a shit</a>. </p><p>But I also want to resist the idea that all LLM use is cynical. I think there are disconnects between what we mean by "using" LLMs, particularly in cases where the models are being asked to generate ideas or brainstorm. </p><p>In my <a href="https://mail.cyberneticforests.com/what-does-it-mean-to-use-generative-ai/" rel="noreferrer">previous post</a> about defining "use," I framed it this way: </p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">We know what Large Language Models <i><em class="italic" style="white-space: pre-wrap;">do</em></i> — they create statistically likely arrangements of text — so we might ask, “to whom is that useful?”</div></div><p>Perhaps <em>missing the mark</em> is one thing people are doing with this type of AI. LLMs can generate scaffolding of a text that isn't complete, can generate images that show us what we don't want but help steer us toward something we <em>do</em> want.</p><p>A core affordance of generative AI is that it separates wounded pride from mistakes. It shifts accountability in the iteration of ideas: "<em>the AI made this</em>," and if your boss or coworkers say it sucks, no problem. The AI pitched a bad idea and that can help steer <em>you</em> to a good idea. </p><p>But I want to believe that many people who use them do so because they're insecure about their own capacity for creativity and professional reputations. </p><p>When creativity is called for in professional settings, it nearly always exists in unsupportive environments. If you're the person hired to bring creativity to the office, they never really let you do it, and you never really know what the acceptable boundaries are. Rejection stings, and if you're in the industry, this rejection is relentless. Generate 25 pitch ideas, and 24 and a half get struck out. AI can give you 250 and you don't have to care about any of them. 
</p><p>Lots of people in the intense pressure cooker of marketing, design or media production clearly benefit from showing people text they didn't write framed as "might be kind of ok but yeah no haha it's AI, haha, sure, no, yeah, I can do much better than that <em>obviously</em> but what <em>exactly</em> do <em>you</em> think we should change I mean <em>I know</em> but just open to suggestions while I have you, haha!" </p><p>Students, too, are encouraged to be creative but often imagine creativity in a trade-off with rigor. When they're graded for the content instead of the creativity they don't understand the connection, as if it was an either/or. Generative AI creates illusions of rigor or creativity in assignments. It gets a good grade without pushing the limits of what risks a student learns to take. It's true: students are graded on the product, rarely the creative risk they took to get there. </p><p>The ultimate sales case for AI is <em>removing accountability</em>, a tool for granting social permission to make proposals without ownership, a safe space distanced from the vulnerability of risk. Maybe this should make me nervous – that we get out of the practice of taking risks, become more conservative, and collectively shift our ideas toward an ever-narrowing acceptable mean. </p><blockquote class="kg-blockquote-alt">People get bored of new technologies and narrow means.</blockquote><p>But people get bored of new technologies and narrow means. That gives me some hope. Generative AI has a market because people are anxious about taking creative risks. That makes these risks – and the people willing to take them – more valuable than ever. The rapid convergence of ideas generated by creative industries will burn itself out. No creative industry can survive on the rapid delivery of its competitors accumulated averages. </p><hr><h2 id="upcoming-in-person-events-june-july-rome-melbourne">Upcoming In-Person Events, <br>June & July :: Rome & Melbourne </h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/06/networks.png" class="kg-image" alt="Complete Accuracy Collapse" loading="lazy" width="2000" height="1722" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/networks.png 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/06/networks.png 1000w, https://mail.cyberneticforests.com/content/images/size/w1600/2025/06/networks.png 1600w, https://mail.cyberneticforests.com/content/images/size/w2400/2025/06/networks.png 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A composite image combining diagrams from the 1960 Mark I Perceptron operator's manual and a 1969 Arpanet map of the Internet.</span></figcaption></figure><h3 id="rome-26-june-is-ai-art-net-art">Rome, 26 June: Is AI Art Net Art?</h3><p>Gen AI’s images are a distillation of the internet, inheriting the text categories assigned to images alongside the images themselves. How do artists work with, or resist, these competing systems of powers, logics, and communication?</p><p>In this exploratory discussion, <a href="https://anatomyof.ai/?ref=mail.cyberneticforests.com" rel="noreferrer">Vladan Joler</a>, <a href="https://www.valentinatanni.com/?ref=mail.cyberneticforests.com" rel="noreferrer">Valentina Tanni</a> and Eryk Salvaggio will examine how AI has changed the way we imagine and frame these systems today. 
We look to Internet Art, a movement anchored in critique and resistance, to find paths relevant to critical artistic engagement with AI and uncover what has been elided from the movement from net.art to AI art. Free & open to the public!</p><div class="kg-card kg-button-card kg-align-center"><a href="https://www.biblhertz.it/events/42198/2206?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Details</a></div><hr><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg" class="kg-image" alt="Complete Accuracy Collapse" loading="lazy" width="1000" height="562" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg 600w, https://mail.cyberneticforests.com/content/images/2025/06/Human-Movie-Beach-Still-_-Small-1.jpg 1000w" sizes="(min-width: 720px) 720px"></figure><h3 id="melbourne-july-3-human-movie-performance">Melbourne, July 3: <em>Human Movie</em> (Performance!)</h3><h3 id="with-jodi-nl-be-debris-facility-pty-ltd-aus"><em>With JODI (NL, BE) & Debris Facility Pty Ltd (AUS)</em></h3><p>I'll perform <em>Human Movie</em> as part of a series of performances including the net.art legends JODI and the Australian "para-corporate and parasitic entity," Debris Facility Pty Ltd. Open to the public, details below!</p><div class="kg-card kg-button-card kg-align-center"><a href="https://events.humanitix.com/network-anarchy?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Get Tickets</a></div><hr><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/06/Network-anarchy-and-unstable-diffusions.png" class="kg-image" alt="Complete Accuracy Collapse" loading="lazy" width="1280" height="720" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/06/Network-anarchy-and-unstable-diffusions.png 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/06/Network-anarchy-and-unstable-diffusions.png 1000w, https://mail.cyberneticforests.com/content/images/2025/06/Network-anarchy-and-unstable-diffusions.png 1280w" sizes="(min-width: 720px) 720px"></figure><h3 id="melbourne-7-8-july-noisy-joints-embodying-the-ai-glitch-with-camila-galaz">Melbourne, 7-8 July: Noisy Joints: Embodying the AI Glitch<br><em>With Camila Galaz</em></h3><p>The entire conference is going to be great. Here's our part: </p><p>Artists and researchers Eryk Salvaggio and <a href="https://www.camilagalaz.com/?ref=mail.cyberneticforests.com" rel="noreferrer">Camila Galaz</a> present a participatory workshop on interrupting and reframing the outputs of generative AI systems. Drawing from a critical AI puppetry workshop originally developed at the Mercury Store in Brooklyn, New York, <em>Noisy Joints</em> invites participants to think through the body—its categorisation, misrecognition, and noise—within AI image-generation systems. How do our physical movements interact with machine perception? How can choreographies of shadow, gesture, and failure unsettle the logic of automated categorisation?</p><p>Across the session, participants will explore these questions through short talks, collaborative video-making, glitch-puppetry exercises, and experimental use of tools like Runway’s GEN3 model. Using shadows, projections, and improvised movement, the workshop will trace a playful and critical path through the interfaces and assumptions that shape AI perception. 
No technical experience is required.</p><p><em>Convened by Joel Stern (RMIT), Thao Phan (ANU), and Christopher O’Neill (Deakin).</em></p><div class="kg-card kg-button-card kg-align-center"><a href="https://www.admscentre.org.au/event/network-anarchy-and-unstable-diffusions/?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">More Details</a></div>Enter the gift interface - escape the algorithmhttps://escapethealgorithm.substack.com/p/enter-the-gift-interface2025-06-11T14:27:12.000Z<div class="pullquote"><p></p><p><a href="https://escapethealgorithm.substack.com/">Escape the Algorithm</a> is a newsletter about taking control of our attention and finding a more human side of the internet. After you <a href="https://escapethealgorithm.substack.com/">subscribe</a>, you can make me feel supported by performing a <em><a href="https://escapethealgorithm.substack.com/p/become-a-supporter">tiny act of codependence</a></em>: mail me a gift or a postcard, take me out for coffee in person, contribute a story to the newsletter, or become a paid subscriber. <a href="https://escapethealgorithm.substack.com/p/become-a-supporter">Learn more about becoming a ᵐⁱᶜʳᵒsupporter</a>.</p></div><p>In late March, <span class="mention-wrap" data-attrs="{"name":"Spencer Chang","id":3363406,"type":"user","url":null,"photo_url":"https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/f03fdd99-399f-41da-ae8b-5664287133d7_2973x3236.jpeg","uuid":"f8b46976-907d-48b8-b4b6-38209b1cff7c"}" data-component-name="MentionToDOM"></span> and I wrapped up our Gift Interfaces class at the School for Poetic Computation. Over the course of 10 weeks, we immersed ourselves in gift-giving cultures and practices to imagine design behind scale, questioned the norms of what shape a gift can take, and most importantly, gave and received: to and from each other, our loved ones, and our communities.</p><p>For our final class, we held a potluck and gift wrapping party where we “wrapped” (documented) all the gifts given throughout the class, uploaded them to a gift interface, and took turns opening each others’ gifts. The resulting website represents the archive of our work together:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{"mediaUploadId":"f68e609a-57de-4385-8c38-6790cd742458","duration":null}"></div><p></p><p>This is the letter we wrote to our students:</p><blockquote><p>Dear Gift Givers,</p><p>The very first day that we met, we shared stories of meaningful gifts we've received. Elan told of the many summers that his mom spent weeks pre-cooking and individually wrapping a month's worth of kosher meals so that Elan could attend sleepaway music camp. Spencer spoke of a group living experience in which gifting was permanently in the air --- communal meals, skill shares, handwritten notes.</p><p>We couldn't have known it at the time but it feels obvious now that in those very first 10 minutes we were already casting a powerful spell, manifesting the kind of learning that can only come from gathering, and the kind of gathering that can only come from learning, and that in so doing, this gift of gatherlearning would be reflected back to us tenfold.</p><p>You attended to each other as strangers, in your strangenesses, and then unfolded the ones you thought you knew best, only to love them in their unknowability. You made instruments that <a href="https://drive.google.com/file/d/1Ymvf7oD7Jm00iDjT0xTgo102q1N_SYRJ/view?usp=sharing">measure the color of the</a> sky. 
You <a href="https://drive.google.com/file/d/1KHv20nib1UPx3hVDc5MJEq_Gg85ocoQa/view?usp=sharing">hid poems in flower petals</a>, wrote letters to trees. You <a href="https://www.are.na/ahmed-alawadhi/the-one-3xao2rxqlrs">saw as much of god</a> in mosquitos as in artichokes. You <a href="https://www.are.na/block/35221361">implored us</a> to imagine a large wide-mouth jar, wrapped extremely tightly with duct tape around the top so it won't leak, weighing about a pound, sour mango deliciousness.</p><p>We channeled into you <a href="https://www.are.na/block/34107567">the words of </a><span class="mention-wrap" data-attrs="{"name":"Sal Randolph","id":404871,"type":"user","url":null,"photo_url":"https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58dc1520-ec44-451c-9d1d-cf7e89f64fd1_3600x3600.jpeg","uuid":"6c955f70-7b65-4092-989c-50af21f21c40"}" data-component-name="MentionToDOM"></span>: "The line between gift to the other and pleasure for the self is always blurred and shifting. The gift goes back and forth a thousand times a day. It's a kind of game." Little did we know how short and blurry the day would be, that the game had already well begun. We channeled into you <a href="https://www.are.na/block/34500737">Robin Wall Kimmerer's story</a> of a hunter who, when asked by an anthropologist why he shared surplus meat instead of storing it for lean times, simply replied "I store my meat in the belly of my brother." All the while, you were already filling our bellies to the brim.</p><p>After we prompted you to give gifts, we requested gift interfaces: rituals and tools that create the context for giving. And in the resulting exchange, in between the literal responses to our query, a preposterous glimmer. Tacky! Inevitable! Exquisite! You were the gift interface the whole time.</p><p>Thank you for all your propositions, questions, and stories throughout our time together. We cultivated generosity materially in our gifts but also spiritually in our mutual attention. Exploring these questions so core to our identity as a people together, we hope you have found a few ideas about the mystery of giftmaking. We hope the seeds you have planted (and will continue to plant) grow beyond us. We hope they gain lives of their own, sow seeds of their own. 
And we hope those seeds eventually find their way back to you, like a long lost classmate, again and again and again.

Love,
Spencer and Elan ~ May 19, 2025

Visit the website to peruse the wonderful gifts, view our syllabus, sift through inspiration, listen to the class playlist, and add a thank you note.

Enter the gift interface: https://gifting-interfaces.pages.dev/

🏃 Escape attempts
Acts of algorithmic resistance

rithm.love (https://www.rithm.love/) is a dating app where your profile is just a screenshot of your instagram explore page

Abandoned blogs (https://www.are.na/lucy-pham/abandoned-blogs) is a collection of abandoned blogs

The Rent Reducer 9000 (https://www.instagram.com/reel/DJkLiiJJsv7/) is a little free library that passively monitors nearby open houses and triggers a gunshot noise to bring down rent prices

Internet Roadtrip (https://neal.fun/internet-roadtrip/) is a crowdsourced Google Street View roadtrip

hear to there (https://heartothere.org/) lets you travel to places through voice memos and field recordings

seven39 (https://www.seven39.com/) is a social network that only works for three hours a day

Library Spy (https://walzr.com/library-spy) is a scraped feed of books that are checked out of the New York Public Library

🌸 Screenshot garden
Souvenirs from my internet travels, presented without context
data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/506793a6-6fcd-449a-b681-4c94d1e838ac_1179x2164.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":2164,"width":1179,"resizeWidth":488,"bytes":392402,"alt":"A Facebook Marketplace listing for a “Ms Rachel 4 foot birthday cut out” priced at $50, marked down from $80. The image shows a large, distorted close-up photo of a cardboard cutout of Ms. Rachel’s smiling face. The cutout has been used once for a party, as noted in the description.","title":null,"type":"image/jpeg","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://escapethealgorithm.substack.com/i/165573901?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506793a6-6fcd-449a-b681-4c94d1e838ac_1179x2164.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="A Facebook Marketplace listing for a “Ms Rachel 4 foot birthday cut out” priced at $50, marked down from $80. The image shows a large, distorted close-up photo of a cardboard cutout of Ms. Rachel’s smiling face. The cutout has been used once for a party, as noted in the description." title="A Facebook Marketplace listing for a “Ms Rachel 4 foot birthday cut out” priced at $50, marked down from $80. The image shows a large, distorted close-up photo of a cardboard cutout of Ms. Rachel’s smiling face. The cutout has been used once for a party, as noted in the description." srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506793a6-6fcd-449a-b681-4c94d1e838ac_1179x2164.jpeg 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506793a6-6fcd-449a-b681-4c94d1e838ac_1179x2164.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506793a6-6fcd-449a-b681-4c94d1e838ac_1179x2164.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F506793a6-6fcd-449a-b681-4c94d1e838ac_1179x2164.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><h3><strong>📬 Mailbox</strong></h3><h5><em><strong>letters and ephemera from 
ᵐⁱᶜʳᵒsupporters</strong></em></h5><p>This month, I received this delightful postcard from London, sent by Escape the Algorithm’s newest ᵐⁱᶜʳᵒsupporter, Janis:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png" width="1456" height="1442" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":1442,"width":1456,"resizeWidth":null,"bytes":7404590,"alt":"A folded notecard featuring an Impressionist painting by Claude Monet on the front, depicting the Houses of Parliament in London under a hazy, colorful sky with purples, blues, and pinks. Below the card, the interior is displayed with a handwritten letter on light blue paper. The letter is addressed to \"Elan\" and references Monet's painting process and how it connects to the themes of Elan's newsletter—particularly around observation, slowness, and escaping the algorithm. The note is signed \"Janis :) May 2025.\"","title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://escapethealgorithm.substack.com/i/165573901?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="A folded notecard featuring an Impressionist painting by Claude Monet on the front, depicting the Houses of Parliament in London under a hazy, colorful sky with purples, blues, and pinks. Below the card, the interior is displayed with a handwritten letter on light blue paper. The letter is addressed to "Elan" and references Monet's painting process and how it connects to the themes of Elan's newsletter—particularly around observation, slowness, and escaping the algorithm. 
The note is signed "Janis :) May 2025."" title="A folded notecard featuring an Impressionist painting by Claude Monet on the front, depicting the Houses of Parliament in London under a hazy, colorful sky with purples, blues, and pinks. Below the card, the interior is displayed with a handwritten letter on light blue paper. The letter is addressed to "Elan" and references Monet's painting process and how it connects to the themes of Elan's newsletter—particularly around observation, slowness, and escaping the algorithm. The note is signed "Janis :) May 2025."" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F016fadb6-0e64-4848-962e-be4b560c7742_4088x4050.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><blockquote><p>I hope this card has safely crossed the Atlantic!!</p><p>When I saw the opportunity to be a minor supporter of your newsletter, I immediately thought of this (post) card as I think it symbolically & timelessly met your newsletter is about. The painting by Monet was part of an exhibition at the Courtauld on Monet's works in London. He was there for a short period, staying and perched at the fancy Savoy Hotel across the corner, looking southwards across the river to capture the views in painting. 
The challenge, and charm, was that Monet liked the short one hour window before what we now know as "golden hour" — mist, fog & pollution smog, which gave the sky & river different hues.

And so Monet sat at the hotel window every day for months to capture various iterations of the same view.

In many ways, to me, this feels like escaping the algorithm, like treasuring the brief moments with friends, like finding your own enjoyment of (weirdly) specific niches, and documenting small acts of daily occurrence, object & observation for mostly personal or occasionally public pleasure. At least that's what your newsletter & essays feel like to me.

Thank you for your writing & sharing your curiosity with us!

Janis
May 2025
The weaponization of Waymo - Blood in the Machine
https://www.bloodinthemachine.com/p/the-weaponization-of-waymo
2025-06-10T05:06:21.000Z

The incinerated husks of Waymo cars were lined up neatly in a row downtown, on Los Angeles Street; one, two, three, four, five of them. And I do mean incinerated. I've never seen cars torched so thoroughly. There were sedan-sized piles of ash where Jaguar I-Paces outfitted with lidar sensors used to be, recognizable as cars at all only by wheel frames and blackened chassis. They smelled awful.

[Photo]
data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/3791ac0d-54e6-4c65-bb2d-1753d881e35d_4032x3024.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":1092,"width":1456,"resizeWidth":null,"bytes":5217400,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":false,"topImage":true,"internalRedirect":"https://www.bloodinthemachine.com/i/165365788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3791ac0d-54e6-4c65-bb2d-1753d881e35d_4032x3024.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3791ac0d-54e6-4c65-bb2d-1753d881e35d_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3791ac0d-54e6-4c65-bb2d-1753d881e35d_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3791ac0d-54e6-4c65-bb2d-1753d881e35d_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3791ac0d-54e6-4c65-bb2d-1753d881e35d_4032x3024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>The day before, Sunday, June 9th, Trump brought in the national guard to shoot rubber bullets and tear gas at people protesting ICE raids, and the blazing, graffiti’d Waymos quickly became an icon of the conflict. Images of masked demonstrators standing atop the roof of the smoldering vehicles went viral, circulated by news outlets and shared unapprovingly by Elon Musk. You have probably seen many of them at this point.</p><p><em>A quick message: BLOOD IN THE MACHINE is 100% reader-supported and made possible by my incomparable paying subscribers. I’m able to keep the vast majority of my work free to read and open to all thanks to that support. If you can, for the cost of a coffee a month, or, say, an LA taxi ride a year, consider helping me keep this thing running. Thanks everyone. 
Onwards.

Subscribe now: https://www.bloodinthemachine.com/subscribe

[Image]
width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>That one in particular, is blunt and memorable—man conquering machine and all that.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png" width="486" height="784" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":784,"width":486,"resizeWidth":null,"bytes":593379,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":false,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/165365788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c73ebb4-a2e1-4a03-9800-85fe27a2650f_486x784.png 
[Embedded video: https://www.youtube-nocookie.com/embed/DzEO33Be--0]

Google suspended Waymo service to downtown LA (https://www.nbcnews.com/news/us-news/live-blog/trump-los-angeles-immigration-protests-rcna211695#rcrd81170), and also in San Francisco (https://www.cbsnews.com/sanfrancisco/news/waymo-san-francisco-service-curtailed-ice-protests/), where solidarity protests unfolded. "Why the self-driving cars were targeted remains unknown" is a refrain I heard multiple times on TV and radio news. The reason does not seem so secret to a lot of people.

"Oh they called them up on purpose, lit 'em on fire like that," a cameraman shooting on the scene the next day told me. The charred husks in a neat line do seem to suggest that was the case. Other witnesses and journalists who were there shared the same story: people summoned the cars to light them on fire when they arrived.
Protestors were reportedly calling them "spy cars" as they were vandalized and set ablaze, and some noted how the cars can share data with the LAPD.

[Photo]
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>As a result, there’s been some speculation that the cars <a href="https://www.barrons.com/articles/burning-waymo-self-driving-cars-los-angeles-e6a7731f">were burned to destroy evidence</a>. There’s likely more to it than that—and that the cars weren’t torched to destroy evidence as much to punish collaborators. </p><p>It’s true, after all, that Waymos are roving surveillance machines. <a href="https://www.404media.co/lapd-publishes-crime-footage-it-got-from-a-waymo-driverless-car/">404 Media has reported</a> that the LAPD, as well as other police departments across the country, have obtained surveillance footage from Waymo vehicles and used it as evidence. Google, for its part, confirmed that it hands over this data upon request, usually, it says, through court order, warrant, or subpoena. </p><p>No one I spoke to would cop to having anything to do with actually burning the cars, much less discuss the reason the headline-grabbing tactic was deployed. But it might be noted that ICE raids are carried out using data provided by Silicon Valley companies—most notably Peter Thiel and Alex Karp’s Palantir, which has <a href="https://www.wired.com/story/ice-palantir-immigrationos/">a $30 million contract with ICE</a> to manage a “real-time” surveillance system on immigrants. But whether directly or <a href="https://www.businessinsider.com/google-amazon-microsoft-ice-cbp-third-party-contracts-cloud-2021-10">through third party contractors</a>, much of big tech, including Google, has made deals with ICE, too.</p><p>Who knows whether that played a role in ICE protestors’ coordinating a pyrotechnic display on Sunday, whether it was a spontaneous idea to make a memorable visual provocation, or just part of the pure chaos unfolding that day. But as I’ve argued in this newsletter before, in <a href="https://www.bloodinthemachine.com/p/torching-the-google-car-why-the-growing">light of previous epidemics of self-driving car trashings and torchings</a>, such actions are liable to spring from the growing reservoir of public anger towards a Silicon Valley that has grown unaccountable and extractive—and has now largely aligned itself with a punitive state. </p><div><hr></div><div class="digest-post-embed" data-attrs="{"nodeId":"62b27deb-12d5-43d4-baa4-c398fb2571f0","caption":"We don’t yet know exactly why a group of people very publicly graffitied, smashed, and torched a Waymo car in San Francisco. 
Now, given Silicon Valley's more obvious alliance (https://www.bloodinthemachine.com/p/the-tech-oligarchy-has-been-here) with the Trump administration, that ambient anger has only been sharpened. It has sparked backlashes of a different kind (https://www.bloodinthemachine.com/p/deconstructing-the-new-american-oligarchy), like the protests and firebombing of Teslas and Tesla dealerships. Waymos occupy part of this same nexus: surveillance technology operated by a tech giant, undemocratically, opaquely, led by a CEO who visits Mar-a-Lago and donates to Trump (https://www.politico.com/live-updates/2025/01/09/congress/google-donation-donald-trump-inauguration-00197233). The New York Times spoke (https://www.nytimes.com/2025/06/09/technology/waymo-san-francisco-immigration-protests.html) to a protestor in San Francisco yesterday who underlined this point:

The robot taxis have become a way for some protesters to display resistance to the tech industry's close ties to the Trump administration, said Elise Joshi, an activist in San Francisco who attended rallies on Monday.

"Waymos don't have human drivers, they're devoid of humanity," she said. Destroyed robot taxis are "symbolic of the attempts, throughout the history of this country, by the tech industry to strip us of community."

Burning them down in broad daylight makes for quite a statement. Now there are just the husks left. It wasn't just Waymo cars that were targeted, though. Lime scooters were hurled down at police cars from an overpass and heaped onto the Waymo pyres, too.
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png" width="594" height="635" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":635,"width":594,"resizeWidth":null,"bytes":413324,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/165365788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7333a0-f0ec-4021-ac3d-321a3179c153_594x635.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 
I saw them wedged in bushes, broken into pieces on the side of the street, or toppled on the sidewalk next to a contingent of riot gear-clad police.

One thing that I was thinking about as I walked around downtown, the somewhat gloomy summer fog helping to hold the fumes and the apocalyptic mood from last night's violence in the air, multiple helicopters and an airborne drone circling, was the way that protestors had turned the self-driving cars against the state they were designed to appease.

Waymos are ultimately an automation technology product. And a key function of any kind of automation is that it serves as an accountability sink for the companies and managers that deploy it. Self-driving cars, AI, app-unlocked e-scooters—tech companies like this stuff because it promises not just to remove labor from the equation, but accountability. There is no driver to blame if something goes wrong with the self-driving car; it's a glitchy algorithm, or better yet, it was probably somebody else's fault, because Google's algorithm is safe and sophisticated, even if it won't tell you how that algorithm actually works. Trade secret, sorry. Decision-making and responsibility are masked behind obscured algorithmic systems, available to the company and the state, but not to us. Meanwhile, there is also no one visible who is responsible for maintaining and organizing the e-scooters; the app will teach you where to put the scooter or entice you to charge it for a buck.

These are technologies that are at best agnostic to the communities and environments they operate in, and more often are extractive. The Waymo contributes to congestion, surveils neighborhoods, replaces the Uber or cab driver, and sends any resultant future profits upstream to a tech company in Mountain View. The scooters jam up walkways, become e-waste, and scramble local traffic patterns.
But it's hard to blame robots for any of that, and get it to stick.

[Photo]
Yet the ICE protestors who torched the Waymo cars and hurled Lime scooters onto the flames are turning what the tech companies see as an asset into a vulnerability, and they're exploiting it. There is no one around to stop them from hailing a Waymo car and destroying it, and no one to intervene in the untimely demise of the Lime scooter. They have turned these indifferent and extractive technologies into instruments of protest. They are weaponizing the accountability sink.

I walked up the hill to Grand Park by city hall, where the SEIU was holding a rally in support of David Huerta, the labor leader who had been arrested by ICE over the weekend and charged with felony counts of obstructing an immigration officer—he allegedly sat down and blocked an entrance as ICE agents were aiming to round up undocumented workers. I passed the national guard and their vests full of nonlethal ammo, and rows of police SUVs with the lights flashing on.

The drone hung in the sky, the helicopters circled, and I wondered if that was no longer burnt lithium I was smelling but the residue of tear gas. The band played Stand by Me, and some people sang along. There were no Waymos in sight, as far as I could see from my vantage point at the top—the ban still held.

Subscribe now: https://www.bloodinthemachine.com/subscribe

That's all for today. Cut me some slack if there are typos etc; wrote this up in a hurry/state of exhaustion.

I'm headed to New York this week to participate in a panel discussion for the Tribeca premiere of an Audible series I'm in—What Could Go Wrong? (https://tribecafilm.com/films/world-premiere-of-audible-s-what-could-go-wrong-2025) by Contagion screenwriter Scott Z. Burns. I'll be talking about AI, labor, and the luddites, as per usual.
Come say hello if you’re around.</p><p>[I <em>PROMISE</em> the AI Killed My Job story is forthcoming, but, well, there was an uprising in my city over the weekend and that took precedence.]</p><p>Until next time—hammers up.</p><p>bcm</p>The Decision Not to Decide - Cybernetic Forests - 2025-06-08T11:01:36.000Z<img src="https://mail.cyberneticforests.com/content/images/2025/06/HfG_Reshape_2025-HighRes@eignerframes_DSC6464.jpg" alt="The Decision Not to Decide"><p><em>This is an expanded text based on previously delivered remarks to the </em><a href="https://reshapeforum.hfg-gmuend.de/?ref=mail.cyberneticforests.com" rel="noreferrer"><em>Re Shape AI Forum</em></a><em>, held at the Hochschule für Gestaltung in Schwäbisch Gmünd, Germany, in May 2025, and post-conference reflections. </em></p><p>What, to a designer, is AI?</p><p>Like any conversation about AI, the question requires some translation: as always, we need to ask what <em>artificial intelligence</em> means here. We also need to ask what we mean by <em>design</em>. There are many forms of design, from systems design to product design to print design, even the ways architects design buildings. The dictionary may be helpful: a link between these practices is <em>design</em> as a verb, "to conceive and plan out in the mind."</p><p>The role of design, therefore, almost always starts with the organization and presentation of information. Maps, charts, and graphic design might be the end product, or a step on the way to something beyond them. But a good building might tell visitors how to move through it. Good software helps users make sense of how to use it. In all cases, at this pure abstract level, design means to organize and structure concepts.</p><p>AI, then, has to be understood by designers as a tool for removing various degrees of decision-making in conceptualizing specific tasks and applications. When we hear that AI "saves time," what that means is that it gives designers the option <em>not to think</em> about parts of the process. It may automate a text description to fill in a page with plausible text, or generate mock-ups of storyboards, or snippets of code for a specific function.</p><p>In animation and design schools, I hear about students using Gen AI to create storyboards, work that would otherwise mean making decisions that require <em>thinking</em> <em>through</em> things such as which line to draw next – and how that line would connect to the character or the world they’re building.</p><p>Of course, this is an ideal. Students, in particular, never really do this, at least not at the beginning, and let’s be frank: neither do busy design professionals. Students focus on the abstract stuff and the high-level movements from frame to frame. Part of teaching is to develop a level of intentionality, to ask: "But why did this happen?" or "Tell me about the <em>decision</em> you made here."</p><p>When we think about everything as designed, we practice thinking about information hierarchies. That extends from writing fiction or essays to architecture to posters for a lost cat. Design is not only about outlines, but the orchestration of details.</p><p>Generative AI takes these conversations away: "Oh, I don’t know; the AI just put that there," and so on. It’s as if the decision wasn’t significant enough to them and could be made later. This is sometimes true! However, there is value in making those decisions in the planning phase. 
The earlier details are considered, the more the total design can make sense of and integrate them. When an intention is articulated <em>before</em> production, it assures that the decision gets made at all rather than forced into being.</p><p>I think students often don't realize that there is a decision to be made at all. This is what worries me the most about the use of AI in this thing I am calling design, which is, I argue, <em>the organization of structures of meaning in ways that convey that meaning externally</em>.</p><p>If the work of design here is around conveying meaning, then details allow a greater density of information to be expressed simultaneously, more efficiently, and even unconsciously to the audience. A designer who fails to <em>pack</em> meaning into tight spaces leaves the signal empty. In that case, arbitrary information creeps in, <em>diluting the strength of the signal of that intention</em>.</p><h2 id="decisions-are-a-form-of-power">Decisions are a Form of Power</h2><p><br>Designers who have been doing this for a while understand the exercise of power involved in these decisions. Decisions themselves are a form of power. Designers exercise power over the user's access to information through different mechanisms called affordances. These affordances exert control over access to possibility by limiting the imagination of what's possible in, for example, software and the social imagination.</p><p>Limiting the imagination might be a strange reversal of how people typically think of design. Consider graphic design. The designer needs to limit the imagination of what the reader encountering the graphic might assume its message to be. So, they prioritize the most essential information in ways that restrict the user's frames of reference. Put a big picture of a cat on there, and the user is less likely to imagine this is a poster about the solar system. Put the word "missing" in a large font, and you will create a direct understanding of the situation: the poster is not just telling people about your awesome cat, but requesting their help in finding it.</p><p>The user – the reader of the poster – sees those two things and immediately knows that a cat is lost, so they intuitively respond. They'll watch for the cat. The social context of the missing cat poster offers some degree of assistance to the designer. The designer has provided enough information and <em>reduced the reader's imagination</em> so that the user can intuit, based on a social understanding of cat poster logic, that this is the message.</p><p>Such principles apply to all kinds of design thinking in various careers. Policy 'design', for example, is constantly at war with the imagination. When voters fail to imagine the world of possible policy, they may resign themselves to cynicism. The imagination can also be tapped to activate interest in or fear of policy decisions.</p><p>Consider the phrase '<em>defund the police'</em>, and its role in conveying a particular kind of world. Voters and resistant politicians often filled in the consequences of that world by conjuring the imagination of anarchy. What if it just meant managing a city's homeless population without putting it on the list of things police have to do? Moving jobs unrelated to public security away from police and toward a civic community task force paints a different, more precise image of the policy imagination at work here.</p><p>In a more banal way, graphic software interfaces also constrain the imagination of what is possible. 
For example, code is flexible. If you can write code, you can write different functions for specific tasks. But most people don’t want to look at, or learn how to write, the lines of code needed to get specific, highly customizable tasks done. So designers create interfaces that simplify the activation of specific possibilities: buttons that automate specific code.</p>
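<p>Here is a minimal, hypothetical sketch in Python (using the standard tkinter toolkit; the export function and its default settings are invented for illustration) of how a single button can stand in for a stack of decisions the person clicking it never sees:</p><pre><code>import tkinter as tk

def export_poster():
    # One click hides a stack of decisions the interface has already made
    # on the user's behalf. These defaults are invented for illustration.
    size = (1080, 1350)      # pixels
    file_format = "png"
    margin_mm = 5
    print(f"Exporting a {size[0]}x{size[1]} {file_format} with a {margin_mm} mm margin")

root = tk.Tk()
# The affordance: a button that automates the code above, so nobody has to
# see, or question, the decisions baked into it.
tk.Button(root, text="Export poster", command=export_poster).pack()
root.mainloop()
</code></pre>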
<p>So, interfaces and design of all kinds exercise a certain kind of decision-making, and decision-making is a form of power. Enter generative AI, and let’s go back to our question: What, to a designer, is generative AI?</p><h2 id="automating-decisions-gives-power-away">Automating Decisions Gives Power Away</h2><p>To reiterate: AI is a tool for removing degrees of decision-making in conceptualizing and implementing specific tasks and applications. Removing decisions "saves time," giving designers the option <em>not to think about parts of the process</em>. In turn, AI fills in for these decisions with statistically plausible details drawn from similar vectorial representations in the AI model. It’s filled in with the average of other people’s decisions.</p><p>Many assert that averaging works: what it replaces was probably ignorable already. A missing cat photo can show a picture of a cat sitting on a porch, and nobody pays attention to the porch, so no harm in replacing it with an average porch, or removing it altogether. But this <em>does</em> convey information and context in subtle ways. The cat’s size, where it hangs out, and its coloration: Is the cat brown like wood or darker? You may not need this in your cat poster, but that is a <em>decision</em>.</p><p>When the stakes are higher (though perhaps no stakes are higher than one’s missing cat), the decisions about the details become even more crucial. Humans have an ever-decreasing reserve of patience and attention. Information environments are flooded with information people do not pay attention to at all, which is preferable to information they pay just minimal attention to. If you have a limited window of opportunity to convey information through noise, every aspect of design ought to be considered against the message you intend to send and how efficiently it is transmitted.</p><p>Carefully navigated simplification is at the heart of good design. AI itself is anchored in a human desire for <em>simplification</em> of the process of communicating ideas and information. AI is rooted in metaphors of reduction, generating simplistic, rather than <em>simple,</em> formats for information delivery. AI relies on activating the socially constructed comprehension of high-level information: understanding what a building looks like, as opposed to what a specific building looks like—understanding that a cat poster has a cat photo, but neglecting to pay attention to the specific cat.</p><p>When AI is used as a tool to simplify complex (or even specific) imaginaries, the designer hands over enormous power and opportunity. This can alienate us from grappling with the loss of whatever AI replaces. This is often described in terms of labor—fairly! I am all for people keeping jobs. But a missing job is more than a dismissible number on a jobs chart. The job represents a series of human decisions that will no longer be <em>decided, </em>but <em>assumed.</em></p><p>What do we lose when we hand power over design decisions to algorithmic conjecture?</p><h2 id="what-we-trade-decisions-for">What We Trade Decisions For</h2><p>By automating these decisions, we replace them with references to generalized, previously established, socially constructed imaginaries.</p><ul><li><strong>Generalized</strong>, in that the AI relies on plausibility rather than specificity: does this look like the background of an architectural schematic? Does this look like a piece of functional code? Does this look like a cat poster?</li><li><strong>Previously Established,</strong> because it has trained on a library of images, code snippets, or writing that already exists, and extends the patterns found in the dataset rather than uncovering new ideas.</li><li><strong>Socially constructed,</strong> in that the designer using AI often relies on genre tropes clearly understood by 'most' people, which is why the AI produced them in the first place.</li></ul><p>Now, arguably, this might have a place. It may not matter, to the people looking at an architectural schematic, that the trees in the courtyard are a native kind of tree or that the silhouettes have a reasonable number of arms. They create a reasonable facsimile of detail in which a person does not need to research what kind of tree might grow best in this courtyard. The issue is that somebody, somewhere, has to <em>decide not to decide</em>. The decision <em>not to decide, </em>paradoxically, requires enormous expertise. The cultivation of that expertise comes through the experience of making decisions.</p><p>The decision <em>not to decide</em> requires the skill to imagine what is not in front of you, and the recognition that the people you design for will either not care much, or will let their imagination drift, when something is <em>not</em> presented or is presented only in the abstract. Without this experience, the imagination of what AI <em>can</em> do closes off how we might create work <em>without</em> AI. We can automate a decision when we <em>know</em> we don’t need to consider it.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/06/HfG_Reshape_2025-HighRes@eignerframes_DSC6428.jpg" class="kg-image" alt="The Decision Not to Decide" loading="lazy"><figcaption><span style="white-space: pre-wrap;">Eryk Salvaggio in front of an image of Fordite at the Re Shape AI Forum, from a discussion on adversarial AI practices excised from this transcript. Image by Stefan Eigner, </span><a href="https://www.eignerframes.de/?ref=mail.cyberneticforests.com"><span style="white-space: pre-wrap;">https://www.eignerframes.de/</span></a></figcaption></figure><h2 id="plausible-perspectives">Plausible Perspectives</h2><p>At a recent conference, I watched a presentation by an AI lead at a major tech company. 
He was using a pipeline of generative AI to write texts about speculative design, following a standardized futurist playbook.</p><p>He had created an RSS feed about technology and design to identify what futurists call "signals": what happened today, and what future might it lead to? The first part of his AI pipeline skimmed and summarized that day’s news feed.</p><p>The designer had linked that to a chatbot for which he had created a series of personas. Each persona reflected an invented person, including details such as career status, line of work, gender, nationality, and race. The designer created a system in which the text was generated from the "perspective" of those personas. He suggested this was a way to build diversity into the speculative design imagination. By asking the model to produce text from combinations of specificities, he argued, he made more realistic representations of marginalized people and points of view (the designer himself came from a region of the world under-represented in AI and design).</p>
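<p>Roughly, such a pipeline looks like the sketch below. This is not the presenter’s actual code: the signals, the personas, and the generate() helper standing in for whatever model he used are all invented for illustration. Notice that the "perspective" amounts to little more than a sentence prepended to the prompt:</p><pre><code># A rough, invented sketch of the kind of persona pipeline described above.

def generate(prompt):
    # Placeholder for a call to a language model; a real model would return
    # statistically plausible text conditioned on the prompt.
    return f"[model output for a prompt of {len(prompt)} characters]"

# Step 1: "signals" skimmed from the day's news feed (hard-coded for this sketch).
signals = "a headline about sensors; a headline about housing; a headline about chips"

# Step 2: invented personas, the identities the model is asked to simulate.
personas = [
    {"name": "Amara", "career": "urban planner", "nationality": "Kenyan"},
    {"name": "Joon", "career": "nurse", "nationality": "South Korean"},
]

# Step 3: one proposal per persona. The "perspective" is just a line of text
# in the prompt; every substantive decision is deferred to the model's averages.
for p in personas:
    prompt = (
        f"You are {p['name']}, a {p['nationality']} {p['career']}. "
        f"Today's signals: {signals}. "
        "Write a short speculative design proposal from your perspective."
    )
    print(generate(prompt))
</code></pre>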
<p>I found the entire enterprise to be earnest but philosophically misguided. The designer had removed human decision-making from the pipeline. He had deferred instead to the Large Language Model’s statistical representation of decisions people with various experiences <em>might</em> make. To reiterate, by automating these decisions, he had replaced them with references to <em>generalized, previously established, socially constructed</em> imaginaries, centering the resulting hypothesis on the <em>automated imagination</em> of often marginalized people.</p><p>The process was fascinating and troubling. The resulting speculative design proposals, ostensibly at the core of this project, were about as uninteresting as you might expect. The view counts on the posts suggested nobody was paying much attention to them. But this idea of automated representation – that people’s identities can be modeled and predicted in ways that displace their actual experiences – saturates the tech industry’s AI promises. It is there in discussions of AI for governance, for art, for design. </p><p>Low intentionality has low stakes in low-stakes design, and people have made bad posters forever. However, low intentionality is a side effect of laziness, of a lack of attention or interest in specific details. <em>Details, </em>though, are at the core of design as a practice. Low intentionality poses enormous risks in scaled software and applications, and especially in policy and systems design.</p><p>Recently, a health report from the Make America Healthy Again commission, led in the US by Robert F. Kennedy Jr., was found to have <a href="https://www.reuters.com/business/healthcare-pharmaceuticals/trump-administration-report-us-child-health-cited-nonexistent-studies-media-2025-05-30/?ref=mail.cyberneticforests.com">made up citations</a> in a pattern indicating the use of a Large Language Model. This is telling: it was policy in which evidence was not really a factor. The use of AI to write the report signaled, paradoxically, high intentionality in the <em>gesture</em> of publishing a report, but low intentionality in considering the details of scientific evidence.</p><p>There’s a false sense among the AI booster community that AI can generate entire products without the process of thought that goes into them. This can be images, code, or text. But images, code, and text are pieces of larger processes, and they reflect the deliberation and attention to detail that ensure those larger processes function as intended.</p><p>So, what, to a designer, is AI? I propose that it is a tool that can be implemented cautiously, with an awareness that it is substituting for decision-making. At best, AI can fill in details that don’t matter, things that knowledgeable people can discern are ignorable.</p><p>The greater the degree to which your product design pipeline is drawn from AI, the more ignorable it will be. The more you remove elements of decision-making, the less intentional the work will be. The more dependent upon these decisions a system is, the more important those decisions are. In handing away power over these decisions, we need to consider the weight of those decisions, not least of which is the decision not to decide.</p><hr><h2 id="this-week-in-tech-policy-press">This Week in Tech Policy Press</h2><h3 id="musk-ai-and-the-weaponization-of-administrative-error">Musk, AI, and the Weaponization of 'Administrative Error'</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/06/81cb75c9f4cb4c5737a810a71eff66c2750a5262-1200x675.png" class="kg-image" alt="The Decision Not to Decide" loading="lazy"><figcaption><span style="white-space: pre-wrap;">Elon Musk and President Donald Trump held a press conference to announce Musk's departure from DOGE, Friday, May 30, 2025, in the Oval Office. (Official White House </span><a href="https://commons.wikimedia.org/wiki/File:P20250530MR-1409_President_Donald_Trump_participates_in_a_press_conference_with_departing_DOGE_adviser_Elon_Musk.jpg?ref=mail.cyberneticforests.com#/media/File:P20250530MR-1409_President_Donald_Trump_participates_in_a_press_conference_with_departing_DOGE_adviser_Elon_Musk.jpg"><span style="white-space: pre-wrap;">photo</span></a><span style="white-space: pre-wrap;"> by Molly Riley)</span></figcaption></figure><p>With Musk and Trump in a post-DOGE meltdown, it’s tempting to think this marks some end for DOGE’s AI takeover of the federal government. But as I wrote this week (before the social media wars), Elon Musk (and "AI") are best understood as accountability shields. With his departure, the AI takeover continues, but implementation shifts to Palantir, founded by Peter Thiel, friend of Musk and JD Vance.</p><p>The Trump-Musk breakdown adds to destabilization as the heart of Trumpism: creating conditions through which "administrative" or "programming" or "formatting" errors, or "rogue employees" like Musk or any other number of fired admins, are always there to blame for any crisis. 
These crises, and the distancing Trump builds into them, ultimately give Trump a unique form of political leverage.</p><div class="kg-card kg-button-card kg-align-center"><a href="https://www.techpolicy.press/musk-ai-and-the-weaponization-of-administrative-error/?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Read the Piece in Tech Policy Press</a></div><p></p>Elon Musk's job has already been automated - Blood in the Machinehttps://www.bloodinthemachine.com/p/elon-musks-job-has-already-been-automated2025-06-05T23:46:50.000Z<p>The thing that will stick in my head most about the (first? <a href="https://www.theguardian.com/us-news/live/2025/jun/05/donald-trump-travel-ban-elon-musk-joe-biden-friedrich-merz-us-politics-live-news-updates">last</a>??) “Elon in the White House” chapter, now that it is ostensibly drawing to a close, won’t be the more headline-generating carnival antics—not Trump and Musk hawking Teslas on the White House lawn, not Musk hosting a press conference in the Oval Office while his son taunted the president—but a handful of lines that Speaker of the House Mike Johnson liked to trot out in the peak DC Elon days of March: </p><p>“Elon has cracked the code,” Johnson said as part of a sound bite he delivered a few times in press conferences and <a href="https://www.facebook.com/watch/?v=552802267787956">on Fox talk shows</a>. Whereas previously the deep state was hidden, Johnson said, Musk had “created these algorithms that are constantly crawling through the data,” finding fraud and waste. He made a little spidery gesture with his hand to emphasize the point. “As he told me in his office, the data doesn’t lie.”</p><div id="youtube2-xTj0huJrLgo" class="youtube-wrap" data-attrs="{"videoId":"xTj0huJrLgo","startTime":"69s","endTime":null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/xTj0huJrLgo?start=69s&rel=0&autoplay=0&showinfo=0&enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>This was all disingenuous of course. You did not need AI-powered algorithms run by the CEO of Tesla to look up line items in the USAID budget, or to then declare the expenses you found part of a Democratic deep state conspiracy. I am aware of no legitimate fraud that was uncovered by DOGE. But that idea—that a heroic tech titan was turning his entrepreneurial acumen to the business of modernizing the government, rescuing it from bureaucratic capture with cutting edge AI technologies—was extremely useful to the GOP, for a while. </p><p>It empowered DOGE to <a href="https://www.bloodinthemachine.com/p/the-federal-tech-workers-facing-down">do mass firings</a>, gave the GOP an air of being future-forward, and helped <a href="https://www.bloodinthemachine.com/p/whats-really-behind-elon-musk-and">the Trump administration dodge accountability.</a> It was a big job. Sadly for Elon, after successfully creating the framework through which the GOP can accomplish those goals without him, he has been pushed out in ignominy. He lost his job, and has, you might even say, been replaced by his own AI.</p><p><em>A quick message: BLOOD IN THE MACHINE is 100% reader-supported and made possible by my exceptional paying subscribers. I’m able to keep the vast majority of my work free to read and open to all thanks to that support. 
If you can, for the cost of a coffee a month, or, uh, a coffee table book a year, consider helping me keep this thing running. Paid subscribers also get access to the Critical AI report section beyond the paywall after the article, where we’ll dive into the Washington Post’s new AI editor, look at a case of Potemkin AI, aka a $1.5 billion startup whose "AI" was really just a bunch of humans in a trenchcoat, and the impacts of AI on teachers, according to teachers. Thanks everyone. Onwards.</em></p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>There are clearly many factors leading to Elon’s exit from the White House, as his 5-month term as a special government adviser is over. There’s the dire state of some of his companies, notably Tesla, whose reputation and stock price alike have been sullied by widespread backlash to his political activities; Musk’s own story is that he’s returning to focus on them. (Whatever else Elon’s exit is, it’s a victory for <a href="https://www.bloodinthemachine.com/p/get-your-signal-group-chats-together">organizers in the Tesla Takedown movement</a>.) There are also the widely reported tensions between Trumpworld and Musk: In one particularly juicy anecdote, Treasury Secretary Scott Bessent and Musk are said to have clashed so heatedly over DOGE that they <a href="https://www.newsweek.com/elon-musk-scott-bessent-physical-confrontation-2079285">got into a shoving match</a>, and now Elon is literally <a href="https://www.vanityfair.com/news/story/i-am-not-taking-drugs-insists-black-eyed-elon-musk">leaving DC with a black eye</a>. A lot of Trump insiders were tired of Elon’s unpredictability and <a href="https://futurism.com/trump-officials-calling-musk-crazy-uncle-elon">bad jokes</a>, and maybe <a href="https://www.nytimes.com/2025/05/30/us/elon-musk-drugs-children-trump.html">all the drugs</a>, and they wanted him out. </p><p>And all that was <em>before</em> Trump and Musk <a href="https://www.nytimes.com/live/2025/06/05/us/trump-elon-musk">went nuclear on each other</a>, with Musk taking to X to call Trump’s signature bill "disgusting" and Trump saying he was "disappointed in Elon" at a press conference and Musk posting that Trump is in the Epstein files and Trump threatening to cancel Musk’s billions in government contracts and who knows what else by the time you read this. Until <em>then</em>, a lot of the discussion had focused on how much power Musk would retain in Washington, and how much damage he inflicted over the last five months.</p><p>Analysts calculated that while DOGE claims it has cut $160 billion from the budget by cancelling programs and laying off workers, it has also <a href="https://www.cbsnews.com/news/doge-cuts-cost-135-billion-analysis-elon-musk-department-of-government-efficiency/">incurred $135 billion in costs</a>, via lost productivity, bungled layoffs, and damage done to digital systems and infrastructure; to say nothing of the mounting legal costs related to allegations of illegal firings. What DOGE has "saved" in labor costs, in other words, has all but been cancelled out by new costs incurred. 
Dismantling USAID and <a href="https://www.kff.org/global-health-policy/fact-sheet/the-trump-administrations-foreign-aid-review-status-of-pepfar/">cutting off PEPFAR</a>, meanwhile, have <a href="https://www.npr.org/2025/05/28/nx-s1-5413322/aid-groups-say-usaid-cuts-are-already-having-deadly-consequences">almost certainly led to tens of thousands of deaths</a> by preventing people who relied on agency-funded medical clinics from getting care. And the layoffs have upended and ruined lives.</p><p>And whether or not this is the end of the road for the Trump-Musk alliance, we do have to understand that the damage is continuing and will continue to be done via DOGE, via those frameworks Musk set in motion. That, basically, Elon’s job has been automated<strong>.</strong> After he put in all that work for the GOP, donating to Trump’s campaign, then validating DOGE’s mission and even taking the flak for it, and accomplishing some key political goals—well, now Trump and the GOP don’t need him anymore. Especially if he’s going to be a pain in the ass around the White House, <a href="https://apnews.com/article/wisconsin-supreme-court-elon-musk-81f71cdda271827ae281a77072a26bad">lose them elections in Wisconsin</a>, and get in the way of passing their tax cuts. The system he hyped and set in motion can function without him—you might even venture to say that Elon has lost his gig to DOGE’s AI—or at least a fantasy of AI—that he helped construct himself. </p><p>One of Musk’s top deputies in DOGE, Thomas Shedd, has <a href="https://www.bloodinthemachine.com/p/doges-ai-first-strategist-is-now">thrust an "AI-first strategy"</a> onto a number of federal agencies, an effort Musk has pushed. Musk’s a big believer in the power of AI to automate jobs, after all: Last year, <a href="https://www.cnn.com/2024/05/23/tech/elon-musk-ai-your-job">at a tech conference</a>, Musk said that pretty soon, "Probably none of us will have a job." </p><p>Many on the right have of course long dreamed of downsizing the federal workforce, ridding offices of civil servants, and installing more partisan ideologues. In the recent <a href="https://www.newyorker.com/magazine/2025/06/09/curtis-yarvin-profile">New Yorker profile of the tech-right thoughtfluencer Curtis Yarvin</a>, Ava Kofman notes that, "In a 2021 appearance on a far-right podcast, Vice-President J. D. Vance… cited Yarvin when suggesting that a future Trump Administration 'fire every single mid-level bureaucrat, every civil servant in the administrative state, replace them with our people,' and ignore the courts if they objected." This was also an arch goal of the famed Project 2025 policy document that has animated much of Trump’s early agenda. Musk and his DOGE project made it possible. </p><div><hr></div><p><a href="https://www.bloodinthemachine.com/p/whats-really-behind-elon-musk-and">What's really behind Elon Musk and DOGE's AI schemes</a></p><div><hr></div><p>This was Musk’s job: to use his tech billionaire mythos to give technocratic cover for an expansive and widely unpopular political project. To provide the spark, the justification, and the narrative to render the act of firing tens of thousands of federal employees if not palatable, then at least thinkable and actionable. Under DOGE, this was no mere heartless mass layoff of public workers, no, it was an "AI-first strategy" that was necessary to right-size the government and run it efficiently, like a business. Musk’s job was to imbue the proceedings with a sense of futurity, to make a show of promoting AI tools that are <a href="https://www.theatlantic.com/technology/archive/2025/03/gsa-chat-doge-ai/681987/">supposed to replace government workers</a>—and to impart the logic that federal offices are to be hollowed out, mined for data, and automated, onto DOGE and the Republican party.</p><p>Russ Vought, the head of the Office of Management and Budget and an author of the Project 2025 document, had made it clear long before DOGE that terminating career federal workers was a priority. This may be why, unlike other White House officials who butted heads with Musk, Vought got on board with his project. </p><p>"Musk has helped to create a permission structure for the kind of cuts that Vought and his fellow budget hawks have long dreamed of making," <em>Politico</em> noted in an article about <a href="https://www.politico.com/news/2025/03/24/inside-elon-musk-and-russ-voughts-quiet-alliance-00243290">the "quiet alliance" between the two men</a>. Now Vought is <a href="https://www.wsj.com/politics/policy/doge-russ-vought-trump-elon-musk-0a1bbeb4">largely seen as Musk's successor at DOGE</a>, picking up where Elon left off. He’s already taken over as the acting head of the gutted Consumer Financial Protection Bureau (CFPB), effectively shutting it down. 
And DOGE is, <a href="https://www.wired.com/story/doge-elon-musk-really-not-leaving/">per WIRED</a>, who interviewed a number of government employees impacted by the agency, “busier than ever.”</p><p>Readers of this newsletter know that automation often is <a href="https://www.bloodinthemachine.com/p/whats-really-behind-elon-musk-and">not about efficiency at all, but control</a>—and that automation technology, AI or otherwise, doesn’t actually have to work to be activated by management. Automation is a story. It’s one of the most powerful permission structure-generators for layoffs out there, and one of the most reliable ways for bosses to consolidate control. Russ Vought and JD Vance don’t much care whether the AI chatbots DOGE is touting in the federal government can replace jobs effectively or not—their goal is to winnow non-loyal federal workers from the government, and if the logic of “AI-first” or “automation” help them do that, so be it. </p><p>Musk conceived and launched DOGE, imbued it with its Silicon Valley-adjacent operating mythology, made himself its avatar, and animated its mission. He succeeded in firing thousands of people, hollowing out government agencies, and opening up departments to data mining operations. That operation is now free to be run by lower-profile Trumpworld denizens like Vought, where, arguably, it can do just as much if not more damage without attracting the headlines and attention it otherwise would have with Musk’s involvement. </p><p>And so, that will likely be his legacy: a semi-autonomous DOGE, empowered to foist its “AI-first” logic on departments to justify layoffs and cuts, to be replaced by brittle AI systems operated by Silicon Valley firms. Now that Musk is out, the baton can and will be picked up by Vance and Vought and whoever else needs to use it. At least in its efforts to carve up the federal government, the GOP doesn’t need Musk anymore—in fact, he may be a liability. (There’s a reason that most oligarchs don’t run around with chainsaws onstage taking open delight in the misery of others—people start to fucking hate them.)</p><p>No, they just need the made-up algorithms they told everyone Elon created, the ones that are “constantly crawling through the data.” In reality, of course, those algorithms are just a bunch of DOGE guys like Big Balls ransacking federal offices and making lists of people to fire. </p><p>“Was it all bullshit?” Trump asked his advisers about DOGE’s project recently, per the <a href="https://www.wsj.com/politics/policy/trump-elon-musk-relationship-64773ad1?gaa_at=eafs&gaa_n=ASWzDAgFX4E26MXm3avqMAJCatGTdkacy_fNoJDz_tvfAaHM2iQOHVlAY_HhPYyWiMI%3D&gaa_ts=6841f7b0&gaa_sig=OWrKv28yatDHIY5ez5p60KZZJ6Zp-VAmhL5pnyNPYgjju-ulmPTwQWkbKdiFm4vNjS1kjXype-5PexqrxGNB-g%3D%3D">Wall Street Journal</a>. And of course it is. Trump doesn’t understand, as some of his staffers do, as Vought does, that a lot of automation is exactly that—a bullshit story that serves management nonetheless. They may not get the $2 trillion in cuts Musk promised, but they are consolidating control of the bureaucracy—tearing up privacy protections and ramping up surveillance in the process—and reshaping it in their image. 
Musk may be out of the picture, but he leaves behind a powerful imperative that the party can exploit long after his gamer chair has been wheeled out of the White House.</p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h1>CRITICAL AI REPORT 6/5/25</h1><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/0aad0506-3738-4f96-82ba-6d8301c889af_1631x1155.png" alt=""></figure><h2>How to resist AI in 2025</h2><p>It can be a daunting question, to say the least. Big tech is all-in on AI, firms like OpenAI have amassed historic amounts of venture capital, and Silicon Valley AI products are inundating and often eroding our institutions, our workplaces, our daily lives. So how do we fight back? </p><p>The AI Now Institute—where I hold a journalist-in-residence honorific—issued its <a href="https://ainowinstitute.org/publications/research/ai-now-2025-landscape-report">2025 landscape report this week</a>, and I could not recommend spending some time with it heartily enough. It does a deft job of breaking down the key harms inflicted by AI companies, and the biggest threats to society if the current trajectory Silicon Valley is taking holds. </p><p>The report, parts of which I reviewed, and parts of which I contributed to, explains how AI is entrenching power for big tech and the oligarch class, speeding deregulation, dislodging human expertise, and threatening workers—and how there's still time to refuse all that. It delves into the recent history of "AGI" and how it’s wielded by AI companies—a topic I investigated at length <a href="https://ainowinstitute.org/publications/ai-generated-business">in my last AI Now report</a>, too. </p><p>From the executive summary: </p><blockquote><p><strong>it’s not just market power we need to be concerned with</strong>: These tech oligarchs are counting on a wholesale rewriting of our social and economic foundations, using AI as the justification. From breaking apart the US federal government and raiding citizen data under the guise of efficiency, to redesigning workflows to devalue human labor and creativity so they are AI-ready, to redirecting our entire energy infrastructure to prioritize their technology over people’s basic needs, the vision promulgated by tech oligarchs requires, as a foundation, the unraveling of core social, political, and economic fabrics.</p><p>Across our information ecosystem, from science to education, healthcare, culture, and art, AI is being positioned as a disruptive new infrastructure and a mediating force. 
<strong>In truth, though, it rehashes an old playbook, helicoptering in solutions built on the extraction of expertise and value from all corners of society</strong>—solutions that always, eventually, amount to the further degradation of life for the most marginalized among us.</p></blockquote><p>The report provides a roadmap for labor, community organizers, and policymakers to take on AI firms. I’m biased, sure, but it’s absolutely worth a read. </p><h2>The Washington Post seeks to pair cheap content with an “AI writing coach”</h2><p>It’s rare that a corporation comes along and announces an effort openly intended to deskill workers with technology, but, well, Jeff Bezos’s Trump-era Washington Post </p>
<p>
<a href="https://www.bloodinthemachine.com/p/elon-musks-job-has-already-been-automated">
Read more
</a>
</p>
Democracy & the AI Question - Cybernetic Forests - 2025-06-01T11:02:06.000Z<img src="https://mail.cyberneticforests.com/content/images/2025/05/image0-1.jpeg" alt="Democracy & the AI Question"><p>Remarks from an online discussion during <em>L’IA en question, questions à l’IA, </em>a two-day conference at the Centre Pompidou, Paris, on May 24/25. The event focused on debates about the role of AI in contemporary society. I spoke on the AI and democracy panel, alongside the "against" panel with Charleyne Biondi, Olivier Alexandre, and Gaspard Koenig, moderated by Stéphan-Eloïse Gras. </p><p>Unfortunately, the remarks as prepared were not delivered in full due to the time constraints of sequential translation.</p><hr><p><strong>Q: We often hear that AI is helping us to make better-informed decisions, that it is helpful to create policies and drive strategies. But is AI really helping democracy?</strong></p><p>That depends on what we mean by <em>AI</em>. The best use of AI is advanced data analytics. In that case, we collect data carefully and tailor it to the questions we seek to answer. Then, the data provides another layer of insight into a network of perspectives on the challenge we are addressing. This process is called data science; and to be clear, we have often done even <em>this</em> quite poorly.</p><p>But after multiple failed predictions of election outcomes, biased research results, and so on, we were beginning to learn how to frame data collection better: focusing it around specific inquiries, ensuring our data collection matched our goals, testing the assumptions behind our questions, and matching our claims to the resulting outcomes. We began to accept that data-driven approaches had limits. This mindfulness of the limits of data as a window into the world provided a framework for better uses of data analytics, which sometimes used machine learning.</p><p>Today's use of AI is vastly different. <em>Generative</em> AI is not about using carefully curated, constrained datasets to analyze data. Consider, for example, Large Language Models. These models are, by definition, large. They are trained on data from billions of sources, connecting words based on frequency, sometimes calibrated to specific kinds of text, for example, to resemble answering a question.</p><p>But they do not contain any insight into the nature of those words, nor do they comprehend the world this language describes. The industry designs them to produce <em>plausible language</em>. Plausible language means that a human reader can look at it and decide that a thinking person has written it. But plausibility does not mean <em>truth</em> or <em>accuracy</em>. I can tell you many things that sound like convincing language – I could tell you statistics and give you citations. But they may be fake. That is the difference between plausibility and simple, approximate accuracy.</p><p>So, suppose we choose to use large language models for decision-making at the scale required of politics. In that case, we are automating bad data science. We are not building tools to answer specific questions. We are using general tools and <em>pretending</em> they can answer specific questions, based on how plausibly they craft a sentence.</p><p>Furthermore, there are many ways to control and manipulate these AI outputs. For example, there is the "system prompt." This prompt is a set of instructions to the program that steers the language of its responses. 
A designer, such as a tech company, can use this system prompt to bend the model's output toward any decision it wants. Consider, for example, that OpenAI's early image generation model, DALL-E, would threaten to block users for requesting images of people of the same gender kissing one another. That political decision was invisible to most people: we can't see what controls they are imposing.</p><p>As such, Large Language Models pose a real danger to political deliberation in that they are offered to us as if they are "objective" or even "intelligent." They reflect biases, of course. However, tech companies can also invisibly manipulate the generation of this text. This manipulation is, of course, present in <em>all</em> media. However, we rarely had a regime insist, for example, that network television should replace government.</p><p>Today, states are turning to a corporate-owned media system called AI and deploying it in ways that undermine or replace political decision-making about laws and policy. In that light, this is not helpful at all. It is a fundamentally corrosive concentration of power into the hands of technicians.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/05/IMG_3500_VSCO.jpeg" class="kg-image" alt="Democracy & the AI Question" loading="lazy"><figcaption><span style="white-space: pre-wrap;">Inside the Centre Pompidou</span></figcaption></figure><p><strong>How is the current vision and organization of AI affecting the US administration? Is it reinforcing or weakening democracy?</strong></p><p>Elon Musk's DOGE operation has accessed agencies across the federal government by seizing a very limited authority to build better websites. The Obama administration initially created a small team over a decade ago to help government agencies streamline online interfaces and create better government tools.</p><p>Under this authority for the civil service's "digital transformation," DOGE has grabbed the authority to fire hundreds of thousands of government workers and close entire offices, such as USAID. These firings and layoffs are often done illegally, and much of this activity is being challenged in the courts.</p><p>DOGE has replaced these employees with Large Language Models in some cases, and strives to do so completely. That means that Silicon Valley products are now in a position of political decision-making, seizing the authority of Congress, which is elected to represent the people in how it distributes tax money. It is really inaccurate even to say that "the AI" is firing people or blocking funds. AI is a screen used to distance the people with real power from accountability.</p><p>We cannot forget that the people who <em>built</em> the AI are <em>telling</em> the AI how to act and how to frame its responses. We have seen, for example, that Elon Musk's AI, Grok, was responding to many queries on the social media network X with unrelated arguments about white genocide. 
Someone might ask about a baseball player and would get a response about how the baseball player isn't talking enough about white genocide. This is because of the calibration of the system prompt. It's a decision made to change how the model responds.</p><p>Most likely Elon Musk, a white South African, decided to advocate for white South Africans using the "white genocide" talking point popular with the right. Emphasizing these responses was a political decision imposed on the model, and it should not surprise us. Every LLM has system prompts, and every company decides what they say. We could just as easily make a model that replies with subtle persuasion toward casting a certain vote, or firing a certain kind of employee, and it would be difficult for anyone to know.</p><p>In DOGE, this AI has been used to evaluate the emails federal workers must write every week to justify their jobs. The AI is fed these messages, and then it decides: even if it hallucinates or acts according to Elon Musk's system prompt, the worker can be fired. Or federal grants have been halted or canceled using a list of terms that includes words such as "bias," "communities," "under-represented," or even the word "women." This barely requires AI at all. But AI becomes a tool of obfuscation, and its abuses can be too easily hidden: AI becomes the excuse, which is its main function today in the US government. It is a tool for making excuses, a tool for plausible deniability. </p>
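<p>To see how little "AI" such a screen actually requires, here is a minimal sketch in Python (the grant titles are invented for illustration; the flag words are among those reported): a few lines of ordinary string matching, with no model involved at all.</p><pre><code># Plain keyword screening, no machine learning required.
FLAG_WORDS = ["bias", "communities", "under-represented", "women"]

# Invented grant titles, for illustration only.
grants = [
    "Measurement bias in rural water sensors",
    "Pediatric asthma outcomes in coastal communities",
    "High-efficiency turbine blade coatings",
]

for title in grants:
    hits = [word for word in FLAG_WORDS if word in title.lower()]
    status = "FLAGGED" if hits else "ok"
    print(status, "-", title, hits)
</code></pre>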
<p>So, it has been very good for consolidating power into the technical class of Silicon Valley elites, and protecting conservative politicians from the consequences of far-right policy decisions. It has done very little toward building a better democracy.</p><p><strong>How would you respond to those who claim we should replicate DOGE in other countries?</strong></p><p>I note that the abuse of power in these systems is rampant. The engineers, the technical class, become a kind of elite, as policymakers and the general public can easily be misled about how these systems work and how they arrive at decisions. The changes put in place without our complete understanding will be remarkably challenging to undo. Even when AI systems fail, the AI industry can propose itself as a solution to that failure, such as charging the government for more engineers or demanding even more access to citizens' private data. So, even a system failure can be used by the tech industry to justify further investment.</p><p>Furthermore, the entire idea at the heart of AI in government — this speeding up of government efficiency — is misguided, depending on what we aim to make "more efficient." Democracy is, as the theorist Chantal Mouffe notes, not about agreement but about the <em>process of disagreement</em> and the often uneasy accommodation of difference. Democracy is a constant process of negotiation between competing interests. And so it becomes hazardous to speak of an <em>efficiency of democracy</em> in the way Silicon Valley describes it today.</p><p>It further points to the myth of Artificial General Intelligence, or so-called "superintelligence," which the industry assures us will make just, fair, or wise decisions. But fair to whom, wise from what perspective or goal? We will never arrive at a universal agreement about who holds power or how to use it. Universal agreement is not possible in a democracy. The end of the debate, if achieved through our deference to an automated decision-maker claiming to know what is best for all of us? That is the end of politics and the end of democracy.</p><p>I often hear that different forms of AI are possible. But if they were possible in this world, we would have them. Right now, the form of AI our world has made is based on data extraction, surveillance, and the concentration of power – because that is what our current system incentivizes. If we first remake the world, we may change the way we make AI. Otherwise, we risk changing ourselves for the way AI makes the world.</p><hr><h2 id="signal-to-noise-curators-conversation">Signal to Noise: Curators Conversation</h2><p>The National Communication Museum in Melbourne has shared the video of our curators' conversation about <a href="https://ncm.org.au/exhibitions/signal-to-noise?ref=mail.cyberneticforests.com" rel="noreferrer">Signal to Noise</a>, which also functions as a walk-through of the exhibition. There's a bunch of related material online at the website as well – hope you enjoy it! It's a great tour if you can't make it to Melbourne (or if you can and want to do a deeper dive). </p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/X3-O8fnigcs?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Signal to Noise: In Conversation with the Curators | National Communication Museum (NCM)"></iframe></figure><hr>The "AI jobs apocalypse" is for the bosses - Blood in the Machine - https://www.bloodinthemachine.com/p/the-ai-jobs-apocalypse-is-for-the - 2025-05-31T01:26:42.000Z<p>Like a lot of figureheads in the AI industry, Anthropic CEO Dario Amodei says that ordinary people are not ready for the changes AI is about to unleash on the world. In a widely circulated <a href="https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic">interview with Axios</a> this week, Amodei warns we are on the brink of what his interviewers describe as a "job apocalypse" that will wipe out half of entry-level jobs and cause the unemployment rate to rise to as much as 20%. People are unprepared, Amodei says. "Most of them are unaware that this is about to happen." But before we know it, "cancer is cured, the economy grows at 10% a year, the budget is balanced—and 20% of people don't have jobs."</p><p>On Thursday, Business Insider's CEO Barbara Peng <a href="https://www.businessinsider.com/a-note-from-business-insiders-ceo?utm_source=twitter&utm_campaign=news-sf&utm_medium=social">announced</a> that the company was laying off 21% of its staff and, in the same announcement, that it was "going all-in on AI." According to my sources, in addition to hitting some reporters, the cuts largely impacted copywriters—a job that has been targeted for replacement with AI by many companies over the last two years. Peng noted that 70% of Insider employees use its enterprise AI systems, and that they're trying to get that up to 100%. Axel Springer, BI's parent company, <a href="https://www.axelspringer.com/en/ax-press-release/axel-springer-and-openai-partner-to-deepen-beneficial-use-of-ai-in-journalism">has a deal with OpenAI</a> that licenses its content to the AI company and gives it access to AI tech.</p><p>Is this a sign of Amodei's AI jobs apocalypse at hand? </p><p><em>A quick message: BLOOD IN THE MACHINE is 100% reader-supported and made possible by my brilliant paying subscribers. 
I’m able to keep the vast majority of my work free to read and open to all thanks to that support. If you can, for the cost of a coffee a month, or, uh, a coffee table book a year, consider helping me keep this thing running. Thanks everyone. Onwards.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bloodinthemachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>I guess it depends on how you define “AI jobs apocalypse.” The way that AI executives and business leaders want you to define it is something like ‘an unstoppable phenomenon in which consumer technology itself inexorably transforms the economy in a way that forces everyone to be more productive, for them’. </p><p>As such, perhaps we should pump the brakes here and look at what’s actually going on, which is more like ‘large technology firms are selling automation software to Fortune 500 companies, executives, and managers who are then deciding to use that automation technology to fire their workers or reduce their hours.’ There is nothing elemental or preordained about this. The “AI jobs apocalypse” is bosses like Barbara Peng deciding to lay off reporters and copywriters while highlighting their commitment to AI as they do so. </p><p>From what I’ve been told, “AI” isn’t really making much of an impact on BI reporters’ daily working lives, though they do have access to Grammarly, editing software that predates ChatGPT and generative AI products. But traffic at Business Insider is down, just like it is at many, many news orgs right now, in part because discovery from search is down—because ChatGPT and Google AI Overviews have buried links to their stories. And there’s an incentive to put BI’s partnership with OpenAI in a positive light. </p><p>So instead of, say, pushing back on the way tech companies are taking news orgs’ work and reproducing it on their platforms via AI snippets and overviews—capturing user loyalty and ad revenue in the process—most media bosses have decided to partner with those companies and to, say, fire copywriters in the wake of declining revenue streams. This may be somewhat reductive, but these are all <em>human decisions</em>, even when they are made from a menu of all-bad options. And management, more often than not, will align with the interests of the money—represented here by the AI companies—over their workers. Same as it ever was. </p><p>Look, I know that this can <em>feel </em>apocalyptic and insurmountable. Hell, <a href="https://www.bloodinthemachine.com/p/so-the-la-times-replaced-me-with">I was laid off by a media company</a> that then sought to increase its value by adding AI tools to columns like those I used to write. But I cannot emphasize enough that this is exactly how the AI companies want us all to think. That AI, in the precise form that they are selling it, is inevitable. Adapt or be left behind. The economy will be totally transformed. Get on board, or lose your competitive advantage. Be stranded when the AI jobs apocalypse hits. </p><p>But of course there is no AI jobs <em>apocalypse—</em>an apocalypse is catastrophic, terminal, predetermined—but there <em>are</em> bosses with great new incentives/justifications for firing people, for cutting costs, for speeding up work. 
There is, to split hairs for a minute, <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now">a real AI jobs </a><em><a href="https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now">crisis</a></em>, but that crisis is born of executives like Peng, CEOs like Duolingo’s Luis von Ahn and Klarna’s Sebastian Siemiatkowski all buying what Amodei (and Sam Altman, and the rest of the new AI enthusetariat) is selling. Amodei and the rest are pushing not just automation tools, but an entire new permission structure for enacting that job automation—and a framework that presents the whole phenomenon as outside their control. </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff99fd5ff-c037-4ea7-9ca1-840ed5c8d59a_2048x1365.jpeg" alt=""><figcaption class="image-caption">Dario Amodei at TechCrunch Disrupt in 2023. Image: <a href="https://flickr.com/photos/techcrunch/53202070830/in/photolist-2p4gVSg-2p4gPpY-2p4hwab-2p4fNfn-2p4fNfc-2p4fNbV-2p4gNV6-2p4hw9z-2p4hw8h-2p4gPtv-2p4fNcr-2p4fNcS-2p4hw99-2p4gPrM-2p4hw9u-2p4gPrw-2p4fNbK">Flickr</a>. CC 2.0</figcaption></figure></div><p>And man, is it working! Just look at how Jim VandeHei and Mike Allen, the CEO and executive editor of Axios, respectively, <em>absolutely lap up</em> Amodei’s pronouncements. They are completely sold! The tone of the whole article is that of eager students furiously nodding along, pausing to get some agreement from [checking notes here, ah yes] Steve Bannon, before issuing the following disclosure at the bottom of the article: </p><blockquote><p><strong>Full disclosure:</strong> At Axios, we ask our managers to explain why AI won't be doing a specific job before green-lighting its approval. (Axios stories are always written and edited by humans.) Few want to admit this publicly, but every CEO is or will soon be doing this privately. Jim <a href="https://www.axios.com/2025/05/20/ai-leadership">wrote a column</a> last week explaining a few steps CEOs can take now.</p></blockquote><p>Here is an excerpt from Jim’s column, about what he’s doing at Axios to “prepare people” for <a href="https://www.axios.com/2025/05/20/ai-leadership">the age of AI</a>: </p><blockquote><p>We tell most staff they should be spending 10% or more of their day using AI to discover ways to double their performance by the end of the year. Some, like coders, should shoot for 10x-ing productivity as AI improves.</p></blockquote><p>Mr. VandeHei could not possibly illustrate my point any more thoroughly if he 10x-ed his descriptive powers with AI. The message is this: There is an AI jobs apocalypse coming, everything is going to change, and if you hope to survive it, you’re going to have to learn to be a lot more productive, for me, your boss. How a reporter is supposed to use AI to “double their performance” without generating articles outright remains undisclosed. 
That’s a lot of summarized emails.</p><p>But the mindset is prevailing, if VandeHei is to be believed: “<em>we're </em>betting [AI] approximates the hype in the next 18 months to three years,” he writes. “And so are most CEOs and top government officials we talk to, even if they're strangely silent about it in public.”</p><p>And <em>that’s</em> a real crisis, in my view! This AI automation mania pushes bosses to train the crosshairs on anything and everything that isn’t built to optimize corporate efficiency, and as a result, you get the journalism layoffs, the Duolingo cuts; you get DOGE. As I wrote a couple weeks ago about the REAL AI jobs crisis: </p><blockquote><p>The AI jobs crisis does not, <a href="https://gizmodo.com/robots-are-not-coming-for-your-job-management-is-1835127820#:~:text=Listen%3A%20'Robots'%20are%20not,on%20its%20comparatively%20superior%20merits.">as I’ve written before</a>, look like sentient programs arising all around us, inexorably replacing human jobs en masse. It’s a series of management decisions being made by executives seeking to cut labor costs and consolidate control in their organizations. The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse—it’s <a href="https://www.bloodinthemachine.com/p/the-federal-tech-workers-facing-down">DOGE firing tens of thousands of federal employees</a> while waving the banner of <a href="https://www.bloodinthemachine.com/p/doges-ai-first-strategist-is-now">“an AI-first strategy.”</a> </p></blockquote><p>If AI turns out to be able to do half of what its staunchest advocates say it can, isn’t its immense power an opportunity to decide for ourselves the kinds of jobs that we think are important for a society to have? Why are we limited to playing defense against the whims of those carrying out this AI jobs apocalypse, the executives and the managerial class? The answer is pretty simple: That’s who the AI jobs apocalypse is for! </p><p>Which is why pronouncements like Amodei’s wind up underlining how impoverished his cohort’s visions for the future really are: Here is a technology that he believes is the most transformational thing since electricity or whatever, capable of doing hundreds of millions of humans’ jobs within the next few years, and all he can suggest is that governments should “prepare” for the job loss, and maybe institute a 3% tax on AI. Altman used to talk a little bit about a universal basic income—the bare minimum for gesturing towards an interest in the lives of the losers of the AI automation era—but he doesn’t even do that anymore. Now it’s nothing, except the occasional grim suggestion that <a href="https://www.youtube.com/watch?v=A-5chPHZ18E">the social contract itself might have to be rewritten</a> in the AI companies’ favor.</p><p>Nothing clarifies the nature of these projects better than Amodei and Altman proclaiming their technologies will soon be able to do everyone’s jobs on earth—but that vast swaths of those people are probably doomed to be miserable. Not them, though. They will be rich. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bloodinthemachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p>PS I know I am maybe not helping by running a project called <a href="https://www.bloodinthemachine.com/p/did-ai-kill-your-job">AI Killed My Job</a>, but the idea for that was I’d include the ways that AI has degraded or ‘killed’ jobs beyond eliminating them, too, and also ‘The Bosses Used AI to Kill My Job’ just felt too long! I’ll share some of the stories from that project next week. </p><p>As always, thanks for reading, and hammers up. </p>An 'always on' OpenAI device is a massive backlash waiting to happen - Blood in the Machinehttps://www.bloodinthemachine.com/p/an-always-on-openai-device-is-a-massive2025-05-29T01:10:55.000Z<p>Last week, the big tech news was that <a href="https://www.bloodinthemachine.com/p/openais-desperate-quest-to-become">OpenAI had acquired Jony Ive’s design studio</a> in yet another deal worth so many billions of dollars, <a href="https://www.nytimes.com/2025/05/21/technology/openai-jony-ive-deal.html">to build</a> some vaguely defined “AI devices.” While the announcement itself was devoid of meaningful information about the apparently forthcoming products, details of OpenAI’s plans and what they’re supposed to look like soon started leaking out. </p><p>As those details leaked, basically all I could think about was how much people are going to hate these things. If an OpenAI hardware product ever does launch, there’s going to be plenty of, let’s say, rather justified hostility towards it. An OpenAI-branded device might make the public vitriol spewed at Google Glass—which was banished by backlash for a decade—tame by comparison.</p><p><em><strong><</strong>BLOOD IN THE MACHINE is 100% reader-supported and made possible by my excellent paying subscribers. I’m able to keep the vast majority of my work free to read and open to all thanks to that support—if you can, for the cost of a beer a month, or, uh, a Nintendo Switch game a year, you can help me keep this thing running. On top of that, you can access the Critical AI reports, a short installment of which you’ll find at the bottom of this article. That’s a perk for paid supporters, and today, we’ll wade into artists taking the fight to AI in Sweden, the self-driving trucking boom, and a study that reveals AI is terrible at therapy. Thanks everyone, for all you do. Onwards.<strong>/></strong></em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bloodinthemachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>According to the <a href="https://www.wsj.com/tech/ai/what-sam-altman-told-openai-about-the-secret-device-hes-making-with-jony-ive-f1384005?mod=rss_Technology">Wall Street Journal</a>, Altman told OpenAI staff that the incoming device won’t be a phone or glasses, but that it will be portable and always on, to gather data about its users’ environment, and, of course, that it will be a phenomenal success: Altman <a href="https://www.wsj.com/tech/ai/what-sam-altman-told-openai-about-the-secret-device-hes-making-with-jony-ive-f1384005?mod=rss_Technology">said</a> it will add $1 trillion to OpenAI’s value. 
He also said that the products, which he referred to as “companions,” will ship by the end of next year, and that they will sell them “faster than any company has ever shipped 100 million of something new before.”</p><p>The supply chain analyst Ming-Chi Kuo, meanwhile, <a href="https://x.com/mingchikuo/status/1925543472993321066?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1925543472993321066%7Ctwgr%5Ebb64b84995dbbf9b546871acde90453636d3e430%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.theverge.com%2Fnews%2F672533%2Fjony-ive-sam-altman-ai-device-ipod-shuffle-ai-pin">reported</a> that, according to his research, mass production is expected to begin in 2027, and that</p><blockquote><p>3. The current prototype is slightly larger than the AI Pin, with a form factor as compact and elegant as an iPod Shuffle…<br>4. One of the intended use cases is wearing the device around the neck. <br>5. It will have cameras and microphones for environmental detection, with no display functionality. <br>6. It is expected to connect to smartphones and PCs, utilizing their computing and display capabilities.</p></blockquote><p>The self-described AR/VR/AI Content Creator Ben Geskin <a href="https://x.com/BenGeskin/status/1925552927885640124">made an AI-generated design mockup</a> using the above details, which looked like this: </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8adff620-93fa-4e5a-8bb5-587884b7b138_1024x1024.jpeg" alt=""></figure></div><p>Obviously, this is not a rendering of the final product, or even a prototype; it’s one guy on social media sharing an AI-generated image (which, humorously, he watermarked). But it’s effective enough at conveying what an always-listening, always data-collecting OpenAI device would be, so it’s been passed around social media a good deal since. Surprise, people hate it. 
</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e8eaa08-f776-4f50-b4c8-c4ad16f06b82_510x689.png" alt=""></figure></div><blockquote><p>James Kelleher (@jameskelleher.pilcrow.ie), on Bluesky: “I’m a little disappointed this doesn’t have a shock collar”</p></blockquote><p>For good reason:</p><blockquote><p>Dr Abeba Birhane (@abeba.bsky.social), on Bluesky: “well, this is what silicon valley is cooking up<br><br>a device that's always on. so when a user goes about their life talking to people, everyone who comes within a certain capture radius will have their data (raw and natural conversation data) enrolled”</p></blockquote><p>Now, this is still mostly speculation; some reports are saying it will be worn like a necklace, others that Ive doesn’t want it to be a “wearable,” others just that it will be a “third device” in addition to your phone and computer etc etc etc. </p><p>But given what we do know—that it will always be on to receive data, and will necessarily contain instruments to collect that data (microphone, camera, sensors)—it is safe to assume that it will be a surveillance machine of considerable proportions. 
So that’s one strike against it right out the gate, and a valid reason for people to reject its very premise; ChatGPT collects user input to train its models already, now just imagine that everything you hear, see, or say becomes training data owned by OpenAI. </p><p>If you’re old enough to remember the debut of Google Glass in 2013, then you’re old enough to remember one of the most pronounced bouts of public backlash against a tech product in the 21st Century. Early users of Google Glass—the search giant’s augmented reality glasses—were dubbed <a href="https://techcrunch.com/2013/01/28/glassholes/">Glassholes</a> and endlessly mocked on <a href="https://x.com/StartupLJackson/status/258297418485989377?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E258297418485989377%7Ctwgr%5Eb5a2e7357cc59856d9319e825f09154a095a182a%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Ftechcrunch.com%2F2013%2F01%2F28%2Fglassholes%2F">social</a> and <a href="https://www.theatlantic.com/technology/archive/2013/04/rise-term-glasshole-explained-linguists/316015/">regular media</a>. In particular, people <a href="https://www.marketplace.org/story/2013/03/14/google-glass-end-privacy">recoiled</a> <a href="https://www.zdnet.com/article/google-glass-let-the-evil-commence/">against</a> the idea of normalizing a device that allowed them to be photographed or documented against their will. The device flopped among developers and early users, too, and that public outcry, and the sense of utter rejection of Glass it stamped into popular culture, basically banished the very concept of such a device for nearly a decade. </p><p>A lot has changed since then, of course. There’s a case to be made that courtesy of other “always on” devices like Alexa, Ring, and a decade-plus of total smartphone saturation, public tolerance for mass surveillance and tech company-orchestrated privacy violations has been chipped away. Then again, sometimes we just need a blunt reminder of how bad things have gotten to get angry all over again. Notably, Google had been involved in a <a href="https://www.theguardian.com/technology/2013/aug/14/google-gmail-users-privacy-email-lawsuit">number</a> of <a href="https://www.theguardian.com/technology/2010/may/15/google-admits-storing-private-data">privacy</a>-<a href="https://spectrum.ieee.org/google-agrees-to-20-years-of-privacy-audits">violating</a> <a href="https://www.theguardian.com/technology/2010/oct/24/google-investigated-data-privacy-breaches">scandals</a> in the run-up to Glass’s launch, which the glasses then crystalized; the device, in other words, also became something of a beacon for the discomfort, distrust, and ambient anger people had at Google in general.</p><p>It’s very easy to see the same happening to OpenAI. </p><p>And remember, Google wasn’t particularly unpopular in 2013. People had privacy concerns, but Google still carried its don’t be evil aura, and Silicon Valley was widely seen as a beneficent engine of progress. It’s a different landscape today, and a very different story with OpenAI and AI products more generally. 
</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6c22546-5066-4032-aefb-c7f676f7f468_1280x1092.jpeg" alt=""></figure></div><p><a href="https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/">Poll</a> after <a href="https://www.brookings.edu/articles/what-the-public-thinks-about-ai-and-the-implications-for-governance/">poll</a> finds that the public is more concerned than excited about AI. There’s a widespread <a href="https://www.wired.com/story/uncanny-valley-podcast-5-in-sam-altman-we-trust/">distrust of Sam Altman and OpenAI</a>. And, perhaps most importantly, there’s sustained anger at the very foundations that the modern AI industry is built on. There’s anger among artists, writers, creatives, and other workers at the way AI systems were built on their work without consent or compensation, at the way AI output is being used to degrade their livelihoods, at the way AI companies have benefitted at their expense. There’s anger over the growing energy, water, and resource costs of powering the AI systems. There’s anger among the AI safety crowd at the way the major AI companies are developing consumer products without guardrails or accountability. There’s widespread anger over the specter of potential job loss. There’s a lot of anger.</p><p>Throw a bunch of OpenAI-branded pendants—or orbs, or AI iPod shuffles, or whatever it ends up being—around early adopters’ necks, and my guess is you’ll see another serious round of public shaming, of what you might call AI-generated Glassholing. The stakes are higher; we’re not just talking about privacy, which is of course very important but often not as viscerally felt as threats to people’s livelihoods. And there are a lot of people who would no doubt see OpenAI device users as gleefully wearing their contempt for others around their necks. This is a recipe for mass backlash, in other words. </p><p>Look, the thing could be total vaporware—we might hear a couple updates on io progress from Altman here and there before it fades away, the announcement alone having done its part to keep the AI narrative rolling. It could also launch and be laughably awkward and bad, with the most obnoxious people you can imagine repeating questions too loud in the restaurant to the hunk of plastic strapped to their necks, and thus go the way of the reviled Humane Pin—which Altman also invested in, recall—all on its own. I doubt Altman and Ive will launch something that sucks so obviously; they’d probably rather not launch at all. So if the thing comes to fruition, then we really might see users boasting OpenAI pendants in the open, embracing their roles as consumer avatars of technological inequality, as proud real-world AI influencer blue X checkmarks. 
</p><p>In that case, well, in a world where <a href="https://www.bloodinthemachine.com/p/torching-the-google-car-why-the-growing">Google’s Waymos are coned and torched</a>, where Tesla dealerships are <a href="https://www.bloodinthemachine.com/p/deconstructing-the-new-american-oligarchy">firebombed for what they stand for</a>—well, things might get interesting. </p><p>In this context, it’s also important to remember that backlashes are often not only well-deserved, but hold the potential for political and social progress, too. The successful pushback against Glass, as writers like Rose Eveleth have argued, helped establish boundaries for what rights we were willing to cede to tech companies. “Google Glass is a story about human beings setting boundaries and pushing back against surveillance,” Eveleth <a href="https://www.wired.com/story/google-glass-reasonable-expectation-of-privacy/">wrote for Wired</a>, “a tale of how a giant company’s crappy product allowed us to envision a better future.” Looks like we’re going to need to tell a story like that again pretty soon. </p><h2>The CRITICAL AI REPORT: Artists take a hard line against AI, the brewing self-driving truck crisis, and why AI should never replace your therapist</h2><p>It’s not all grim news out there—artists and workers are banding together to fight for their livelihoods, and finding legally binding ways to push back on big AI. One example:</p>
<p><a href="https://www.bloodinthemachine.com/p/an-always-on-openai-device-is-a-massive">Read more</a></p>
Human Noise, AI Filters: "Mr. Watson, Come Here." - Cybernetic Forests68322145c7a61100012e29c12025-05-25T11:03:17.000Z<h3 id="remarks-from-the-artificial-intelligence-contemporary-creation-conference-24-may-2025-at-the-jeu-de-paume-paris">Remarks from the <em>Artificial Intelligence & Contemporary Creation </em>conference, 24 May 2025 at the Jeu de Paume, Paris. </h3><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text"><i><em class="italic" style="white-space: pre-wrap;">A French translation of this text is also available </em></i><a href="https://www.cyberneticforests.com/bruite?ref=mail.cyberneticforests.com" rel="noreferrer"><i><em class="italic" style="white-space: pre-wrap;">here</em></i></a><i><em class="italic" style="white-space: pre-wrap;">. </em></i></div></div><img src="https://mail.cyberneticforests.com/content/images/2025/05/Screenshot-2025-05-22-at-7.10.09-PM.png" alt="Human Noise, AI Filters: "Mr. Watson, Come Here.""><p>Thank you to Antonio Somaini for the introduction and invitation here today. </p><p>I would like to begin with a new myth of noise: the story of Thomas Watson, Alexander Graham Bell's laboratory assistant. </p><p>Even before the invention of the radio, Watson had a peculiar relationship with the sound of transmissions. In his journals, he describes himself staying awake into the night; his ear pressed to the test speaker – listening to the strange noise within the wire.</p><p>Hillel Schwartz, in a passage from his colossal work, "Making Noise," speculates on Watson's nighttime entertainment:  </p><blockquote>"Was a snap, followed by a grating sound, the aftermath of an explosion on the sun's surface? Was something like the chirping of a bird a signal from a far planet? What occult forces were just noticeably at work in the recesses of telephonic sound?" (Making noise, p. 330). </blockquote><p>I was in a similar position when I began working with AI-generated <a href="https://mail.cyberneticforests.com/gaussian-pop/" rel="noreferrer">field recordings</a> and room tone, the ambient hums of empty spaces. I noted myself intuitively scanning this ambiance for clues of what was taking place in these pockets of the latent space, trying to assess their shape and any presence within it through the tonality of the hum.</p><p>Of course, there were no rooms. No needles inscribing vibrations into the grooves of wax cylinders. No tape heads magnetizing audio strips. There was nobody in the room, because there was no room. The audio of a diffusion-based model approximates a room tone through sonic mimicry—a stochastic parrot, in its most literal sense.</p><p>For Watson, the sound on these wires was an amplification of the cosmos, fluctuations of cosmic events beyond his comprehension. A lingering effect of a fantastic expanse leaving its traces in a piece of copper and some magnets. It was a black box, too: perhaps, someday, identifiable, if only we listened well enough. </p><p>How shocking, satisfying, and strange, then, it must have been for Watson on that fated day when the sound of the cosmos did give way to a voice: the voice of a technologist, calling him by name, paired with a simple demand:</p><p><em>"Mr. Watson, come here; I want to see you."</em></p><p>Today, we are all unwitting lab assistants, collecting data for the invisible wires for the sake of being seen by the machine. 
In return, the machines repeat our names, seducing us with a similar call:</p><p><em>"User, come here; I want to see you."</em></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/05/STC-Repeater.jpg" class="kg-image" alt="Human Noise, AI Filters: Mr. Watson, Come Here." loading="lazy" width="1200" height="733" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/05/STC-Repeater.jpg 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/05/STC-Repeater.jpg 1000w, https://mail.cyberneticforests.com/content/images/2025/05/STC-Repeater.jpg 1200w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Diagram of an Underwater Telegraph Cable, showing the dense outer layer created as a defense against interference.</span></figcaption></figure><p>Any sense of awe for this noise in the wire quickly gave way to its utilitarian focus: to move the <em>world</em> out of the channel so we humans could occupy it completely. To create the perfect conditions for the technology to operate, vast expanses of wiring would need to be isolated from any possible <em>event</em>. The event might be wind rattling the line or trees cutting the wire completely. It could be overcrowding, wherein isolated voices break their barriers, creating interference known as cross-talk. Or solar radiation. Or lightning strikes.</p><p>Today, the world of the wire has dissolved into wireless networks, and what data solutions salespeople claim is "a cloud." They offer a false sensation of anti-materiality to match our emerging false sense of anti-corporeality. The data center does not exist, we are told, by the same people who tell us that intelligence does not need a body.</p><p>Despite these shifts, the struggle to define noise and signal is still lurking within the design theory of artificial intelligence systems. These systems evolved from a simple neuron: on and off, a one or a zero, a stream of light passing through the hole of a punchcard or else being stopped. </p><p>Early robots, like Grey Walter’s tortoise, moved toward light and stopped in the darkness. The zero and the one divided the world into two categories: the one, a legible kind of measurement, and the zero, an absence of legibility. This, by the way, was at the time called a <em>neuron</em>. It is a simplification to suggest that a one is a mark of a signal and a zero is a mark of noise, but a zero is absolutely an absence of an electric signal. </p><p>Social media was once a quiet observer, like the compression algorithms in a telephone wire designed to carve out and block interference. Social media measured us and made sure our streams were satisfactory. But soon, the social media wire had observed enough, and then algorithms could <em>predict </em>and <em>sculpt</em> the conditions through which our signals could be amplified. In the age of the AI-voice-in-the-wire, we come back to this: </p><p><em>"User, come here; I want to see you." </em></p><p>We are called upon to provide a signal, and we are rewarded with dopamine when we do. The strong signal travels through the social media feed; the weak signal stands still.</p><h3 id="noisy-human">Noisy Human<br></h3><p><em>"User, come here; I want to see you." </em></p><p>What does it mean to be seen? Sara Ahmed's work in "<em>The Promise of Happiness</em>" is worth referencing here: the calling out of familial shame. 
<em>We only want what is best for you</em>, in Ahmed's example, can be wielded as a means to encourage a child toward conformity and away from themselves.<em> </em></p><p>This phrase wields a coercive power in which the concern is not about your well-being or living as you are, but the need for control by those who express such a concern. To such a parent, and to the algorithms of AI, which "see" us in similar ways, the "problematic" child is not <em>seen</em> but is <em>measured, compared</em>, <em>assessed</em>, and <em>assigned</em> a <em>category</em>. The child, or the user, is less <em>seen</em> than <em>measured against, </em>with the standard of measurement being a kind of aggregated user who behaves according to the known patterns that create an algorithmic logic. If the category is appropriate, the difficult child is invited into the parent's – or platform's – hospitality. But if you betray the category or are illegible to the measurement system, you are a non-signal. You are noise.</p><p>We can start there – with a resistance to categorization – and move into critical media art. The American theorist Anna Watkins Fisher describes the position of critical media artists as a form of “<a href="https://www.dukeupress.edu/the-play-in-the-system?ref=mail.cyberneticforests.com" rel="noreferrer">parasitical resistance</a>.” She suggests that these artists act as parasites on the hospitality of tech companies and their platforms. We are constantly negotiating between resistance and complicity, testing the limits of what is permissible within a system in order to reveal or subvert the rules and boundaries of that hospitality.</p><p>AI systems create a particular strand of platform hospitality. They extend this hospitality to us through interfaces and mythologies about what the technology will do for us, all of which carefully erase the source of this power, which is often the users themselves. AI invites us into its summaries of the World Wide Web, distilling countless perspectives from a range of web pages into a single, authoritative voice. In the arts, we all know by now that the training data is taking the artist's material, packaging it, and selling it in new forms to other users.</p><p>So what can we do, as artists, to test the hospitality of AI systems? How might we become noisy humans?</p><h3 id="three-noises">Three Noises</h3><p>Noise enters, as a metaphor, into AI systems in many ways. I will name three. </p><p>First, there is the Model Architecture. This is not a metaphor: diffusion models require noise to operate. We will discuss this in some detail in a moment. </p><p>Second, there is what I call <a href="https://mail.cyberneticforests.com/slop-infrastructures-1-2/" rel="noreferrer">Slop Infrastructure</a>: the capacity of these systems to produce, at scale, an amplification of algorithmic culture that floods out specific signals in favor of plausible signals, creating a sea of noise in which authentic communication becomes nearly impossible. </p><p>Third, AI is a political structure, as seen recently with <a href="https://mail.cyberneticforests.com/anatomy-of-an-ai-coup/" rel="noreferrer">Elon Musk's DOGE takeover</a> of the American civil service. 
The AI industry pushes AI upon us in a rush to resolve the tensions of politics, a way of diminishing the vibrant, unpleasant struggle of political debate, deliberation, and consensus building.</p><p>I have written more about all of these in other <a href="https://www.techpolicy.press/author/eryk-salvaggio/?ref=mail.cyberneticforests.com" rel="noreferrer">forums</a>. But today, as this is a conference about contemporary creation, I want to speak specifically about <em>noise,</em> as a potent material in my artistic practice, in a concrete way. My interest in noise draws from the above definitions, and makes noise a powerful material for me to work with, which is ultimately a critical AI practice.</p><p><strong>Model Architecture</strong></p><p>When a diffusion model generates an image, it begins with random noise, a cascading sequence of arbitrary red, green, and blue values. These pixels are compared against what you have described in your prompt. If the diffusion model sees noise, it is designed to remove it.</p><p>It removes noise by introducing basic, high-level outlines of the images connected to your prompt. Then, it begins to refine the details in these images. Most models do this by evaluating each state of the image to an image recognition system. CLIP is one such system, a computer vision model. CLIP examines the images at each stage of generation to determine whether the image is noise—that is, an unrecognizable image—or whether the image is more like your prompt.</p><p>To illustrate this, you can look at this image of what CLIP looks for in pictures labeled <em>Australia. </em>To generate an image out of noise, the image first must resemble something that looks like this image, which is what CLIP "expects." Then, the image can pass through. CLIP, then, is a filter. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters--1-.jpg" class="kg-image" alt="Human Noise, AI Filters: "Mr. Watson, Come Here."" loading="lazy" width="960" height="540" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters--1-.jpg 600w, https://mail.cyberneticforests.com/content/images/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters--1-.jpg 960w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Diagram, from an OpenAI White Paper, of what neurons "see" in response to certain keywords. The activation for "Australia" is a hybrid koala-man figure.</span></figcaption></figure><p>If you ask for a picture of a "typical Australian person," you get koalas because you can see a koala-like structure in what CLIP recognizes.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters.jpg" class="kg-image" alt="Human Noise, AI Filters: "Mr. 
Watson, Come Here."" loading="lazy" width="960" height="540" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters.jpg 600w, https://mail.cyberneticforests.com/content/images/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters.jpg 960w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Eight images of "Australian Person" rendered in Midjourney; 3 of the 8 are koalas.</span></figcaption></figure><p>This initial generation's colorations, shape, and so on, <em>structure</em> the images from generative AI. This <em>structuring</em> is what interests me about the relationship between AI, latent spaces, and visual culture. What we're doing with AI-generated images is structuring noise through the <em>assignment of categories. </em>These categories exist in what we call the <em>latent space, </em>or, when referring to the mathematical storage of possible pixel combinations, we call it the <em>vector space</em>. But this vector space is often referred to as generating <em>infinite possibilities</em>, as if the vectors — the structure of mathematical pattern-finding — give shape to <em>possibility</em>. </p><p>This, I suggest, is a misread of what the vector space does. The vector space <em>constrains</em> the infinite possibilities of noise because infinite possibilities <em>look like noise</em> but don't look like <em>images</em>. So, we're constraining noise to the patterns of previous images in the dataset to conform to that data. Those patterns must be read as plausible images by CLIP. Plausibility comes from grafting the patterns of cats onto a blur or grafting the pattern of a dog onto that blur. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters--2-.jpg" class="kg-image" alt="Human Noise, AI Filters: "Mr. Watson, Come Here."" loading="lazy" width="960" height="540" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters--2-.jpg 600w, https://mail.cyberneticforests.com/content/images/2025/05/Salvaggio-Jeu-de-Paume-Human-Noise-AI-Filters--2-.jpg 960w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Image of noise, left; this "seed" is then used to structure the prompt "cat" (middle) and "dog" (right). Similar aspects of the seed's structure are applied to fit the patterns of "cat" and "dog" upon it, and close examination reveals similar structures (white legs in similar positions, similar patches of light on the horizon, etc).</span></figcaption></figure><p>Are we getting <em>new ways of seeing a dog</em>? Or are we seeing <em>old ways of seeing a dog</em> stretched onto new shapes? This fitting of the patterns to the structure of noise explains why the images from AI models can so often resemble strange, uninformed taxidermy: just as taxidermists of the 19th century usually had to recreate animals from hides and text descriptions and no direct experience, the AI models must work to find the bare minimum of plausibility in matching a specific user prompt to whatever random cluster of noise has been generated.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/05/ocelottaxidermy.jpg" class="kg-image" alt="Human Noise, AI Filters: "Mr. 
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/05/ocelottaxidermy.jpg" class="kg-image" alt="Two taxidermied ocelots" loading="lazy"><figcaption>Two taxidermied ocelots, from the 1800s and 1934 (courtesy Museum für Naturkunde Berlin)</figcaption></figure><p>Ultimately, these images restructure prior patterns into new forms but cannot create anything startlingly new, at least not through the vector space alone. Noise activates the vector space, gives it form, structures the patterns that come out of it.</p><p>Still, they can fail spectacularly, which is why so much of our AI art is deemed "surreal." But it has become difficult for me to view such generations as novel, on their own, per se, or to define them in any way as some form of "other intelligence" grappling with the images we provide; nor do I see some automated incarnation of the active <em>unlearning of sight</em> mastered by surrealists like André Breton. In contrast to surrealism, it's not <em>psychoanalysis</em> but <em>data analytics</em>, which means the strange arrangements of images do not really point to anything more than vectors in space, finding their way into patterns of noise, assigned by a clumsy machine that isn't even doing data analytics in a meaningful way.</p><p>Rather than surrealism or alien others, what I see is <em>bad taxidermy</em>. </p><p>What we see from diffusion models is not even an image, per se; instead, I consider it a <em>hypothesis</em>, by a machine, of what an image <em>might</em> look like, based on past patterns, <em>if structured to fit new noise arrangements</em>. This is not to say that artists cannot create something interesting with AI. Artists can naturally find new ways to make something interesting out of anything: that is what art is. So please, do not interpret me as begrudging the work of those who find compelling images in the latent space. </p><p>Rather, I am speaking of my own interest in defying the categorization inherent in the logic of these images, tied to my experience in policy research around things such as automated surveillance, and a profound discomfort with misrecognition.</p><p>My critique of AI has a specific focus: the discourse of AI companies, which obscures this logic of categorical reduction behind the language of creativity. The AI industry is saturated in spectacle. Meanwhile, research has shown that the more people understand about AI, the less impressed they are with it. </p><p>To that end, "<a href="https://www.youtube.com/watch?v=axZqWiwYxbM&ab_channel=Stereolab-Topic&ref=mail.cyberneticforests.com" rel="noreferrer"><em>L'imaginaire is our worst enemy</em></a>." The spectacle of AI, more so than the tool of AI, speaks to dreams of generalities, born from abstractions, forced together through the violent reduction of our world to tiny signals. </p><p>Ultimately, I am intrigued by noise as material. Noise does something strange when you ask AI systems to grapple with it. I have been working with the artifacts produced by AI in response to noise prompts since late 2022. But four weeks ago, I could finally ask ChatGPT 4o to create an image of Gaussian noise.
It couldn't. But it was the first time a model could try to explain why.</p><ul><li>"I wasn't able to generate the image of Gaussian noise due to an error on my end. Sorry about that. If you'd like, you can either rephrase or try a different request, and I'll jump right back in." </li></ul><p>I asked why it couldn't generate an image of Gaussian noise.</p><ul><li>"My image generation system, which leans towards visually coherent artistic or photo-like results, isn't optimized for purely mathematical or abstract visual patterns, like Gaussian noise. It tries to find meaning or form, and Gaussian noise being random pixel intensity variations with a normal distribution doesn't provide that kind of structure. When I (the I is ChatGPT) prompted it (the it is ChatGPT) to generate Gaussian noise, it (ChatGPT) probably choked, trying to reconcile that with its usual patterns of, quote, what an image should look like."</li></ul><p>So, what <em>should</em> an image look like? Well, apparently, not <em>this:</em> </p><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/05/dbe1731b-b134-47a7-95e2-f74c1ce7741a_1024x512--1-.jpg" class="kg-image" alt="Abstract output returned by Stable Diffusion when asked for an image of noise" loading="lazy"></figure><p>Because this is what happened when you asked Stable Diffusion to generate an image of noise in late 2022. In strict terms, it isn't noise, not <em>Gaussian noise</em>. Different things happen in different models. Here's a Midjourney image. </p><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/05/image0-1.png" class="kg-image" alt="Abstract output returned by Midjourney when asked for an image of noise" loading="lazy"></figure>
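<p>Neither of these is, in the strict sense, what was asked for. Actual Gaussian noise is just pixel values drawn from a normal distribution, and it takes only a few lines of Python to produce directly (a minimal sketch, assuming NumPy and Pillow):</p><pre><code class="language-python"># Gaussian noise the direct way: normally distributed pixel intensities,
# clipped to the 8-bit range and saved as an image.
import numpy as np
from PIL import Image

rng = np.random.default_rng()
noise = rng.normal(loc=128, scale=40, size=(512, 512, 3))
Image.fromarray(np.clip(noise, 0, 255).astype(np.uint8)).save("gaussian_noise.png")
</code></pre>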
<p>I get this question often, so I want to tackle it directly. I'm not generating an image of noise from the training data. I can look at the training data for some of these models, and nothing in the training data looks like this. It's not the latent space <em>representation</em> of noise. We are looking at the model starting with an image of noise and then trying to move that noise toward the prompt.</p><p>Now, if it were moving towards the image of a cat, it would create an image of a cat. CLIP would intervene and say there's a 10% likelihood that that's an image of a cat: let's clean up that noise and get it closer to a <em>representation of a cat</em> that looks like <em>a photograph of a cat</em>. What's happening here is that I'm asking for <em>noise</em>. When CLIP sees the first image generated, it says, 'Yes, that's noise.' </p><p>But there are still 25-70 steps or so. We can set that number to anything we want, but usually it's 25-70, and across those steps the model is trying to add detail to that noise. As it adds detail to that noise, CLIP starts to say, <em>that's not noise anymore</em>. Paradoxically, within a generative AI model, that is an indication to add noise to the image so that it can clean that noise up in the direction of more detail.</p><p>In other words, we're introducing a glitch. We're introducing friction around a fundamentally unstable definition of noise: the model keeps finding patterns in the noise that violate the direction of the prompt, and it essentially completes, some 70 steps later, with abstract images that end up looking like these. </p><p>Such images are, for me, an aesthetic gesture to that which resists the categorization AI requires to produce an image. Noise floods the structure, overwhelming its logic with a contradictory logic. The result is a material, which I have been incorporating into my practice in a range of ways. </p><p>I find these noise images inspiring as a symbolic form, representing what is inconceivable within the structure of the latent space. This is a photograph, arguably, of a machine failing to reconcile the paradox of defining noise against itself. In the Situationist sense, the only truly generative things machines produce are mistakes. </p><p>There is something about noise that points to its many, often paradoxical, meanings. Noise speaks to the political role of AI in our current moment. Elon Musk's DOGE is mobilizing AI in the US to eradicate the noise of democracy – to make decisions about how resources are deployed, how power flows, and to whom – but this is only the most high-profile illustration of the technosolutionist impulse to reduce what has been deemed <em>noise</em> into manageable structures. </p><p>Noise, then, is not only resisting the machinic impulse to constrain; it is the definition given to that which <em>resists being modeled</em> by this political ideology of categorization by 1 and 0. Noise is what lives beyond what the vectors are capable of regulating. In diffusion models, it stands for the uncompressible world beyond the data set, beyond the reducible signal.</p><p>Aaron Zwintscher, writing in the field of sonic studies, puts it this way in his 2019 book, <em>Noise Politics</em>: </p><blockquote>"Noise is that which always fails to come into definition. The question of noise, and who has the right to define it, is found at the center of the power struggle between succeeding generations, between hegemony and innovation. Noise is found both in the clamor of the unwashed masses and in the relentless din of progress and construction of the new. Noise is found in diversity and confrontation with the unknown, the other, and the strange. Noise is in structures of control and domination, as well as in the failure of these systems, and their inability to be holistic or totalizing. Despite these forms of noise, noise is not a consonance of opposites, but rather a troubled unity, a unity that does not synthesize without remainder." </blockquote><p>I put this idea of noise in contrast to an organizing logic of AI, what Marvin Minsky discusses in terms of <em>schemas</em>. </p><h3 id="framing-the-system">Framing the System</h3><p>A schema is the word for our expectations about the world, which help us organize it in efficient ways. We have an idea of what a birthday party is supposed to look like, so when we go to a birthday party, we have an idea of what that party will be. But when we get there, it's almost always different, because we can't predict the future, and a schema, or a frame, is not a prediction about the world.</p>
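<p>Concretely, a frame of this kind can be thought of as a small data structure. Here is a toy sketch of my own (not Minsky's notation, and not how any modern model is actually built): a stereotyped situation with default expectations, and a note about what to do when an expectation is not confirmed.</p><pre><code class="language-python"># A toy illustration of a "frame": stereotyped defaults plus a rule for mismatches.
from dataclasses import dataclass, field

@dataclass
class Frame:
    situation: str
    expectations: dict = field(default_factory=dict)  # stereotyped defaults
    on_mismatch: str = "revise the detail, or select a different frame"

birthday_party = Frame(
    situation="child's birthday party",
    expectations={"cake": True, "candles": True, "games": ["musical chairs"]},
)

# The actual party surprises us: one expectation is not confirmed.
observed = {"cake": True, "candles": False}
mismatches = {key: value for key, value in birthday_party.expectations.items()
              if key in observed and observed[key] != value}
print(mismatches, "->", birthday_party.on_mismatch)
</code></pre>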
<p>Minsky describes what would become a kind of foundational logic of AI systems, even up to this day, drawing on psychology research:</p><blockquote>"When one encounters a new situation (or makes a substantial change to one's view of a problem), one selects from memory a structure called a frame. This is a remembered framework to be adapted to fit reality by changing details as necessary. A frame is a data-structure for representing a stereotyped situation, like being in a certain kind of living room or going to a child's birthday party. Attached to each frame are several kinds of information. Some of this information is about how to use the frame. Some is about what one can expect to happen next. Some is about what to do if these expectations are not confirmed." </blockquote><p>It is a hypothesis that centers expectations but sometimes finds conflicts. This is one of the fundamental misunderstandings of how AI is presented today. We assume that the machine understands the world if there is some kind of schema. AI researchers now call this a "world model." </p><p>In contrast, we have the human. In psychoanalysis, we see schemas constantly negotiated in conflict with the overwhelming presence of information, what Cecile Malaspina calls "an overwhelming freedom of choice," wherein noise is present.</p><p>Schemas constrain the noise of the world. They reduce all the possibilities that could be present at that birthday party to a simplified, vague model based on previous experience. When we refuse to update our schemas, it can create cognitive dissonance, another kind of noise, which leads to maladaptive responses to the actual scenarios in front of us. This is foundationally the challenge we face with so many AI models: a failure to acknowledge boundaries within which to shift an understanding of their own application. </p><p>The problem with generative AI, as I see it, is this belief within the industry that arriving at the <em>correct schemas</em> means we create a tool that mirrors our human capacities. </p><p>But this idea of a "correct" schema is impossible. Schemas were never meant to be the <em>limit</em> of our experience. We intend to negotiate, challenge, discard, and adapt these schemas. To <em>arrive</em> at and <em>preserve</em> a frame in this way – as if right-clicking on the image returned to us from the prompt – is to stop too short of investigating the real world and all its complex, endless transformations: we fail to imagine a new kind of birthday party. </p><h3 id="users-use">Users <em>Use</em></h3><p>But listen: I don't want to overstate this. My point is not that we are passive users who cannot negotiate a relationship with the system. I am arguing, precisely, that this negotiation is <em>possible</em>, and that through the creative misuse of AI systems, or strategies of appropriation, subversion, or glitching, artists can model new relationships to technology that steer us away from a reliance upon the corporate affordances of the models, and therefore away from the power that these interfaces exert over users. </p><p>Noise images, in this regard, represent – at least metaphorically! – glimpses of a still-unmodeled possibility, a reminder, or even an artifact, of a computational confrontation with what remains impossible for the machine to imagine.
The model is limited, overdetermined by its program, mechanistic, and cannot re-evaluate its position regarding rendering an image of noise. It cannot shift frames or update its schemas. It is locked forever in the checkpoint of its training, and all possibilities are generated from within a frozen schema, never even aware of their relationship to what they produce. </p><p>So we return, then, to Thomas Watson, who listened so intently to the noise in the wire before being summoned into the next room. What might he have heard if he had not heard his own name? What possibilities would there have been in the failure to transmit the voice over that wire? Once it was defined as a voice transmission technology, our imagination of the listening device for the unexplainable residue of cosmic radiation became obsolete. I want to expand our imagination of what purposes these systems can serve. That's the true creative imagination: not the overdetermined structure of the mechanistic process, but the ability to stand outside definitions and imagine the possibilities that lie, latent, beyond them.</p><h3 id="moth-glitch">Moth Glitch</h3><p>I have a short amount of time left, and so I would like to show one film, a film made from these noise images in a video diffusion model and superimposed with moths – a reminder of the first computer bug, a 1947 system failure in the computers of Harvard University. The moth had gotten caught in one of the machine's relays. Unrelated, nearly two decades later, the filmmaker Stan Brakhage would reimagine the methods and materials of filmmaking by gluing dead insects he'd found at his camp directly onto strips of film – dead from exhaustion after flying endlessly into the lamp bulb the night before, an internal glitch evocative of the dopamine rewards of social media. In this film, Moth Glitch, the materiality of these AI noise glitches is paired with the failure to produce an actual moth – instead, we are left with the glitched manifestations of millions of moths, seen, as it were, from some other room.</p><figure class="kg-card kg-embed-card"><iframe src="https://player.vimeo.com/video/990413919?app_id=122963" width="400" height="240" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media" title="Moth Glitch (2024)"></iframe></figure><p>Thank you for your attention. </p><p><strong>Remarks as written for the <em>Artificial Intelligence and Contemporary Creation </em>conference at the Jeu de Paume, Paris, coinciding with "<em>The World According to AI</em>" exhibition, and organized by the International Laboratory for Research in the Arts (LIRA) and the Institute for Research on Cinema and Audiovisual (IRCAV) at the Sorbonne Nouvelle University, with the contribution of the Institut Universitaire de France (IUF).</strong></p><hr>OpenAI's desperate quest to become an AI monopoly - Blood in the Machinehttps://www.bloodinthemachine.com/p/openais-desperate-quest-to-become2025-05-24T00:13:48.000Z<p>Earlier this week, I had a <a href="https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with">great chat with AI journalist Karen Hao</a> about her new book, <a href="https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/">Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI</a>. (Which, by the way, looks like the book is killing it out there, congrats Karen, it’s getting lots of coverage and ticking up the sales charts; that’s what you call the Blood in the Machine bump.) 
As such, key points from her examination of the recent history of the AI industry have been on my mind, as OpenAI, Google, and Microsoft made grand new announcements about their imagined AI-monopolized futures. </p><p>OpenAI <a href="https://openai.com/sam-and-jony/?video=1086433620">announced</a> it was teaming up with Jony Ive, the world-famous designer behind the Mac refresh of the 90s, the iPod, and the iPhone. In <a href="https://www.nytimes.com/2025/05/21/technology/openai-jony-ive-deal.html">a roughly $6.5 billion deal</a>, OpenAI is acquiring io, the device startup Ive co-founded (his design studio, LoveFrom, will take on design work for OpenAI), and Altman and Ive revealed that they’ve been building an AI device of some kind. Ive says, naturally, that it’s shaping up to be the biggest thing he’s ever done. </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c808ce-b4fd-42bf-995c-3218af7d41d1_1000x750.jpeg" width="1000" height="750" alt=""><figcaption class="image-caption">From OpenAI’s announcement of the IO partnership, via OpenAI.</figcaption></figure></div><p>This after OpenAI has announced, in quick succession, that it’s pursuing an <a href="https://www.theverge.com/openai/648130/openai-social-network-x-competitor">AI social media network</a>, more <a href="https://www.wired.com/story/openai-launches-an-agentic-web-based-coding-tool/">work automation products,</a> and AI for <a href="https://www.wired.com/story/openai-adds-shopping-to-chatgpt/">online shopping</a>.</p><p>Meanwhile, Google and Microsoft both had their big product demo events—<a href="https://www.theverge.com/google/670250/google-io-news-announcements-gemini-ai-android-xr">I/O for Google</a>, Build for <a href="https://www.digitaltrends.com/computing/microsoft-build-2025-important-announcements/">Microsoft</a>—which featured a flood of AI services and products in different stages of development, from always-on AI agents that view the world from your smartphone camera lens, to more Google AI Overviews in more languages around the world, to autonomous coding agents. AI, everywhere, all of the time; each of these tech giants—and Meta, too, as well as Anthropic—is driving to be the one-stop shop for consumer AI. One way to think about this is that the tech giants are using AI to try to entrench their monopolies, and to try to prevent OpenAI (or Anthropic) from building their own, which they very much want to do.</p><p><strong>«A quick note:</strong> BLOOD IN THE MACHINE IS 100% READER SUPPORTED. I cannot do this work without my wonderful paid subscribers, who make all of this possible. I keep approximately 97% of my work unpaywalled, and it’s thanks to those supporters I can do that. You are the best. 
You are the reason I can pay rent, pay for summer camp for the kids, do a close read of a very long book <a href="https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with">to interview its author</a>, spend a week <a href="https://www.bloodinthemachine.com/p/de-democratizing-ai">reporting on the effort to ban states from making AI laws</a>, and so forth. Please consider upgrading to paid if you can, for the cost of a nice coffee a month. This human thanks you—hammers up.<strong>/»</strong></p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>In her book, Hao details how Altman’s two key mentors, the eminent VCs Paul Graham and Peter Thiel, sold their mentee on the ethos of growth-at-all-costs, and that startups should “aim for monopoly.” ”Competition is for losers” as <a href="https://www.wsj.com/articles/peter-thiel-competition-is-for-losers-1410535536">Thiel famously put it</a>. Hao also highlights <a href="https://blog.samaltman.com/successful-people">a 2013 blog post</a> from Altman, who muses “the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.” Elsewhere, Altman talks admiringly of Napoleon, and his efforts to create and expand empire. </p><p>The unifying theme is Altman’s admiration for sheer ambition and unabashed megalomania. Using the quasi-religious trappings of AGI, or “artificial general intelligence,” software that can replace most working humans, as his chief product goal, Altman is pursuing relentless expansion in the mode of Amazon, Meta, and the other tech giants before it, around the strength of a popular product, ChatGPT. ‘Empire-building’ is a good critical articulation of what OpenAI and the other companies are trying to do with AI—given that it surfaces the exploited labor and environmental tolls necessary to make it possible—and “creating something closer to a religion” is a good description of how they are trying to do it. But “aiming for monopoly” is how all of this is being processed in the boardrooms. </p><p>“Aim for monopoly” neatly sums up OpenAI’s business strategy, much like “scale” summed up its approach to building better LLMs. It’s why DeepSeek, which built a competitive AI model for much cheaper, didn’t ruffle Altman’s feathers at all, even as news that such a model was possible <a href="https://www.bloodinthemachine.com/p/the-great-undermining-of-the-american">briefly scrambled the markets</a>. Altman has known from the start that the technology or even the product is ultimately ancillary to the power of the story. It’s the power of narrative, and of political standing, that justifies more investment, more partnerships, more public buy-in—and a shot at uprooting consumers used to certain tech platforms while providing them with what are in all functionality, often relatively similar services. 
(Information retrieval, software automation, digital socializing.)</p><p>Perhaps this has been obvious for a while now, but it’s what a lot of folks—myself included—sometimes get wrong about “AI.” It’s not a race to have the best technology, though all the companies surely want to do that, but more a race to become “The AI Company.” To become the first thing consumers think of when they hear “AI.” </p><p>This is why I think Altman and OpenAI execs no longer lose much sleep over <a href="https://bdtechtalks.substack.com/p/does-openai-have-a-moat">the “moat” question</a>; i.e., whether a competitor has a technological advantage, potentially resulting in its chatbot product being marginally better than ChatGPT. It’s because “AI,” which has never really been as much a technology as a concept, a marketing term, a loose description of a future-tinged idea about automation, is now even <em>less </em>than all that. It’s a commodity, a product category, a line of business, and naturally, all of these companies want to monopolize it. </p><p>The leadership of the established tech giants knows that “AI” is a means of describing a new way consumers might want to search for information, and thus compete with their search products, or socialize digitally, and thus compete with their social media networks, or automate work tasks, and thus compete with their enterprise software offerings. This is the real reason they have gone so fully, and often embarrassingly, all-in on AI. (See <a href="https://www.bloomberg.com/news/features/2025-05-15/microsoft-ceo-satya-nadella-on-his-ai-efforts-and-openai-partnership">this profile of Microsoft CEO Satya Nadella</a>, in which he says he doesn’t listen to podcasts but downloads them onto chatbots so he can ask questions about them. Very cool.) I’m sure they are excited by the technology, too, but the real threat has always been that a company like OpenAI can come along and undermine their key offerings with a better story about the future, with a flashier if less reliable product. </p><p>OpenAI, meanwhile, is using its head start, self-curated mythology, and prime positioning in the broader AI story to aim for the heart of the sun and establish an AI monopoly at any cost. It wants to colonize the very public conception of AI, and knows that it has the best chance of doing so. </p><p>Where some people look at OpenAI’s latest announcements and see <a href="https://davekarpf.substack.com/p/openai-has-an-unsubtle-communications?utm_source=post-email-title&publication_id=387131&post_id=164242349&utm_campaign=email-post-title&isFreemail=true&r=k107&triedRedirect=true&utm_medium=email">a rapidly advancing technology</a>, I see a company desperate to plant its flag on any possible surface that might feasibly become an “AI” product or service. Part of this, as Dave Karpf has noted in the link above, is a comms strategy—OpenAI has rather adeptly released a drip of AI announcements to keep the press and potential investors on the hook—and part of this is an effort, of varying degrees of sincerity, to attempt conquest of a market segment, <em>any </em>market segment. 
The latest is some kind of AI-first device that will change everything.</p><div id="youtube2-W09bIpc_3ms" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;W09bIpc_3ms&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/W09bIpc_3ms?rel=0&autoplay=0&showinfo=0&enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>And look, maybe I’m just disposed to be critical of all this stuff now. But to me it seems less revolutionary than, I don’t know, desperate? For one thing, Altman is just *saying shit* now. He describes Ive’s company, with apparent earnestness, as “the densest collection of talent I think that I’ve ever heard of in one place and probably has ever existed in the world.” OK Sam. (You can see even Jony Ive struggle not to cringe as Sam rattles off those lines. In fact, I challenge you to watch this whole thing, it is borderline unbearable, and I hope that Francis Ford Coppola made enough of a location fee from letting them film in his wine bar Zoetrope to get a head start on Megalopolis II.)</p><p>For another, let’s take a quick spin through the kinds of products that Altman and OpenAI have announced over the last few months:</p><p>-A <a href="https://openai.com/index/introducing-chatgpt-search/">search engine</a> <br>-<a href="https://www.theverge.com/openai/648130/openai-social-network-x-competitor">A social network</a><br>-<a href="https://www.wired.com/story/openai-launches-an-agentic-web-based-coding-tool/">A web-based coding tool</a><br>-<a href="https://www.wired.com/story/openai-adds-shopping-to-chatgpt/">Online shopping</a><br>-A device designed by the maker of the iPhone</p><p>This is on top of previous efforts like an AI app store and workplace productivity software. In other words, it sure looks a lot like OpenAI is attempting to build a company that is all but indistinguishable from the other tech giants, just, you know, with AI. Just look at that picture above that OpenAI released with the Ive announcement—if there is a less subtle way to try to say “I am the next Steve Jobs,” then I am certainly not aware of it. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg" width="640" height="480" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/e35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":480,"width":640,"resizeWidth":null,"bytes":30839,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/164253163?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg","isProcessing":false,"align":null}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe35eec4f-1ac4-431d-9c2b-ab5fd1d1278b_640x480.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 
9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>And let’s think again back to the earliest days of OpenAI—again detailed in Hao’s book, as well as my<a href="https://ainowinstitute.org/publications/ai-generated-business"> own AI Now report on OpenAI’s origins</a>—when Altman used Elon Musk’s framing of the AI x-risk problem, and even his very language, to get him on board to start OpenAI in the first place. The DNA of tech titans past is all over Altman’s project, from the tactics of the same VCs that helped build Facebook, the clear design lifting/legacy-bolstering from Apple, the ideology of Elon Musk, and the grow-at-a-loss until you’re unstoppable approach of Amazon. </p><p>Note that I am <em>not</em> claiming that the LLMs and chatbots at the core of OpenAI’s project are <em>not</em> a novel technology with unique applications from consumer technologies of generations past. But that for all of Altman and OpenAI’s talk of bleeding edge technology that stands to upend the foundations of the world itself, the technology is taking root in some pretty familiar ways! It’s the battle tested, VC-led, Peter Thiel-endorsed, quest for monopoly that has governed the Silicon Valley mindset since at least the zero percent interest rate era. <a href="https://knightcolumbia.org/content/ai-as-normal-technology">AI is a normal technology</a>, as the computer scientists and writers Arvind Narayanan and Sayash Kapoor have explained. But in order to become the next normal tech giant, Sam Altman has to go to great lengths to act like it’s not. Yet ultimately, Altman’s OpenAI doesn’t want to birth an AI god so much as it wants to be Apple.</p><p>Of course, that ZIRP era is now over. OpenAI has attracted historic sums of investment. It’s throwing around billions to brand itself as the next Apple with no actual device in sight. It’s still not close to being profitable. A lot of these initiatives feel pretty flimsy, to me, and there’s a mad dash, rush-to-make-a-deal-with-the-Saudis, see what sticks kind of vibe to a lot of what OpenAI’s up to these days. So the question becomes: What happens if you aim for monopoly—on a scale that no tech company has attempted as fast before—and you miss? </p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2><strong>THE AI-GENERATED WRITING CRISIS IS IN FULL SWING</strong></h2><p>This week, the internet savaged a freelance writer who generated a summer reading list with ChatGPT, along with the <em>Chicago Tribune</em>, which published the feature, without edits, as part of an insert in its print edition. The outrage is justified: This is a profound failure of the editorial process, yet another depressing hallmark of These Times—AI slop making it to print—and an abdication of journalistic ethics. 
</p><p>And YET. I hope we save some ire for the tech companies that have actually created the conditions that pushed this writer to do such an ill-advised thing. <a href="https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-dont-exist/">404 Media interviewed the man</a>, who turned out to be a veteran freelancer who puts together work like this as a second job in an effort to scrape together a living. And you know what? I feel for this guy. </p><p>No writer should ever use AI, <a href="https://www.cjr.org/feature-2/how-were-using-ai-tech-gina-chua-nicholas-thompson-emilia-david-zach-seward-millie-tran.php">that’s my hardline stance</a>, for any part of the writing process. Editors should not be accepting AI-generated work. But let’s at least acknowledge the extent to which Google, Meta, and big tech have corroded the economy for writing and journalism—through, what a coincidence, their monopolization of the platforms that facilitate digital content distribution, in a process not unlike the one described above—and that AI companies are here to drive a stake through its heart.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F677e5321-3187-4453-b612-8cf1c7625d68_668x401.png" width="668" height="401" alt=""></figure></div><p>AI content generation has facilitated a bona fide race to the bottom, where writers are getting paid so little that, in a lot of cases, the incentive to use AI to auto-generate some text must be immense. Hell, I remember back to my up-and-coming blogger days some fifteen years ago, when I was paid $12 an article by a large media corporation to “add value” to existing news stories—I had to write one an hour to make anything close to minimum wage. And I thought it was my big break! </p><p>Hold fast to your ethics, yes, and don’t use AI to do journalism or writing at all if you respect your craft—but let’s recognize that AI slop is often a systemic issue created by monopolistic tech platforms, and direct our energies to resisting such a system, and not exclusively to hating on victims who become enablers. </p><h2><strong>MORE BLOODY STUFF</strong></h2><p>I have gotten so many great submissions to <a href="https://www.bloodinthemachine.com/p/did-ai-kill-your-job">the AI Killed My Job inbox</a>. I hope to start sharing them next week — if I haven’t gotten back to you yet, and you sent in a submission, expect to hear back soon! For now, I’ll share the art that my friend Koren Shadmi whipped up for the project. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg" width="1456" height="434" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":434,"width":1456,"resizeWidth":null,"bytes":317369,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/164253163?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg","isProcessing":false,"align":null}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg 424w, https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg 848w, https://substackcdn.com/image/fetch/w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg 1272w, https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b759908-c86b-4fc3-a66f-1e1077ecd314_2080x620.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 
12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>The Majority Report edited my segment on <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now">the AI jobs crisis</a> into a standalone bit, which is cool.</p><div id="youtube2-JNj2jm_Iy54" class="youtube-wrap" data-attrs="{"videoId":"JNj2jm_Iy54","startTime":null,"endTime":null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/JNj2jm_Iy54?rel=0&autoplay=0&showinfo=0&enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><a href="https://www.buzzsprout.com/2425400/episodes/17212217-teens-are-done-with-big-tech-s-internet-google-triples-down-on-ai-and-grok-is-pro-white-genocide">This week’s System Crash</a> covers some of this ground and more, so for a deeper dive into bad AI, why teens are rejecting the internet wholesale, and Grok’s white genocide freakout, give it a listen. </p><p>Okay! That’s about it for now, thanks for reading everyone, have a nice long holiday weekend rest, if you can. I’m going camping, and I think I’m gonna lock my phone locked in my desk, touch grass, read a book about anything other than AI. The hammers will be waiting on Tuesday. </p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p><em>*Thanks to Mike Pearl for editing help with this post.</em></p>