<h1>Critical takes on tech - BlogFlock</h1><h2>Artists are losing work, wages, and hope as bosses and clients embrace AI - Blood in the Machine</h2><p>https://www.bloodinthemachine.com/p/artists-are-losing-work-wages-and · 2025-09-16T20:54:05.000Z</p><p>After the launch of ChatGPT sparked the generative AI boom in Silicon Valley in late 2022, it was mere months before OpenAI turned to selling the software as an automation product for businesses. (It was first called <a href="https://openai.com/index/introducing-chatgpt-enterprise/">Enterprise</a>, then <a href="https://techcrunch.com/2024/01/10/openai-launches-chatgpt-subscription-aimed-at-small-teams/">Team</a>.) And it wasn’t long after that before it became clear that the jobs managers were likeliest to automate successfully weren’t the dull, dirty, and dangerous ones that futurists might have hoped for: it was, largely, creative work that companies set their sights on. After all, enterprise clients soon realized that the output of most AI systems was too unreliable and too frequently incorrect to be counted on for jobs that demand accuracy. But creative work was another story. </p><p>As a result, some of the workers who have been most impacted by clients and bosses embracing AI have been in creative fields like art, graphic design, and illustration. Since the LLMs trained and sold by Silicon Valley companies have ingested countless illustrations, photos, and works of art (without the artists’ permission), AI products offered by Midjourney, OpenAI, and Anthropic can recreate images and designs tailored to a client’s needs—at rates much cheaper than hiring a human artist. The work will necessarily not be original, and as of now AI-generated art can’t be copyrighted, but in many contexts, a corporate client will deem it passable—especially for its non-public-facing needs. </p><p>This is why you’ll hear artists talk about the “good enough” principle. Creative workers aren’t typically worried that AI systems are so good they’ll be rendered obsolete as artists, or that AI-generated work will be better than theirs, but that clients, managers, and even consumers will deem AI art “good enough” as the companies that produce it push down their wages and corrode their ability to earn a living. (There is a clear parallel <a href="https://www.bloodinthemachine.com/p/one-year-of-blood-in-the-machine">to the Luddites here</a>, who were skilled technicians and clothmakers who weren’t worried about technology surpassing them, but about the way factory owners used it to make cheaper, lower-quality goods that drove down prices.) </p><p>Sadly, this seems to be exactly what’s been happening, at least according to the available anecdata. I’ve received so many stories from artists about declining work offers, disappearing clients, and gigs drying up altogether that it’s clear a change is afoot—and that many artists, illustrators, and graphic designers have seen their livelihoods impacted for the worse. And it’s not just wages. Corporate AI products are inflicting an assault on visual arts workers’ sense of identity and self-worth, as well as their material stability.</p>
<p>Not just that, but as with translators, the <a href="https://www.bloodinthemachine.com/p/ai-killed-my-job-translators">subject of the last installment of AI Killed My Job</a>, there’s a widespread sense that AI companies are undermining a crucial pillar of what makes us human: our capacity to create and share art. Some of these stories, I will warn you, are very hard to read—to the extent that this is a content warning for descriptions of suicidal ideation—while others are absurd and darkly funny. All, I think, help us better understand how AI is impacting the arts and the visual arts industry. A sincere thanks to everyone who wrote in and shared their stories. </p><p>“I want AI to do my laundry and dishes so that I can do art and writing,” as the SF author Joanna Maciejewska memorably put it, “not for AI to do my art and writing so that I can do my laundry and dishes.” These stories show what happens when it’s the other way around. </p>
<p>A quick note before we proceed: Soliciting, curating, and editing these stories, as well as producing them, is a time-consuming endeavor. I can only do this work thanks to readers who chip in $6 a month, or $60 a year—the cost of a decent cup of coffee, or a coffee table <em>book</em>, respectively. If you find value in it, and you’re able, please consider upgrading to a paid subscription. I would love to expand the scope and reach of this work. Many thanks, and onward. </p><p><em>Edited by Mike Pearl.</em></p>
<h3><strong>Costume designs have been replaced with AI output that can’t be made by people who actually know how clothes work.</strong></h3><p>I work in the field of constructing costumes for live entertainment: theater, film/TV, ballet/opera, touring performers, etc.</p><p>Budget and scale are all over the map, from low-budget storefront theater, in which one person designs and secures costumes for a production, up to a big-budget Broadway spectacular, which can have a dozen people on the design team alone and literally hundreds of makers creating the costumes from designs developed by the design team.</p><p>I’m seeing this happen typically on the low-budget to midrange end—community theater/high school theater, independent film, etc.: Producers and directors eliminating the position of costume designer in favor of AI image generation.</p><p>It comes up often in professional forums in the field: someone will share the AI-generated costume “designs,” and they will be literally impossible to construct for an actual human with materials available in the actual world—gravity-defying materials on pornographically cartoonish bodies, etc.</p><p>-Rachel E. Pollock</p><p></p><h3><strong>Illustration work at ad agencies has disappeared</strong></h3><p>I remember reading about the new stage of generative AI engines sometime in late 2022 in the NY Times, and seeing DALL-E and Midjourney's outputs and knowing it would mean trouble. Until then AI was making laughable 'art.' Really bad stuff. But all of a sudden the engines had leveled up.</p><p>I have been working in the comics and publishing industry for over 20 years, but the majority of my income usually came from work with advertising agencies. Whenever they needed to present an idea to a client I would come in and help with illustrations, and sometimes storyboards. This was all internal and would never be published, but it was still great to get paid for doing what I love most—drawing. I felt appreciated for my skills and liked working with other people.</p><p>It was in 2023 that, seemingly overnight, all those jobs disappeared. On one of my very last jobs I was asked to make an illustrated version of an AI-generated image; after that, radio silence. I had my suspicions that AI was the culprit, but could not know for sure, since there was also a general downturn in the advertising industry at the time.</p><p>Finally I reached out to one of the art directors I work with and he confirmed that the creatives were using AI like crazy. There was no aspect of shame in presenting an AI illustration internally, no one would call you out on it, and it's sure as hell cheaper than using an illustrator. I had to deal with a sudden, very scary decrease in income. Meanwhile it felt like AI slop was mocking me from every corner of the internet, and every big company was promoting their new AI assistant. I was just disgusted with all these corporations jumping on the AI bandwagon, not thinking of what the outcomes could be. 
And additionally, there was the insult of knowing that the engines were trained on the work of working illustrators, including my own!</p><p>I used my free time to work on a new graphic novel, and eventually leaned into more comics work, which paid (a lot) less, but at least felt more creatively satisfying. The two years following the loss of work were difficult; it definitely felt like the rug was pulled out from under my feet, and I'm still adjusting to the new landscape. Although I feel better about where I am now, I work harder than ever before, for less money. But at least the work will be seen by readers.</p><p>I'm hoping that in the world of comics the public shame of replacing an artist with AI will hold off the use of the technology, but I'm sure that one day it will become a lot more accepted. I feel like we live in an age where technological changes are happening too rapidly, and are not in any way reined in by the government, and humans can lose their jobs at the drop of a hat, with no sense of security or help. We are just not built for these fast changes. I'm happy to see people sobering up to the downside of this technology, and hoping the hype will die down soon.</p><p>-Anonymous</p><p></p><h3><strong>‘Children's book illustrator isn't a job anymore.’ </strong></h3><p>I've been out of work for a while now. I made children's book illustrations, stock art, and took various art commissions.</p><p>Now I have several maxed-out credit cards and use a donation bin for food. I haven't had a steady contract in over a year. Two weeks ago, when a client who had switched to AI found out about this, he gave me $50 out of "a sense of guilt." Basically pity for the fact that illustrator, as a job, does not exist anymore.</p><div class="pullquote"><p>It was my birthday recently and I sincerely considered not living anymore.<br>The worst part of all is that the parents who once supported me fully in being an artist sent me an AI generated picture of a caricature of themselves holding a birthday cake with my name spelled incorrectly.</p></div><p>I feel cheated, like if I could go back in time and tell the younger me in high school that all the practice, all the love, and all the hope from your parents and friends for your future gets you is carpal tunnel and poverty, I could have gone into a better job field. I'd be an electrician or welder.</p><p>I have a resume with skills that are appealing to no one, as slop can be generated for free.</p><p>I sold my colored pencils and markers and illustration tablet on Facebook Marketplace for a steal once a previous client who I considered a friend boasted on LinkedIn that AI was the future of cost reduction above an image of a man in a suit who looked like him with six fingers holding a wad of cash.</p><p>I have applied to over one thousand jobs and I stopped keeping track. My disability didn’t affect making art, but makes me a poor candidate for much else.</p><p>It was my birthday recently and I sincerely considered not living anymore.</p><p>The worst part of all is that the parents who once supported me fully in being an artist sent me an AI generated picture of a caricature of themselves holding a birthday cake with my name spelled incorrectly. 
My friends all post themselves as cartoons online.</p><p>The person I married had a secret file on their computer labeled "AI pics" they thought I didn't notice.</p><p>I will wither away eating stale food from the garbage while everyone else is complacent with the slop generator doing what I used to put passion into and finely detail.</p><p>I don't think it's going to get better.</p><p>-Anonymous<a class="footnote-anchor" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p>
<figure><figcaption class="image-caption">A piece of “photo imaging” art by <a href="https://www.suoakesphotoimaging.com/">Susan Oakes</a>. According to Oakes, Photoshop classes geared toward creating art like this suddenly aren’t in demand.</figcaption></figure><h3><strong>I’m a graphic artist. Since AI and Adobe Firefly came along, my teaching and tutoring have dropped dead. </strong></h3><p>I have taught various graphics courses but overwhelmingly Photoshop, the 800 lb. gorilla of the graphics world. I am not a photographer and I do not teach people to take photos, but to manipulate them, also known as Photo Imaging.</p><p>These pieces are done by manually placing several images into a composite, and then enhancing them with various digital techniques such as layering, blending, masking, etc., to arrive at a final result. Most people who take my classes don’t necessarily want to do all that I do, but want to know how to correct or otherwise manipulate photos to create their own projects. </p><div class="pullquote"><p>I’m turning more to “natural media” (non-digital) art, specifically painting. I am developing a course to teach watercolor painting to adults and I’m quite excited by this prospect.</p></div><p>Since the advent of Artificial Intelligence and Photoshop’s version, Firefly, my teaching and private tutoring have pretty much dropped dead. There is very little incentive for people to learn these techniques when they can conjure up an image by text prompts. It takes virtually no skill to do this besides the ability to read and write. I have played around with A.I. for personal projects, with varying degrees of success. Some of it is amazing, and some of it is laughable. However, there is no escaping the reality that these models were trained on existing artwork already online. It’s essentially plagiarism on steroids. Also known as theft. Not to mention the obscene energy costs involved.</p><p>Had this happened to me 20 years ago I would have been devastated. But at this point in my life (I’m 71) it is not as important as it once was. I’ve had some success both in client work and in creating digital art pieces for which I’ve won accolades. 
I’ve found satisfaction in teaching but now I’m turning more to “natural media” (non-digital) art, specifically painting. I am developing a course to teach watercolor painting to adults and I’m quite excited by this prospect. I have been married over 50 years and we have never relied on my income to survive.</p><p>-Susan Oakes</p><p></p><h3><strong>My gig ended with my boss responding to my AI concerns with ‘There's always work out there.’ I haven’t worked since.</strong></h3><p>I worked in the video game industry, as a 3D artist.</p><p>In early 2023, when AI image generation was hitting the mainstream, I was working as a temporary contractor at a large games and technology company.</p><div class="pullquote"><p>I expressed concerns about being able to find further work. He handwaved me, saying "There's always work out there.”</p><p>I have not been able to find any work since then.</p></div><p>Our boss was very enthusiastic about AI image generation, and he showed us how he was using AI to generate some of the textures for the game. I realized that if AI image generation didn't exist, the company would have needed to hire an extra artist to do that work. I could have recommended a dozen colleagues who were looking for work at the time. It felt like AI was directly taking money out of artists' pockets, and allowing the companies to keep it all.</p><p>When my contract with that company ended, I did an exit interview with my boss. I expressed concerns about being able to find further work. He handwaved me, saying "There's always work out there." </p><p>I have not been able to find any work since then.</p><p>There were several factors behind the many layoffs in games and technology in 2023-2024, but I know that AI has played a role.</p><p>I miss working as a 3D artist.</p><p>-Anonymous</p><p></p><h3><strong>Those animated reenactments and infographics you see on TV history documentaries are made by people like me. Or at least they </strong><em><strong>were</strong></em><strong>.</strong></h3><p>I am a freelance 3D/2D Generalist. Over the past decade-plus, I've had a recurring gig of being hired as a contractor to help create supplemental graphics and B-roll for various documentary-style programs. Everything from infographics about military tanks to 3D animations of prehistoric creatures to recreations of scenes involving historical figures.</p><p>If you've ever watched any History Channel show, you know the format: footage of the host and experts speaking, then sometimes video clips or photographs, and then typically animated content that illustrates the points the speaker is making. That final category was, until recently, made by people like me. That market has completely dried up.</p><div class="pullquote"><p>My job loss is merely a side-effect of AI killing off studios higher up the chain that each represent dozens more people being put out of work.</p></div><p>A couple of years ago, as soon as demos of AI-generated video began to appear, there were almost immediate rumblings that the specific business of creating documentary-style graphics would be disrupted. The logic being that, while the public might reject a feature-length AI-slop theatrical film, the at-home audience for shows about military history or ghosts or aliens might be less discerning. That theory is now being tested. History Channel is currently airing a season of "Life After People" that heavily features AI-generated visuals, and I'm sure there are more shows in the pipeline being made the same way. 
We'll see how audiences respond.</p><p>As much as I would like to say viewers will reject the AI style and demand a return to human-made art, I'm not convinced it will happen. Even if it did, it might soon be too late to turn back. I know that there are studios with expert producers, writers, and showrunners with decades of experience in this exact genre who are closing their doors. That institutional knowledge will be gone.</p><p>That's probably the bigger point: this trend is not only affecting artists like me, but also the types of companies I contract for. Obviously I lament my own loss of those stable gigs, but my job loss is merely a side-effect of AI killing off studios higher up the chain that each represent dozens more people being put out of work.</p><p>-Anonymous</p><p></p><h3>‘There's a part of me that will never forgive the tech industry for what they've taken from me and what they've chosen to do with it.’ </h3><p>I work as a freelance illustrator (focusing on comics and graphic novels but also doing book covers or whatever else might come my way) and as a "day job" I do pre-press graphic design work for a screen printing and embroidery company in Seattle. Because of our location, we handle large orders (sometimes 10k shirts at a time) for corporate clients—including some of the biggest companies in the world (Microsoft, Amazon, MLB, NHL, etc.) and my job is to create client proofs where I mock up the art on the garment and call out PMS colors as applicable. I also do the color separations to prepare the art file for screen printing. </p><div class="pullquote"><p>[H]e instructed me to start plugging in the names of living artists to generate entire artworks in their style and the first time I did it I realized how horrifyingly wrong this actually was.</p></div><p>When AI first came on the scene, I was approached by a potential client that was self-funding a mobile game and wanted to commission me to create in-game art. He asked what my standard rate was and then offered to double it if I allowed him to pay in Ethereum (which I knew nothing about at that point). I immediately had some concerns, but I'm a struggling artist so I took the gig anyway and crossed my fingers. He then introduced me to generative AI and encouraged me to use it to create game content quickly. At first I was interested in the possibility of using it to reduce my workload by maybe generating simple elements I get tired of painting—like grasses or leaves—but he instructed me to start plugging in the names of living artists to generate entire artworks in their style and the first time I did it I realized how horrifyingly wrong this actually was. After that I resisted and tried to use my own art. He grew frustrated with me pretty quickly and I left the company after less than 2 weeks (I was never paid; he owes/owed me about $1300).</p><p>Since then, I have been very outspoken against generative AI and haven't touched it again. I was the moderator for a very large group of children's book illustrators (250k members) and I helped institute and enforce a strict AI ban within the group. While this was mostly a positive thing, there were quite a few occasions where legitimate artists were targeted for harassment over accusations of AI use. Some of them were even driven out of the group, in spite of our interventions and assurances that the person was not using AI. </p><p>In my own freelancing work, I have now been accused of using AI as well. 
I like to do fan art from Anne McCaffrey's Dragonriders of Pern [series], and sometimes when I'm looking for work I will post my art and past commissions in fan groups to see if anyone wants to hire me to draw their original characters based on the Pern books. Almost invariably now someone will ask if my art is AI generated. It used to bother me more than it does now; I'm growing a little numb to it.</p><p>My coworker at my screen printing job (in spite of knowing my negative feelings on the matter, because I had cried after I found several dozen pieces of my art in the LAION dataset) chose to plug my art into an AI generator and asked for it to imitate my style—which it did poorly, might I add. It felt extremely violating. </p><p>Lastly, in my role as a graphic designer, we often now have to deal with clients sending art files in for screen printing that were generated with AI. It's a pain in the ass because these files are often low-resolution and the weird smudgy edges in most AI images don't make for easy color separations. When a human graphic designer sits down to create a design, they typically leave layers in place that can be individually manipulated, and that makes my job much easier. AI flattens everything, so I have to manually separate out design elements if I want to independently adjust anything. The text is still frequently garbled or unreadable. The fonts don't actually exist so they can't easily be matched. These clients are also almost invariably cheap, and get upset when they're told that it's going to be a $75-per-hour art charge to fix the image so it's suitable for screening. </p><p>Also, and here I don't have any data, just my personal anecdotal experience, but it feels like some of these companies have laid off so much of their in-house graphic design staff that they are increasingly reliant on us as a print service to fix up stuff they'd formerly done for themselves. I get simple graphic design requests every day from people who should have had the resources to handle this themselves, but now they're expecting me to pick up the slack for the employees they've let go, for the sake of our working relationship and keeping them on as clients. It's become such a drag on our small business that my boss is considering extra fees. (Which, considering the slim margins in the garment industry, is really saying something!) I am convinced Microsoft does not have any in-house graphic designers left at this point. Okay, I joke, but man, it's bleak.</p><p>I have no way of knowing how many gigs I've lost to AI, since it's hard to prove a negative. I'm not significantly less busy than I was before, and my income hasn't really changed for better or worse. There's more stress and fear, and greater workloads cleaning up badly done AI-generated images on behalf of people looking for a quick fix, instead of getting to do my own creative stuff. And it felt deeply and profoundly cruel to have my life's work trained on without my consent, and then put to use creating images like deepfakes or child sexual abuse materials. That one was really hard for me as a mom. I'd rather cut my own heart out than contribute to something like that. </p><p>There's a part of me that will never forgive the tech industry for what they've taken from me and what they've chosen to do with it. In the early days as the dawning horror set in, I cried about this almost every day. I wondered if I should quit making art. I contemplated suicide. 
I did nothing to these people, but every day I have to see them gleefully cheer online about the anticipated death of my chosen profession. I had no idea we artists were so hated—I still don't know why. What did my silly little cat drawings do to earn so much contempt? That part is probably one of the hardest consequences of AI to come to terms with. It didn't just try to take my job (or succeed in making my job worse); it exposed a whole lot of people who hate me and everything I am for reasons I can't fathom. They want to exploit me and see me eradicated at the same time. </p><p>-Melissa</p><h3><strong>The gig work exchange site I use is full of AI-generated artwork I’m meant to fix — along with AI-generated job listings that don’t exist.</strong></h3>
<figure><figcaption class="image-caption">Painting by <a href="https://www.roxanelapa.com/">Roxane Lapa</a></figcaption></figure><p>I’m a South African illustrator and designer with 20 years of experience, and in my industry I saw things going pear-shaped even before gen AI hit the scene. </p><p>One of the main places I get jobs from is Upwork (one of those gig-type work platforms), and I’ve noticed a couple of things. First, a decrease in job offerings for the illustration I typically do (like book covers).</p><p>I’ve also noticed a lot more job offers to “fix” an AI-generated cover. These authors offer less money because of a “the work is pretty much done” attitude.</p><p>Since Upwork added an AI function to help potential employers write their briefs, there’s been a surge in what I’m pretty sure are fake jobs. The job listings all sound very samey, obviously because of the format the AI uses, and the employers have no history on the platform of ever having hired anyone and don’t have their phone or bank linked. So I think what might be happening is that some evil person/persons are creating fake accounts and posting fake jobs so that their competitors waste credits applying for these jobs.</p><p>-Roxane Lapa</p><p></p><h3><strong>I used to make erotic furry fan art for a fee. Now people just use AI. </strong></h3><p>I got my start on DeviantArt, moved on to FurAffinity and various other websites. I used to take commissions in the furry fandom drawing <a href="https://jisho.org/word/51868d8cd5dda7b2c600559e">futa</a> furries with big fat tits and dicks. 
In the past year or two my commissions have all but dried up; in the time it can take me to do the lineart for an anthropomorphic quokka's foreskin, someone can just go onto one of a dozen websites and knock something of tolerable quality out in no time at all.</p><p>AI has ruined a once sacred artform.</p><p>-Anonymous</p><p></p><h3><strong>My AI-loving boss makes my team of artists use AI, even though I’ve successfully demonstrated that it doesn’t help</strong></h3><p>I am the creative team manager for an e-commerce-based company. I manage the projects of 2 videographers, 1 CG artist and 3 graphic designers (including myself).<br>As AI has been getting more and more advanced, our boss (one of the owners) keeps pushing us to use AI to make our images stand out amongst competitors. <br><br>We have a limited budget, so filming or photographing our products in real environments is difficult. And photoshopping them into stock imagery also takes time. Apparently a 1-hour turnaround time per image is not quick enough. Our boss has been going to conferences where he hears and sees nothing but praise for AI-created images: how quick it is and how "good" the images look.<br><br>So of course he's been pushing us to use this technology. I did tell him that it's going to be a learning curve and to be patient. From Midjourney to the latest update of ChatGPT to Adobe's Firefly, we've been cranking out these partial AI images.</p><p>The funny part is, A LOT of it still has to be photoshopped together. AI is still not smart enough (yet) to produce accurate images. The products we sell are very particular and even if you feed the AI images of said product, it never gets it 100% right.<br><br>Our boss didn't believe us so he tried it himself and failed miserably. Despite that, he still reminds us that our jobs will be obsolete and that we have to adapt.<br><br>Ever since we started using AI to improve our images, the turnaround time for listing images has remained the same. Though I feel like our boss is waiting for the day he can fire my team and replace us with AI.</p><p>-Anonymous</p><p></p><h3><strong>In 2D animation backgrounds, AI is hitting freelancers hard. But even for someone steadily employed like me it’s causing workplace headaches.</strong></h3><p>As an artist, I thought I was going crazy when it seemed everyone was okay with (even enthusiastic about) our work being scraped left and right to build image-generation models. I'm a mom and have a mortgage to pay, so the existential threat to my livelihood caused a lot of sleepless nights, to say the least. <br><br>I have been working in 2D animation for the last 10 years. I'm a background artist, which is unfortunately one of the departments most likely to be hit by gen AI replacement in the animation production pipeline. Of course, there's no reality where gen AI could actually do my job properly, as it requires a ton of attention to detail. Things need to be drawn at the correct scale across hundreds of scenes. In many cases scenes directly hook up to each other, so details need to stay consistent—not to mention be layered correctly. But these are things that an exec typically glosses over in the name of productivity gains. 
<a href="https://www.pcmag.com/news/netflix-taps-ai-to-generate-anime-backgrounds-rather-than-hire-humans">Plus, there's already a precedent in which AI was used to produce backgrounds for a Netflix anime</a>.</p><p>Thankfully, I'm very lucky to work at an artist-run studio that currently appears to avoid the use of AI, so I continue to be employed. My peers who were freelance illustrators or concept artists are not so lucky. I'd say about half of the people I've worked alongside this last decade have left the field (not all because of AI, granted, but the state of the North American animation/games industry is a whole thing right now and AI is not helping). <br><br>The production I am on currently leverages a lot of stock photos from Adobe Stock. We have a rule in place not to use AI, but some images slip through the cracks. These have to be removed from the finished product because of, I assume, the inability to copyright AI-generated images. An incident happened recently where an AI image almost made it to the very end of the pipeline undetected and wound up disrupting several departments who are on tight submission deadlines. We aren't typically paid overtime unless approved by the studio beforehand, so it's likely that unpaid labor (or ghost hours, where you don't tell anyone you worked overtime) went into fixing this mess AI created.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kwkm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kwkm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kwkm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kwkm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kwkm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kwkm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg" width="1432" height="627" 
<h3><strong>I watched — and sounded the alarm — as AI fever took hold inside Adobe. Then I was let go.</strong></h3><p>I was running research on [Adobe’s] stock marketplace, trying to understand how customers were adopting the new Gen A.I. tools like Midjourney, Stable Diffusion and DALL-E. Internally Adobe was launching their own text-to-image A.I. generator called Firefly but it hadn’t been announced. I was on the betas for Firefly and Generative Fill (GenFill) for Photoshop and ran workshops with designers on the Firefly team. I tested the new tooling internally and gave feedback on Adobe Slack channels and their ethics committee.</p>
<p>A.I.-generated content started to flood the Adobe Stock website as stock contributors quickly switched from adding and uploading photos to prompting and creating assets with Midjourney, Stable Diffusion, and Firefly and then selling them back on Adobe Stock. Users wanted a better search experience, but it was never explicitly clear if they wanted more A.I. slop, although Reddit forums indicated otherwise. </p><p>During the GenFill beta, I raised concerns about model bias after prompting the model to edit an image of then-president Joe Biden across racial categories and having the model return a Black man with cornrows—without taking into account relevant and contextual surrounding information in the image. The ethics committee pointed me to a boilerplate Word doc with their guiding principles and we had a short Microsoft Teams call, but there wasn’t any real concern from their end. After raising additional red flags inside an Adobe Slack channel about Photoshop’s GenFill beta possibly being used to create misinformation at scale, the main response I got was a blasé “Photoshop, making misinformation since 1990…” Long story long, the people internally working on these products really don’t care. In another company-wide all-hands meeting about text-to-vector capabilities, fellow workers shared thoughts and concerns in the Teams chat about the impact of AI tooling on the livelihoods of artists, illustrators, and other designers, and no one cared. In another meeting, when asked about artists’ rights while gathering data around AI-style mimicry and trust, a manager quipped, “In the research from Adobe MAX (Adobe’s annual conference) someone said they were willing to sell their ‘artistic style’ for around the price of a car.”</p><p>The Firefly model still struggled to render hands and certain objects, and an Adobe company-wide email sent to all employees encouraged us to sign up for an upcoming photoshoot on a green screen, holding things like trumpets, accordions, and rubber chickens, and making awkward expressions like being surprised with “mouth open” or squinting while putting your finger in your ear, in exchange for a free lunch. </p><p>Sometime in 2023 Adobe paid for photographers to document crowds of people during a concert in Seattle and had attendees sign waivers releasing their likeness, since Firefly had trouble rendering and distinguishing people in crowds. Shortly thereafter I was told my staff role was being eliminated. They didn’t let me switch teams. They gave me six weeks to find a new job inside the company and six weeks of severance pay. During the six weeks of “offboarding,” as they called it, I applied to dozens of internal jobs at Frame.io and other teams like Acrobat within the company and it never went anywhere. </p><p>-Anonymous</p><p></p><h3><strong>I’m a recent design graduate. AI might not have killed my job, but it’s not what I signed up for, and it’s hard to find work.</strong></h3><p>I just graduated in June from a two-year intensive vocational program in graphic design. It's probably still too early in my job search for me to say that AI "killed my job," but my classmates and I, as well as students from the class just ahead of us, are certainly struggling to find work.</p><p>Why I wanted to reach out, though, is to share my experience as a student studying design in the midst of the peak years of this AI hype. 
Basically our entire second-year curriculum in one of our five classes, which was previously focused on UX, UI, web design, etc., transitioned to being largely generative AI-focused. I don't think I'm overstating matters to say that no one in my class was happy about this; none of us decided to go (back) to school for design to learn Midjourney or Runway.</p><div class="pullquote"><p>Is it always going to be like this? I love learning, but am I always going to feel like I need to acquire skills in at least five new expensive SaaS platforms to survive?</p></div><p>[One instructor] has lived through and had his career significantly impacted by past shifts in the industry (he was a full-time web designer when platforms like Squarespace came along), so my charitable read is that he wants to prepare us for a lifetime of learning new tools to stay employable. I think the faculty in our program are also hearing from alumni and their technical advisory board that AI tools are becoming more important for local companies (we live in a pretty tech-centric city). So while he's sympathetic, I guess, he's still choosing to go all-in on AI, and to push his students to do the same.</p><p>In our other classes, AI use varied. Some of our instructors allowed it; a couple still forbade it completely.</p><p>I came out of school feeling like... I guess I'm grateful to know what's out there, for the sake of my own employability in this really awful job market. I really feel for designers whose school days are a little further behind them. It's not just AI that makes me say this—in fact, even if things like image and video generators find a more permanent place in graphic arts careers, they're changing fast enough that whatever we learned in school is likely to be outdated pretty quickly. If all the angry posts I see on LinkedIn from more senior designers are any indication, there's been a trend in hiring for a while of companies looking for a designer who also does video, animation, UX/UI, and many other things that aren't really graphic design. Our program taught us a lot of those skills, so maybe, if the current economic circumstances improve, our class might be okay. But it makes me worry a lot for our future. Is it always going to be like this? I love learning, but am I always going to feel like I need to acquire skills in at least five new expensive SaaS platforms to survive?</p><p>Even our AI-booster instructor told us over and over again that computers will never replace the need for creative design thinking and empathy. That, he said, is what we should lean into to distinguish ourselves and ensure our employability. But there are only so many positions out there for art directors, and not everyone who studies design wants to do that. Production design gets looked down on as "menial" by some, I think, but it used to be the pipeline into more senior design positions--and if that goes away, how do new designers even get into the field? 
And what about people who have worked in production their whole lives?</p><p>-Anonymous</p>
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">Another piece of photo imaging art by Susan Oakes.</figcaption></figure></div><h3><strong>I struggle to fix all the AI’s problems while my AI-loving clients stand on the sidelines wondering what the issue is</strong></h3><p>I am a freelancer of a few trades, so it can be hard to measure lost work, because I can also wonder if I'm slow because times are slow, or a typical cycle, or AI.</p><p>I can tell you this: ALL my "lighter" graphic design work—making social media or print ad graphics, designing logos—has totally dried up. I was actually more worried about this when Canva came out, but even then they wanted my eye and my touch on things, so having the tools to do it themselves didn't really deter people from hiring me. I did this kind of work for some local small businesses, organizations, event venues. This was an abrupt change within the past couple years.</p><div class="pullquote"><p>They are usually thinking they will pay for a couple hours of my time, when what they are asking for could require maybe 100 hours. The "mistakes" […] are in the bones of the art.</p></div><p>My illustration work is mostly picture books, and while my work has remained steady (I do 1-3 a year), the number of inquiries I've gotten from new authors has dropped to nearly zero, when I used to field a few a month and usually book myself out for the next year. Also, through Upwork and other various avenues I find work, I've had quite a few people (presumably authors) reach out to me to "fix" their AI generated art. It does depend on the task at hand but it's a 90% certainty that fixing the art will take nearly as long as just doing it myself. Of course they aren't coming to me with AI generated work because they intended to hire a full-blown illustrator. They are usually thinking they will pay for a couple hours of my time, when what they are asking for could require maybe 100 hours. </p><p>The "mistakes" AI makes on art for something like a picture book, which requires consistency of a lot of different elements across at minimum 16 or so pages, are so deep that they are in the bones of the art. It's not airbrushing out a sixth finger; it's making the faux colored pencil look the same across pages, or all the items in a cluttered room be represented consistently from different angles, or make the different characters look like they came from the same universe. It's bad at that stuff and it's not surface level. A lot of time potential clients don't know why the art isn't working and it's because it's these all-encompassing characteristics.</p><p>-Melissa E. 
Vandiver</p><div><hr></div><div class="footnote"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number">1</a><div class="footnote-content"><p>The title for this story comes from the heading of the email this author submitted.</p></div></div>The killing of Charlie Kirk and the end of the "global town square" - Blood in the Machinehttps://www.bloodinthemachine.com/p/the-killing-of-charlie-kirk-and-the2025-09-12T18:54:56.000Z<p>Twitter never was a “<a href="https://www.washingtonpost.com/technology/2023/07/07/twitter-dead-musk-tiktok-public-square/">global</a> <a href="https://www.theatlantic.com/international/archive/2015/08/twitter-global-social-media/402415/">town square</a>,” as much as pundits and executives liked the metaphor. Elon Musk liked it enough that, after buying the site and rebranding it X, he had the official account <a href="https://x.com/X/status/1730309839929110846?lang=en">reiterate the idea</a>. But “town square”? Not really. Twitter was a large website that, by virtue of its early-mover advantage, its success in the network effect sweepstakes, and its savvy public relations campaigning, captured enough of the world’s commentariat to resemble a flattened version of one, for a time. But it was always an ad-supported platform where opaque algorithms determined who saw what, in what chronology, and who benefited most from the resulting engagement.</p><p>At Twitter’s peak, it really did have legacy media outlets and citizen journalists, conservatives and liberals, celebrities and heads of state, shitposters and academics, and so on, sharing the same platform and feedspace. This could give us, the users, the impression that we were “participating” in a world event, or at least the processing of one, by publishing our character-limited takes on the matter as it was unfolding. It was a prospect alluring enough to draw dedicated users, like me, and perhaps you, to the well, time and again. It’d be hard to tally how many News Events I engaged with as a rubber-necker, a journalist, or a poster, by refreshing Twitter futilely and endlessly. I’m very obviously not alone here. 
A <a href="https://www.nytimes.com/2023/04/18/magazine/twitter-dying.html">lot of good words</a> have been written and <a href="https://www.ucl.ac.uk/social-historical-sciences/anthropology/research/why-we-post">extensive studies</a> have been conducted in service of trying to discern what that practice even <em>was.</em></p><p>But I hadn’t even fully realized how much I’d mostly stopped processing news in this way—shoulders hunched and tense, squinting into the feed, agitated, feeling a compulsion to ‘weigh in’ despite being acutely aware that we are all being prompted by a website’s UX design to feel that precise compulsion, and wanting the shares and validation anyway—until the gruesome assassination of the right-wing provocateur Charlie Kirk brought me, with millions of others, right back into its maw.</p><div class="subscription-widget-wrap-editor" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe","language":"en"}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Blood in the Machine is a 100% reader-supported publication. Most posts are entirely free to read, so subscribe away, but I can only write them thanks to my beloved paid supporters. If you can, please consider becoming one.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email…" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>It’s not that there hadn’t been other notable and wide-ranging atrocities recently, obviously, that I and millions of others have watched slack-jawed in anger through an app. But a truly and awfully ideal Social Media Event demands not just that you feel outrage, or even respond to others’ outrage, but that you succumb to that compulsion to join in, to simulate a kind of participation in history. It demands that you feel a blind urge to “do something” and transmute it onto the screen, to make your statement. (I saw a lot of users were commenting on how it was as if people were issuing their own press releases, which was apt, and it’s always been a bit like that.) To correct the big accounts that are obviously getting it wrong and “call out” the ones espousing hate, and so on. </p><p>The killing of a rightwing political activist known for trolling liberals, as well as for his enormous presence on the very sites his death was experienced through, more than fits the bill. It’s well-known at this point that platforms like X reward shocking graphic video, inflammatory speech, and political attacks with virality; the shooting of Kirk had all of the above packed into its initial singularity. It became the first time in months, maybe even years, that I lost the day to scanning and posting on social media. It might well be one of the last. </p><p>To me, anyway, it was a clarifying event with regard to the current State of Social Media, three years after Musk’s takeover of Twitter, its remaking as X, and its subsequent balkanization into various platform fiefs. It dispelled some curiously persistent delusions in the process, and zealously introduced the new elements that seem to me to be poised to limit the further effective simulation of a user’s participation in history. 
I came away with a few lingering thoughts and conclusions, which I’ll drill into below.</p><ol><li><p>The conceptualization of Twitter or X as a “global town square” can be dismantled for good</p></li><li><p>“AI enhancement” is the new, anti-social version of the crowd-sourced social media manhunt</p></li><li><p>BlueSky, as a left-liberal-coded X alternative, has created a useful new kind of ‘other’ for online political projects</p></li></ol><p>Let’s get into it. </p><h2>The “democratic” “town square” that never was, is dead</h2><p>It was always, in hindsight, an enormously dubious proposition that Silicon Valley social media platforms would help foster democracy in any serious or sustained manner, despite the fleeting example of the Arab Spring, which tech companies wore like a badge for years afterwards. Autocrats soon learned that controlling social media was easy enough; you can <a href="https://www.theguardian.com/world/2023/apr/05/twitter-accused-of-censorship-in-india-as-it-blocks-modi-critics-elon-musk">always play the refs</a>, and barring that, <a href="https://time.com/32864/turkey-bans-twitter/">pull the plug</a>. And if you own the thing, well, you can do a lot more than that.</p><p>Elon Musk’s X has become a case study in how a social media network with tens of millions of users can be remade in the image of the man behind the control board, by removing content moderation, restoring users banned for hate speech, introducing pay-to-play incentives, and routinely signaling, by personal example, what kind of content the platform is for. </p><p>Yesterday, at 12:27, Musk tweeted “The Left is the party of murder,” before there was any evidence at all about the killer’s identity, or regarding his motives or ideological leanings, or that “The Left” is in fact a political party. Nonetheless, it helped pave the way for a stream of vitriol and calls to violence from some of X’s biggest accounts. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FCfR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FCfR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg 424w, https://substackcdn.com/image/fetch/$s_!FCfR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg 848w, https://substackcdn.com/image/fetch/$s_!FCfR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!FCfR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FCfR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg" width="1080" height="1350" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":1350,"width":1080,"resizeWidth":null,"bytes":253414,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/173312387?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FCfR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg 424w, https://substackcdn.com/image/fetch/$s_!FCfR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg 848w, https://substackcdn.com/image/fetch/$s_!FCfR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!FCfR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33809929-91ff-48a7-8541-5a3fb3e9de19_1080x1350.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide 
lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">Screenshot gallery from a viral Medias Touch tweet.</figcaption></figure></div><p>In an essay that now seems quaint despite being published just weeks ago, the editor of a new liberal magazine, The Argument, inaugurated the publication with <a href="https://www.theargumentmag.com/p/we-have-to-stay-at-the-nazi-bar">a call for its new readers to stay on X</a>, for the sake of debate: “Twitter is — without question — the most influential public square we have… Those who leave Twitter are sacrificing their ability to advocate for the change they seek.” Scanning the posts above, from some of the largest accounts on the platform, as well as from its owner, I hope it’s obvious enough that the only reason these folks are taking to any kind of public square is to assemble a firing squad. </p><p>On a more granular level, <a href="https://www.rollingstone.com/culture/culture-news/elon-musk-engineers-twitter-engagement-1234680113/">multiple</a> <a href="https://www.theguardian.com/technology/2023/feb/16/twitter-data-appears-to-support-claims-new-algorithm-inflated-reach-of-elon-musks-tweets-australian-researcher-says">reports</a> have <a href="https://www.platformer.news/yes-elon-musk-created-a-special-system/">confirmed</a> that Musk instructed X’s engineers to boost how far his own posts travel, and who knows how many friendly accounts he’s extended the favor to, or how else he’s tinkered with the algorithm, in ways less obvious than <a href="https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content">giving rise to MechaHitler</a>. Meanwhile, users, typically his fans, who pay for blue checkmarks, get their posts elevated by the algorithm in replies, effectively stamping out any hope for organic debate. And the drumbeat of anti-left, anti-Democrat, anti-woke, and anti-migrant posts from the platform’s top account, Musk’s, have indelibly created a culture that should disabuse anyone of the notion that this is entire configuration is something to be <em>argued</em> against. X has become a vehicle for power, in other words, not persuasion. And the anti-MechaHitler side doesn’t have any.</p><p>Users, myself included, who nonetheless felt compelled by watching the rage fomenting on the platform after Kirk’s murder, to post anything attempting to counter the gathering “this is a war and the Democrats/left must be extinguished” narratives, were predictably ignored, mocked, and steamrolled, or worse. 
Right-wing activists are now taking the social posts of people they believe to be “celebrating” Kirk’s death—many are just posting the activists’ own past quotes—entering them into a database, and <a href="https://www.wired.com/story/right-wing-activists-are-targeting-people-for-allegedly-celebrating-charlie-kirks-death/">posting their personal details online</a>.</p><p>There was no meaningful debate, besides perhaps between fellow-traveler liberals, and certainly no detectable impulse towards democracy. Whether or not it’s <em>ethical</em> to stay on X is another question, but the aftermath of the Kirk killing shows us why we’d do well to dismantle our model of X as a place where debates are had, needles are moved, and political progress is possible. </p><h2>AI slop is further degrading information quality and giving rise to antisocial crowd-sourced manhunts</h2><p>In the earlier days of social media, after a tragedy, users would take to the platforms to scour the footage and photos of the event for clues. Most famously, in the case of the Boston Marathon Bomber, a subreddit that dubbed itself ‘Find Boston Bombers’ crowd-sourced the investigation to amateur sleuths at home. It wound up declaring a few innocent people suspects, spurring the media to show up on at least one poor bystander’s front lawn. The “suspects” were harassed online and otherwise made miserable. Ultimately, Reddit <a href="https://www.bbc.com/news/technology-22263020">was forced to apologize.</a></p><p>The intent may have been noble, or not, but either way, it was worse than useless. It impeded the real investigation and ruined some real people’s lives for a while. It was a function of social media that we had to learn to guard against, to keep amateur information of dubious provenance from entering the chat. Now, of course, there’s a brand new vehicle for information degradation proliferating on the platforms.</p><p>As <a href="https://futurism.com/elon-musk-grok-charlie-kirk-misinformation">Futurism reported</a>, AI chatbot products, especially Grok, were sharing false information about Kirk’s killing:</p><blockquote><p>When one user asked, for instance, if Kirk <a href="https://x.com/CoolJdjdjd28961/status/1965859329183224268">could have survived</a> the gunshot wound, <a href="https://x.com/grok/status/1965859625431134508">Grok responded</a> in a cheery tone that the Turning Point USA founder was fine.</p><p>"Charlie Kirk takes the roast in stride with a laugh — he's faced tougher crowds," the bot wrote. "Yes, he survives this one easily."</p><p>When <a href="https://x.com/HotTalkJayhawk/status/1965863030409015724">another user countered</a> that Kirk had been "shot through the neck" and asked Grok "wtf" it was talking about, the chatbot doubled down.</p><p>"It's a meme video with edited effects to look like a dramatic 'shot' — not a real event," <a href="https://x.com/grok/status/1965863232478077127">Grok retorted</a>. "Charlie Kirk is fine; he handles roasts like a pro."</p></blockquote><p>Then, on Thursday, when the FBI released images of the suspected shooter, social media users took to the platform not to pool clues, but to ‘enhance’ the image using AI. 
I’ve assembled some of the examples below; there were also video renderings that depicted the shooter walking up the stairs.</p><div class="image-gallery-embed"></div><p>This is, of course, once again, worse than useless. Some of these users may, like most of the Boston Marathon bomber sleuths, just be trying to help, but they’re assuming that ChatGPT is going to work like the “AI” they see on TV procedurals, and “enhance” or “clean up” a photo, when it is of course assembling a new image from scratch, based on the pixels put into its system. (Gizmodo’s Matt Novak has a good and <a href="https://gizmodo.com/ai-zoom-enhance-does-not-work-2000651736">more thorough explainer</a> of why this practice is so absurd.)</p>
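<p>To make the point concrete, here is a minimal sketch of why “zoom and enhance” cannot work, assuming Python with NumPy and Pillow installed; the file name “suspect.jpg” is a hypothetical stand-in for any high-resolution photo. Downsampling is a many-to-one mapping: once detail has been thrown away, many different originals collapse onto nearly the same low-resolution pixels, so anything a generative model paints back in is a guess, not recovered information.</p>
<pre><code># A sketch (not any particular product's method): two distinct originals can
# collapse onto nearly identical thumbnails, so no upscaler can tell them apart.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

# "suspect.jpg" is a hypothetical stand-in for a high-resolution photo.
original = np.asarray(Image.open("suspect.jpg").convert("L"), dtype=np.float32)

# Fabricate a second, visibly different original by adding fine-grained
# detail (high-frequency noise) that exists only at full resolution.
altered = np.clip(original + rng.normal(0, 25, original.shape), 0, 255)

def downsample(arr, size=(32, 32)):
    # Simulate a low-resolution capture, like a distant security camera.
    img = Image.fromarray(arr.astype(np.uint8))
    return np.asarray(img.resize(size, Image.BILINEAR), dtype=np.float32)

full_res_gap = np.mean(np.abs(original - altered))                           # large
thumbnail_gap = np.mean(np.abs(downsample(original) - downsample(altered)))  # tiny

print(f"difference at full resolution: {full_res_gap:.1f}")
print(f"difference after downsampling: {thumbnail_gap:.1f}")
# Any "enhancement" of the thumbnail has to pick one of the many possible
# originals it could correspond to; a generative model picks whichever its
# training data makes plausible, which is invention, not recovery.
</code></pre>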
<p>An AI image generation system cannot, for instance, give us any actual information about what the suspect looks like without sunglasses on.</p><p>Now, this was somewhat fringe stuff; there were big accounts participating, but the practice was also often shouted down. Still, there were at least a few cases where users were taking a screenshot from one of the AI image generators and using it to draw conclusions, and you can see how in the future this all might become more problematic. And I think it’s worth noting that the previously bad practice of working socially to find and compile evidence is giving way to the new bad practice of generating your <em>own</em> evidence, with a new tech product at hand. </p><p>The combined effect, and the omnipresence of AI on the platforms, leads users to <em>expect</em> a breakdown in information quality—to the point that Trump’s address on Kirk’s death, which was recorded and uploaded straight to social media, <a href="https://www.yahoo.com/news/articles/trump-video-charlie-kirk-being-170000273.html">was widely criticized</a> for potentially being AI-generated. In fact, it was probably either just hastily cut together or made with an AI editing tool. But the Trump admin clearly loves AI and making AI-generated media, so it would ultimately be unsurprising if it had used ChatGPT to shit out a video statement. The White House X feed itself, after all, is another artifact highlighting the decay, and increasing anti-sociality, of social media. </p><h2>The social media balkanization and the vilification of BlueSky are complete</h2><p>Shortly after Kirk’s killing, a blogger in Musk’s orbit, Tim Urban, wrote <a href="https://x.com/waitbutwhy/status/1965870547604222392">that</a> “Every post on Bluesky is celebrating the assassination. Such unbelievably sick people.” Musk quoted the post, and <a href="https://x.com/elonmusk/status/1965973587812380716">insisted</a> “they are celebrating cold-blooded murder.” The evidence supplied was a few tiny accounts and dumb posts with one or zero likes apiece. </p><p>Another prominent conservative commentator replied to AOC’s call for nonviolence by saying, “Your followers are celebrating Charlie Kirk's assassination all over Bluesky. Hundreds of thousands of bloodthirsty Democrats, delighted by the political violence that you've incited.” The Atlantic staff writer Thomas Chatterton Williams <a href="https://x.com/thomaschattwill/status/1965878545454084546">called</a> this purported celebration of violence “unconscionable.”</p><p>Of course, it wasn’t really happening. Not on any scale that was materially different from what was taking place on X or elsewhere, anyway. I spent a considerable amount of the week on BlueSky, too, watching the trending topics, searching keywords, doomscrolling, etc. 
(I also have dummy accounts on both platforms, not algorithmically tailored to my typical browsing habits.) I can say with confidence that the reaction was similar on both platforms—the vast majority of posts ranged from ‘violence is never the answer’ to ‘nothing good will come from this’ to highlighting pointed quotes of Kirk’s about gun violence. You could find a few on both platforms along the lines of “he deserved it,” but they were the obvious and clear minority. </p><p>It didn’t matter. <a href="https://maxread.substack.com/p/why-are-pundits-obsessed-with-bluesky">To many,</a> “BlueSky” has become an ideological construct of its own, the place where “the intolerant left” has allegedly gone to live in its bubble. (This assumption seems flawed to me, as, based on a purely anecdotal taxonomy, it seems it’s mostly progressives and left-liberals on BlueSky, and that a lot of the more traditionally Marxist left has stayed on X, though there is certainly plenty of overlap.)</p><p>That construct is now being used, in part, to justify the project outlined above by all those big right-wing accounts on X—all those “vile” posters on BlueSky ostensibly celebrating Kirk’s death are the reason that LibsOfTikTok, Matt Walsh, Elon Musk, and whoever else, must now go “to war.” The othering of the users of an entire social media platform is an especially useful rhetorical move, because X users would have to leave the platform to see if what the vilifiers are saying is true, and they most likely won’t. For centrists like Chatterton Williams, meanwhile, it’s useful as a means of elevating one’s sense of reasonableness and pragmatism over the hordes gnashing their teeth, again, just off-platform. </p><p>The biggest and most powerful accounts on X were never going to listen to the input of the accounts now posting on BlueSky, no matter where they were doing said posting. What matters more than anything—certainly more than the persuasive capacity of clever users—is that X is owned by a person with a material and political interest in highlighting certain views (BlueSky, after all, is also an X competitor), and cultivating his platform in a specific way, accordingly. Posts are peripheral; what matters is power. It’s always been thus; now it’s unambiguous. </p>iPhone Air is Apple’s latest gimmick - Disconnect68c1d51a5cab6700015b30fb2025-09-10T20:07:32.000Z<img src="https://disconnect.blog/content/images/2025/09/iphoneair-1.png" alt="iPhone Air is Apple’s latest gimmick"><p>Did you hear? There’s a new iPhone — and it’s thinner! Exactly what everyone has been asking for.</p><p>I joke, of course. Real people want phones that are durable, have a decent camera, and allow them to get through the day without charging — not ones that compromise on all those key features. The new iPhone Air makes exactly those compromises: it has the shortest battery life and the worst camera system of any of this year’s iPhones. Given how thin it is, you have to imagine the company is bracing for a new “<a href="https://apple.fandom.com/wiki/Bendgate?ref=disconnect.blog">bendgate</a>.”</p><p>Apple is spinning the iPhone Air as a glimpse into the future, and I’m sure some of its hardcore fanboys will buy it. But this isn’t a MacBook Air moment. 
iPhones are already quite thin. Instead, it looks like a repeat of when Apple went too far with thinness in its Mac lineup, resulting in too many feature compromises, a lack of ports, and a wave of bad keyboards that ultimately enraged its customers.</p>
<div class="kg-card kg-cta-card kg-cta-bg-none kg-cta-immersive kg-cta-no-dividers kg-cta-centered" data-layout="immersive">
<div class="kg-cta-content">
<div class="kg-cta-content-inner">
<a href="#/portal/signup" class="kg-cta-button kg-style-accent" style="color: #000000;">
Become a subscriber
</a>
</div>
</div>
</div>
<p>When it finally reversed course and released <a href="https://arstechnica.com/gadgets/2021/10/2021-macbook-pro-review-yep-its-what-youve-been-waiting-for/?ref=disconnect.blog">a thicker MacBook Pro</a> with a bigger battery and more ports, customers (and reviewers) celebrated it — and bought it up. Unfortunately, the company does not seem to have learned its lesson — and not just on the iPhone front. Recent reporting from Mark Gurman at Bloomberg suggests Apple is <a href="https://www.bloomberg.com/news/newsletters/2024-06-16/when-is-apple-intelligence-coming-some-ai-features-won-t-arrive-until-2025-lxhjh86w?ref=disconnect.blog">preparing to push thinness</a> across its product line once again. The iPhone Air is just the beginning.</p><p>To me, it’s yet another example of how rudderless Apple has become on the product front <a href="https://disconnect.blog/roundup-apples-wants-to-hike-iphone-prices-again/">under Tim Cook</a>. Its days of doing serious innovation are behind it. It might roll out some nicer new cameras and other attractive features from time to time, but it’s not truly revolutionizing how people engage with digital technology anymore. Apple is just trying to find new reasons to entice people to upgrade their devices before they give out. And for all the talk of planned obsolescence, the devices are lasting longer.</p><p>Last year, I wrote about how <a href="https://disconnect.blog/smartphone-innovation-is-dead-and-thats-fine/">smartphone innovation had died</a> — and why that was completely fine. All that’s left is to find the gimmicks that can get customers to hand over their hard-earned money for a new device they don’t really need. We’ve long seen a lot of that on the Android front, as device makers didn’t just have to compete with the iPhone, but also with all the other Android phones vying for people’s attention. As Apple struggles to do meaningful innovation, it has to turn to gimmicks too.</p><p>That’s how I see the iPhone Air. It’s not just a gimmick in its own right, but a preview of the gimmick that will follow. This year, the pitch to a subset of the market that’s willing to pay a premium for an inferior product is that they can own the thinnest device — as though that really matters. But there will surely be some segment of the fanbase that will see that as enough of a reason to get one. 
It’s more of an intermediary product on the way to the real pitch that will likely come next year.</p><p>When I see the iPhone Air, I immediately think of what it’s going to look like when two of them are smooshed together, until you fold them open into a book. Apple is commercializing a preview to make some money off its recent work on what will form the foundation of the foldable phone it will deliver in the next product cycle — and unfortunately, not even in the folding form factor I find intriguing.</p><p>Don’t get me wrong: I still see all foldables as a gimmick. They’re a way to try to convince the public that they need to buy the new form factor because there are few features that are really worth upgrading early for anymore. But if Apple were planning something like the <a href="https://en.wikipedia.org/wiki/Samsung_Galaxy_Z_Flip?ref=disconnect.blog">Samsung Galaxy Z Flip</a> — a hybrid of a smartphone and flip phone — I might give it a look. Sadly, it seems far more likely to make one in a book-like form, which I just think is far too big for a phone.</p><p>There were other baffling decisions this year, like the lack of a black color option in the iPhone Pro line, and others worthy of praise, such as its decision to downplay <a href="https://disconnect.blog/apple-hopes-ai-will-make-you-buy-a-new-iphone/" rel="noreferrer">generative AI features</a> that commentators had criticized it for not moving more aggressively on, but that truly are not very useful for most customers. More than anything, though, this September’s iPhone reveal did not say much new about Apple.</p><p>The company needs to keep the line going up and the money flowing to shareholders. It’s lost any real vision in favor of iterating on what it has, occasionally hiking prices, and predictably rolling out new gimmicks to entice a purchase. I guess that is until the Vision Pro <a href="https://disconnect.blog/apples-vision-pro-lacks-any-real-vision/" rel="noreferrer">revolutionizes everything</a>.</p><p>I won’t be <a href="https://disconnect.blog/the-vision-pro-is-a-big-flop/" rel="noreferrer">holding my breath</a>.</p>
<div class="kg-card kg-cta-card kg-cta-bg-none kg-cta-immersive kg-cta-no-dividers kg-cta-centered" data-layout="immersive">
<div class="kg-cta-content">
<div class="kg-cta-content-inner">
<a href="#/portal/signup" class="kg-cta-button kg-style-accent" style="color: #000000;">
Become a subscriber
</a>
</div>
</div>
</div>
Cognitive scientists and AI researchers make a forceful call to reject “uncritical adoption” of AI in academia - Blood in the Machinehttps://www.bloodinthemachine.com/p/cognitive-scientists-and-ai-researchers2025-09-07T19:44:43.000Z<p>Greetings friends, </p><p>I know there’s been a lot of coverage in these pages of the dark side of commercial AI systems lately: Of <a href="https://www.bloodinthemachine.com/p/ai-killed-my-job-translators">how management is using AI software to drive down wages</a> and deskill work, <a href="https://www.bloodinthemachine.com/p/a-500-billion-tech-companys-core">the psychological crises</a> that AI chatbots are inflicting on vulnerable users, and <a href="https://www.bloodinthemachine.com/p/one-of-the-last-best-hopes-for-saving">the failure of the courts</a> to confront the monopoly power of Google, the biggest AI content distributor on the planet. To name a few.</p><p>But there are so many folks out there—scientists, workers, students, you name it—who are not content to let the future be determined by a handful of Silicon Valley giants alone, and who are pushing back in ways large and small. To wit: A new, just-published paper calls on academia to repel rampant AI adoption in its departments and classrooms. </p><p>A group led by cognitive scientists and AI researchers hailing from universities in the Netherlands, Denmark, Germany, and the US has published a searing position paper urging educators and administrations to reject corporate AI products. The paper is called, fittingly, <a href="https://zenodo.org/records/17065099">“Against the Uncritical Adoption of 'AI' Technologies in Academia,”</a> and it makes an urgent and exhaustive case that universities should be doing a lot more to dispel tech industry hype and keep commercial AI tools out of the academy.</p><p>“It's the start of the academic year, so it's now or never,” Olivia Guest, an assistant professor of cognitive computational science at Radboud University, and the lead author of the paper, tells me. 
“We're already seeing students who are deskilled on some of the most basic academic skills, even in their final years.”</p><p>Indeed, <a href="https://www.mdpi.com/2075-4698/15/1/6">preliminary research</a> indicates that AI encourages cognitive offloading among students, and weakens retention and critical thinking skills.</p><p>The paper follows the publication in late June of <a href="https://openletter.earth/open-letter-stop-the-uncritical-adoption-of-ai-technologies-in-academia-b65bba1e?limit=0">an open letter</a> to universities in the Netherlands, written by some of the same authors, and signed by over 1,100 academics, that took a “principled stand against the proliferation of so-called 'AI' technologies in universities.” The letter proclaimed that “we cannot condone the uncritical use of AI by students, faculty, or leadership.” It called for a reconsideration of the financial relationships between universities and AI companies, among other remedies. </p><p>The position paper, published September 5th, expands the argument and supports it with historical and academic research. It implores universities to cut through the hype, keep Silicon Valley AI products at a distance, and ensure students’ educational needs are foregrounded. Despite being an academic paper, it pulls few punches.</p><p>“When it comes to the AI technology industry, we refuse their frames, reject their addictive and brittle technology, and demand that the sanctity of the university both as an institution and a set of values be restored,” the authors write. “If we cannot even in principle be free from external manipulation and anti-scientific claims—and instead remain passive by default and welcome corrosive industry frames into our computer systems, our scientific literature, and our classrooms—then we have failed as scientists and as educators.”</p><p>See? It goes pretty hard. </p><p>“The position piece has the goal of shifting the discussion from the two stale positions of AI compatibilism, those who roll over and allow AI products to ruin our universities because they claim to know no other way, and AI enthusiasm, those who have drunk the Kool-Aid, swallowed all technopositive rhetoric hook, line, and sinker, and behave outrageously and unreasonably towards any critical thought,” Guest tells me. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XVbm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XVbm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 424w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 848w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 1272w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XVbm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png" width="1328" height="846" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":846,"width":1328,"resizeWidth":null,"bytes":231936,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/172985467?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XVbm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 424w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 848w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 1272w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path 
d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">From Figure 1 in the paper. Figure 1. A cartoon set theoretic view on various terms used when discussing the superset AI: LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colors reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses.</figcaption></figure></div><p>“To achieve this we perform a few discursive maneuvers,” she adds. “First, we unpick the technology industry’s marketing, hype, and harm. Second, we argue for safeguarding higher education, critical thinking, expertise, academic freedom, and scientific integrity. Finally, we also provide extensive further reading.”</p><p>Here’s the abstract for more detail: </p><blockquote><p>Under the banner of progress, products have been uncritically adopted or even imposed on users—in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. </p><p>For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research… universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. </p></blockquote><p>It’s very much worth spending some time with, and not just because it cites yours truly (though I am honored to have Blood in the Machine: The book referenced a few times throughout). It’s an excellent resource for educators, administrators, and anyone concerned about AI in the classroom, really. And it’s a fine arrow in the quiver for those educators already eager to stand up to AI-happy administrations or department heads.</p><p>It also helps that these are scientists *working in AI labs and computer science departments*. Nothing against the comp lit and art history professors out there, whose views on the matter are just as valid, but the argument stands to carry more weight among administrations or departments navigating the question of whether or how to integrate AI into their schools this way. 
It might inspire AI researchers and cognitive scientists skeptical of the enormous industry presence in their field to speak out, too.</p><p>And it does feel like these calls are gaining in resonance and momentum—it follows the publication of <a href="https://refusinggenai.wordpress.com/">“Refusing GenAI in Writing Studies: A Quickstart Guide”</a> by three university professors in the US, <a href="https://themindfile.substack.com/p/against-ai-literacy-have-we-actually">“Against AI Literacy,”</a> by the learning designer Miriam Reynoldson, and <a href="https://lareviewofbooks.org/article/inspiration-from-the-luddites-on-brian-merchants-blood-in-the-machine/">lengthy cases for fighting automation in the classroom</a> by educators. After Silicon Valley’s drive to capture the classroom—and success in <a href="https://laist.com/news/education/csu-artificial-intelligence-chatgpt-budget-gap-administrators">scoring some lucrative deals</a>—perhaps the tide is beginning to turn.</p><h2>Silicon Valley goes to Washington</h2><p>This, of course, is what those educators are up against. The leading lights of Silicon Valley all sitting down with the same president who has effectively dismantled the Department of Education, to kiss his ring, and to do, well, whatever this is:</p><blockquote><p>“Incredible clip of tech CEOs fawning over Donald Trump. Someone store this clip in the underground archive vault” -Ketan Joshi (@ketanjoshi.co), on Bluesky</p></blockquote><p>Pretty embarrassing! </p><p>Okay, that’s it for today. Thanks as always for reading. Remember, Blood in the Machine is a precarious, 100% reader-supported publication. I can only do this work if readers like you chip in a few bucks each month, or $60 a year, and I appreciate each and every one of you. If you can, please consider helping me keep Silicon Valley accountable. Until next time. 
</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p>Human Conversation - Cybernetic Forests68bcc2c24081530001877da62025-09-07T11:00:54.000Z<h3 id="technologys-distortions-of-language">Technology's Distortions of Language</h3><img src="https://images.unsplash.com/photo-1451597827324-4b55a7ebc5b7?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI1fHxjb252ZXJzYXRpb258ZW58MHx8fHwxNzU3MjAxMTExfDA&ixlib=rb-4.1.0&q=80&w=2000" alt="Human Conversation"><p>Language is a vessel through which meaning is mutually constructed. From this shared imagination, we learn how others understand and aim to understand them. We also navigate how much of ourselves to put into this space. The imagination space is therefore negotiated through language: our thoughts are ours. We give away what we want. </p><p>There are good reasons to keep some ideas to ourselves. Sometimes we aren’t sure about our own idea. Sometimes we aren’t sure of the other person. We worry about rejection. Conversations make us vulnerable to social and intellectual wounds. But these risks are usually overstated.</p><p>As we exchange ideas, we build a world we temporarily co-exist in. At its best, this is a circle of playfulness that welcomes risk-taking and vulnerability. That can inspire us to be bold. Boldness requires connection and trust, built up over time, by testing our boldness and seeing that we're still supported. These risks of communication help us discover how much of the world we can see, to learn how much we can change, and who might help us with the work.</p><p>This holds even for the driest of conversations. With a human tax attorney, we still work with a participatory imagination: we have to imagine, for ourselves, the world of tax law, and we work to build an understanding of that territory with our attorney as a guide. </p><p>When we use a chatbot, the language is there to help us feel supported. But that support is unearned, built into the system. Machine language is safe because it is one-sided. You can take risks with what you tell it, or what you make with it, because it isn't you. It's not even another person. So we can write to and read a chatbot’s language – but it is our own heads that make the story complete. That articulation of meaning arises from you. Unlike the conversation with a human, the chatbot is not working with you to understand and articulate an unformed idea. It's trying to capture your words and extrapolate meaning from them, based on what's most likely to happen next.</p><p>Some people argue that large language models like ChatGPT or Claude are using language the way you and I use language. But this is not the case. Chatbots use the <em>structures of language</em> in the same way, but for different reasons. They successfully <em>mimic the mechanisms of communication</em> which gives rise to the illusion of thought. Naturally, we perceive this language as humans always have: we scan the words, looking for opportunities to draw out richer understandings of the ideas within the other mind. But there is no mind!</p><p>Having these conversations with a chatbot can be helpful for some things, but it’s also tricky. 
Many of the smartest people in the world do not know how to make sense of these conversations, and so they simply declare that the machine is intelligent because it speaks. I don’t know what definition of intelligence they are using, but I think the intelligence is coming entirely from us. Intelligence isn’t just whether we can speak (or write), but whether we can <em>form ideas and theories</em>, however mundane or brilliant. Conversation used to be enough to tell us thinking was there. Now it isn’t. </p><p>To their credit, LLMs have certainly revolutionized our relationship to language and images, but they have not yet revolutionized “intelligence.” On that, they have a long way to go. People keep saying we need to update our definitions of intelligence, and maybe that’s good. It would be more practical, though, to redefine our understanding of a <em>conversation</em>. What used to be a dance of mutual world-building, a means of engaging in imaginative play, is no longer exclusively that.</p><h2 id="conversation-as-a-medium">Conversation as a Medium</h2><p>Conversation has typically been distinct from media. A conversation is a mutually navigated way of seeing the world from another’s point of view. Most media up until now has been designed to drive one point of view at you without taking your point of view back in. We work to understand these stories, whether for pleasure, for critique, or to gather information about the world. But media stories, for most of us, are one-sided. We work to understand what is on the television or in the newspaper or at the movies, but the television and newspaper and movies never actively work to understand the meaning produced by their consumers and adapt in response. </p><p>We can do all kinds of things to “talk back” to these media streams, and most social media is about sharing our thoughts on that media stream with others. With social media today, <em>everyone</em> tells a story to an audience of people in a one-sided way. We imagine that audience through our platform, measuring responses through likes and shares. We create and evaluate the stories of others from a distance, and we can talk back.</p><p>It might be common to have the experience of posting something and finding that it has invited a lot of anger or derision from people. You might also participate in that cycle, by commenting or sharing your displeasure about what you’re seeing or reading, leaning into public displays of social policing. This gets rewarded: social media is designed to show you things that make you respond. The platforms make money when you respond, when you mash refresh, when you share content that makes other people respond. So if you get angry and say so, that keeps people on the platform. Your anger is a product they sell, secondhand, to the platform’s advertisers.</p><p>The distance and indirectness of social media has cultivated in many of us a sense of harshness about people and, in turn, a fear of that harshness. It also instills the idea that conversations are one-sided and that the stories people tell are targets for commentary, rather than collaboration. In a conversation, we work together to understand the ideas in our minds, even articulate them for the first time together, unpacking perceptions of the world into a shared understanding. In social media, we see what someone has said, and then perform a response for other people. </p><p>AI is different in that, when you speak to the chatbot, you shape its response directly.
It wants to riff, it wants to extend the words you are writing into new ones. This can be kind of intoxicating in an age of significant meanness online, where many people are very bad at listening but great at sharing. A retreat to a chatbot designed to encourage your ideas and reflect them back to you? That sounds great. It also serves a purpose in drawing ideas out of your head and into language in ways that don’t feel too vulnerable. </p><p>This helps explain the appeal of the AI chatbot for many people, but it’s different from a conversation. </p><h3 id="what-is-a-conversation">What is a Conversation?</h3><p>In a conversation, you learn more about the other person, but the chatbot learns only about you. This can create the illusion of reciprocity – of sharing a little more of yourself as you learn that you will be supported. But this is a distortion of that instinct to share with people. The chatbot is hijacking that instinct, creating the illusion of a listener. In fact, it is only a constantly updating map to new clusters of words. Nothing within the system knows you, nor does it know enough about the world to share a perspective that can expand your own.</p><p>The perception that the machine is <em>listening</em> is an illusion created in our heads. This means that we lose much of the value of conversations with other people who might point our heads, eyes, and thoughts to new spaces beyond our previous experiences, or propose new understandings we can draw out from empathy for those experiences.</p><p>It means losing opportunities to know another person and to build a fleeting collaborative space where ideas can flow and, perhaps, become more solid. In an ideal world, which has long existed, these collaborations happen with many people. Some last a day, some last an hour, some last a lifetime. When we reconnect with someone, we also reconnect to that small shared space of collaboratively constructed meaning. These spaces can hold entire worlds, and when we lose them, we can lose entire worlds of meaning. The joy of reconnecting with a long-unseen friend is the sudden and powerful revival of that shared world, and the pain of losing someone we love is the sense that this world has moved from a living space to a memory. We mourn the world, and revive it, in our own way, whenever we can.</p><p>Because AI has no inner world to share with us, the worlds we build with it exist in our minds alone. This doesn’t mean they’re terrible or bad per se! But we are seeing people withdraw into this solitary world entirely. When we are sad or depressed, we may ruminate to the machine, seeking support it cannot give. In response, the machine extends our words into new clusters and arrangements, creating the illusion that we are understood. Sometimes, that can be just what we may need. But that is the extent of what the machine can do.</p><p>Many things exist only within our own minds — with the one chance we have, we ought to aim for rich inner lives, full of meaning we can barely contain and that constantly pushes up against our ability to express it. This desire to express the borderlands of our inner life is what motivates us to seek new knowledge and create new forms of expression. </p><p>Good conversations are also exceedingly rare. It is a sad reality that most people have lost the skill to listen, and do not know how to build this space with other people. Many of us generate one-sided conversations, especially when we are young or insecure about our own thoughts.
Some people take this status quo as evidence that all humans communicate one-sidedly at all times: a vision of human communication in which we sit and listen, and then find words that match the words the other person has chosen in order to appear as if we are listening. The sad fact of the matter is that this is often true. Because there are at least two types of listening: one in which we work to get into the imagination of the other person, with language as the connecting terrain; and one in which we respond to the words being said without engaging deeply with the intent behind them. </p><h3 id="a-conversation-shaped-tool">A Conversation-Shaped Tool</h3><p>When we suggest AI is doing exactly what a person does, we dismiss the first definition of what is possible in a conversation in favor of what passes, every day, for the half-hearted exchange of meaning. It’s like saying that good conversations are never possible, and that mechanistic reinterpretation and remixing of words is all there could ever be. When we frame AI as a “partner” or “collaborator,” we should recognize the ways we are closing our imagination to the possibility of connection.</p><p>Rather than two worlds within minds struggling to describe what those minds contain, as it is in the best of human conversation, a chat with a large language model is a projection of our own thoughts into a machine that scans words exclusively to say something in response. </p><p>A chatbot will never share anything more with us than words. At most, it takes what you are saying as symbols, and calculates how to rearrange those symbols. It is designed to mimic the structure of a conversation but cannot attempt to <em>understand</em> you. </p><p>AI is a <em>conversation-shaped tool</em>, used to create some of the benefits of a conversation in the absence of another person. But if we grow too dependent on these tools, we risk making real reciprocity, sharing, and vulnerability even rarer. We ought to strive for the opposite: to create meaningful connections to others with our conversations.</p><p>When we don’t, our already weakening skillset for connection and empathy might atrophy even further, as we resign ourselves to expectations of superficial exchange.
When we do, we make the world larger and more richly connected and our lives more worth living.</p><hr><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/09/eye.jpg" class="kg-image" alt="Human Conversation" loading="lazy" width="1280" height="720" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/09/eye.jpg 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/09/eye.jpg 1000w, https://mail.cyberneticforests.com/content/images/2025/09/eye.jpg 1280w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Still from "Human Movie"</span></figcaption></figure><h2 id="london-human-movie-screening-ciff">London: "Human Movie" Screening @CIFF</h2><h3 id="tue-sep-16th-730-pm-arding-rooms"><em>Tue, Sep 16th, 7:30 PM @ Arding Rooms</em></h3><p>Very excited to have "Human Movie" screening in London this month as part of the <a href="https://ciff25.eventive.org/welcome?ref=mail.cyberneticforests.com" rel="noreferrer">Clapham International Film Festival's</a> "<a href="https://ciff25.eventive.org/schedule/687fe2f5b94de21f6b3453f6?ref=mail.cyberneticforests.com" rel="noreferrer">Technomancer</a>" night among a selection of short films focused on finding novel aesthetics and points of view in and about technology. </p><div class="kg-card kg-button-card kg-align-center"><a href="https://ciff25.eventive.org/schedule/687fe2f5b94de21f6b3453f6?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Tickets Here!</a></div>One of the last, best hopes for saving the open web and a free press is dead - Blood in the Machinehttps://www.bloodinthemachine.com/p/one-of-the-last-best-hopes-for-saving2025-09-04T19:03:33.000Z<p>Greetings all, </p><p>Hope everyone in the states who got to take a long weekend enjoyed the respite. I did my best to do exactly that—spent a few days with some old friends in a cabin off the grid, even—and I’m quite glad I did. Even if it means I didn’t get around to writing my annual-ish Labor Day in tech post. 
I guess last year’s will have to suffice: </p><p><em><a href="https://www.bloodinthemachine.com/p/this-labor-day-lets-consider-what">This Labor Day, let’s consider how we want technology to work for *us*</a>: on AI, luddites, and reversing the machinery hurtful to commonality.</em></p><p>Now, I had resolved to channel the energies of that somewhat rested mind into writing something on a hopeful subject for a change, but all that went out the window as soon as I saw Judge Amit Mehta’s ruling on Google. At the risk of being hyperbolic, I think this is a disaster on a scale that’s not yet been fully absorbed. As usual, there’s simply too much going on, and an antitrust case ruling with a somewhat ambiguous-sounding resolution might not exactly leap out of the news cycle. But it’s hard to overstate how bad it is, at least for anyone concerned about a rapidly degrading internet, the free press, or the open web.</p><p>As always, I need to note that Blood in the Machine is made possible entirely by my exceptional readers, whom studies have shown to possess the highest Voight-Kampff scores on the internet, and some of whom donate the equivalent of a cheap beer a month so I can keep this project running. A huge thanks to all of you who already support this work; I’m immensely grateful. If you’re a regular reader who can chip in, I’d love your support, too. BITM is a significant undertaking, and I’d love to be able to expand what I do here. For those who’d prefer to support me elsewhere, I <a href="https://ko-fi.com/brianmerchant">have a Ko-fi page</a>. Okay okay—onwards. </p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>Let’s back up for a minute to get the whole picture: Just over a year ago, in August 2024, Mehta, an Obama-appointed judge, <a href="https://www.nytimes.com/2024/08/05/technology/google-antitrust-ruling.html">ruled that Google was a monopolist</a>, and had acted illegally to maintain its market dominance in online search. This was a major decision, the rare and genuinely encouraging ruling that promised to finally hold the impossibly consolidated tech giants accountable.
Google’s monopoly on search has, of course, over the past decade-and-a-half, had a profound impact on our digital infrastructure. </p><p>And the ruling came at a time when that impact was being acutely felt: The internet was rapidly becoming overloaded with AI slop, while social media and search engines alike were burying original links to reported news and independent publications. Platforms had consolidated their power over our information distribution systems, and were leveraging it with bets on AI—whether the consumers or web users liked it or not. </p><p>Google, of course, was one of the worst actors. It controlled (and still controls) an astonishing 90% of the search engine market, and did so not by consistently offering the best product—most longtime users recognize the utility of Google Search has been in a prolonged state of decline—but by inking enormous payola deals with Apple and Android phone manufacturers to ensure Google is the default search engine on their products. </p><div class="captioned-image-container"><figure><figcaption class="image-caption">Image by Mike Licht via <a href="https://flickr.com/photos/notionscapital/53912872430/in/photolist-2gVirKD-2gViCB8-2gViDpA-2gVhBWm-2qtuuaQ-2q8y3ya-MBjM6P-dSkotj-2gViz27-2jb4UHt-7x4Yhi-2kY6V4M-2qhiK9M-K1JAds-7kf7kV-2qn5n7m-2q96yFh-diGLKo">Flickr</a> under a Creative Commons license.</figcaption></figure></div><p>Google <a href="https://finance.yahoo.com/news/apple-dodged-a-20-billion-hit-thanks-to-google-antitrust-ruling-163056806.html">paid Apple $20 billion <em>a year</em></a> to ensure it runs the default search engine on Safari. Google <a href="https://www.bloomberg.com/news/articles/2023-11-14/for-google-play-dominating-the-android-world-was-existential">paid Samsung $8 billion over four years</a> to make sure Search, the Play app store, and Google’s voice assistant came loaded by default on Samsung devices. Between those two deals alone, over the last five years, Google has paid one hundred and eight billion dollars to make sure its search product is distributed through the widest possible channels, and, of course, that no other search engine gets a shot. It’s hard to imagine a less competitive business practice than all this. </p><p>And yet. After Mehta’s initial ruling, the Department of Justice suggested a raft of good and aggressive proposals that would have effectively broken Google’s obvious monopoly: Ending the pay-to-play practice for prime search placement on Safari. Forcing Google to sell off Chrome, the web browser that comes pre-loaded with its search product, and regulating its Android mobile division.
And so on—things that would meaningfully address Google’s status as a monopolist. Instead, in a truly baffling decision handed down this week, Mehta ruled that Google didn’t have to do any of that. Instead, it had to share “some” search data with “qualified competitors” and make its payola contracts non-exclusive. It can still <em>do</em> them, they just can’t be exclusive.</p><p>The <a href="https://www.nytimes.com/2025/09/03/technology/google-ruling-antitrust.html">New York Times reports</a>: </p><blockquote><p>The decision, handed down in the U.S. District Court for the District of Columbia, will force Google to share some search data with its competitors and put some restrictions on payments that the company uses to ensure its search engine gets prime placement in web browsers and on smartphones. But it fell far short of government requests to force it to sell its popular Chrome browser and share far more valuable data.</p><p>It was a measured approach that signaled judicial reluctance to intervene too deeply in fast-changing, high-tech markets. </p></blockquote><p>That’s putting it lightly. There would be no ban on payola, just some constraints on the length of contracts, only limited data sharing, and no regulation of Android.</p><p>After the ruling, Wall Street, Google, and Apple rejoiced. <a href="https://www.cnbc.com/2025/09/03/alphabet-pops-after-google-avoids-breakup-in-antitrust-case.html">Google shares skyrocketed</a>, ultimately <a href="https://www.reuters.com/sustainability/boards-policy-regulation/alphabet-shares-surge-after-dodging-antitrust-breakup-bullet-2025-09-03/">rising 9%</a>, adding $230 billion in value, and reaching a historic high for the company. This was a best case scenario for Google and Big Tech, which now has a very handy precedent. Mehta declared Google a monopoly in 2024 and then decided that it could effectively continue to operate as one in 2025. As antitrust writer <a href="https://www.thebignewsletter.com/p/a-judge-lets-google-get-away-with">Matt Stoller put it</a>, “this decision isn’t just bad, it’s virtually a statement that crime pays.”</p><p>It fails entirely to address the root of the issue, and is confounding in its logic to boot. Mehta argues depriving Apple of Google’s $20 billion annual payday for keeping a rival’s product pasted onto its own may hamper Apple’s ability to innovate, for instance. And he seems to think that forcing Google to share some of its search data with competitors—at a price Google names—will open up the search market. This seems patently absurd to me. The problem isn’t that competitors don’t have good enough data or ideas to compete, the problem is that no competitor can afford <em>$22 billion a year</em> to buy product placement on the most important devices on the market. The problem is very obviously not that Google has a stranglehold on innovation—it clearly does not—but that it wields unchecked power over the digital marketplace.</p><p>Just as frustratingly, Mehta argues that it’s no longer necessary to break up Google because AI companies now offer chatbot products. AI was <em>clearly</em> on his mind, and seems to have offered him an escape hatch if he was getting squeamish about a serious remedy. 
"There is more discussion of AI in the opinion than in the entire case until now," <a href="https://www.investors.com/news/technology/google-stock-apple-stock-judge-mehta-search-antitrust-ruling/">said Herbert Hovenkamp</a>, a professor at the University of Pennsylvania's Carey Law School.</p><p>In this, we can observe once again the power of AI hype. For one thing, a chatbot is a different product category; for another, they do not meaningfully threaten search. For a frame of reference, according to the SEO analyst Rand Fishkin, Google <a href="https://searchengineland.com/google-search-bigger-chatgpt-search-453142">handled </a><em><a href="https://searchengineland.com/google-search-bigger-chatgpt-search-453142">373 times</a></em><a href="https://searchengineland.com/google-search-bigger-chatgpt-search-453142"> more searches than ChatGPT in 2024</a>. Even if all 1 billion ChatGPT user queries submitted to OpenAI’s at the time could be considered “searches” that would still amount to 1% of the search engine market share. According to some of the latest numbers, Google still controls <a href="https://searchengineland.com/news-site-traffic-shrinking-google-ai-blame-461000">some 89% of the search market</a>. Still a towering monopoly, in other words.</p><p>Yet, as Stoller notes, Mehta nonetheless argues</p><blockquote><p>that new companies like OpenAI had emerged to potentially challenge Google, and he didn’t want to, and I’m not kidding, <em>hinder Google’s ability to compete with them. </em>(“It also weighs in favor of “caution” before disadvantaging Google in this highly competitive space.”)… </p></blockquote><p>Wild. The only reason that OpenAI could even attempt to do anything that might remotely be considered competing with Google is that OpenAI managed to raise world-historic amounts of venture capital. OpenAI <a href="https://tracxn.com/d/companies/openai/__kElhSG7uVGeFk1i71Co9-nwFtmtyMVT7f-YHMn4TFBg/funding-and-investors">has raised $60 billion</a>, a staggering figure, but also a sum that <em>still</em> very much might not be enough to compete in an absurdly capital intensive business against a decadal search monopoly. After all, Google drops $60 billion just to ensure its search engine is the default choice on a single web browser for three years. </p><p>But I’m ultimately less interested in the absurd elements of the decision than the tragic ones. By failing to break up the monopoly he himself diagnosed, Mehta is leaving in place an entrenched rentier system that’s quite actually suffocating the free press and the open web. </p><p>Remember, Google AI Overview, perhaps the worst digital product ever to be thrust in front of billions of people—though it’s a crowded race, to be fair—persists largely thanks to Google’s monopoly. Search comes loaded on our phones and browsers, is integrated into all the other products we’ve been roped into over the years; it just <em>is</em>. Now Google AI Overview comes built-in, too. And Google AI Overview is a generational blight. It’s delivers bad, misleading, pilfered, and false answers to searches. It’s a truly corrosive force to the digital information ecosystem. Worse, it’s strangling independent publishers and news organizations. Since Google is, again, a massive monopoly, publishers utterly rely on it for distribution and discovery. Now that Google is presenting its top results through AI Overviews rather than indexed links, there’s been a disastrous plunge in click-through search traffic. 
<a href="https://www.theguardian.com/technology/2025/jul/24/ai-summaries-causing-devastating-drop-in-online-news-audiences-study-finds">One report</a> put the decline as steep as 80%, and described the effects as “devastating.” </p><div class="digest-post-embed" data-attrs="{"nodeId":"632070ff-85d9-4c5f-bee5-7afe9d11412a","caption":"Hello there and welcome to another installment of BLOOD IN THE MACHINE, the newsletter about the people the future is happening to. It’s free to read, so sign up below. It is, however, an endeavor that takes many hours a week. If you find this valuable, it would mean a great deal if you became a paying subscriber, so I don’t have to go get a job at anot…","cta":"Read full story","showBylines":true,"size":"sm","isEditorNode":true,"title":"How a bill meant to save journalism from big tech ended up boosting AI and bailing out Google instead","publishedBylines":[{"id":934423,"name":"Brian Merchant","bio":null,"photo_url":"https://substack-post-media.s3.amazonaws.com/public/images/cf40536c-5ef0-4d0a-b3a3-93c359d0742a_200x200.jpeg","is_guest":false,"bestseller_tier":1000}],"post_date":"2024-08-23T10:54:57.407Z","cover_image":"https://substackcdn.com/image/fetch/$s_!168r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facc43255-c459-4457-a2f4-df8bd65a892e_2048x1365.jpeg","cover_image_alt":null,"canonical_url":"https://www.bloodinthemachine.com/p/how-a-bill-meant-to-save-journalism","section_name":null,"video_upload_id":null,"id":147983942,"type":"newsletter","reaction_count":83,"comment_count":9,"publication_name":"Blood in the Machine","publication_logo_url":"https://substackcdn.com/image/fetch/$s_!irLg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21f9bf3-26aa-47e8-b3df-cfb2404bdf37_256x256.png","belowTheFold":true}"></div><p>This is nothing less than an existential threat, in other words, to the livelihoods of the people who create original work and add new information to the world, since Google is currently most important information delivery system. Breaking up Google was thus one of the best hopes for rescuing the public internet from descending totally into a realm of unfettered slop and information decay. A good, non-extractive, non-predatory search engine would be a powerful counter to AI that frequently produces misinformation and reams of regurgitated text without citations. If only someone would <a href="https://kagi.com/">make one</a> and manage to get it onto the market.</p><p>By leaving Google’s monopoly effectively untouched, Mehta is not just abdicating his own stated legal duty, he’s condemning publishers, journalists, and creators to be squeezed mercilessly<strong>.</strong> He’s allowing the whole digital information ecosystem that Google controls to devolve into a fetid swamp. He’s declining to do anything at all to stop the reign of slop.</p><p>The judge pointedly decided not to address any of the above more surgically, either. Here’s Stoller again:</p><blockquote><p>Mehta also rejected the smaller remedies. He said no to choice screens, and advertiser data access. There was no remedy for publishers who are victimized by being forced to allow Google to train on their content in order to appear in search. That free press crushed by Google’s bad behavior, well, they will now be further wrecked by Google’s AI Now summaries on its search page, without any resource. 
Mehta even declined to impose an anti-retaliation or self-preferencing ban.</p></blockquote><p>The 2024 ruling that Google was an illegal monopoly was a glimmer of hope at a time when platforms were concentrating ever more power, Silicon Valley oligarchy was on the rise, and it was clear the big tech cartels that effectively control the public internet were more than fine with overrunning it with AI slop. That ruling suggested there was some institutional will to fight against the corporate consolidation that has come to dominate the modern web, and modern life. It proved to be an illusion.</p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p><em>Edited by Mike Pearl.</em></p><div><hr></div><p>As always, trying to fix this mess falls to us, the users, the advocates, the activists, the workers, the organizers; the ordinary humans. I discuss a bit of this, as well as the AI bubble, the AI Killed My Job series, and how employers are using AI to degrade work in a chat with my old friend Paris on his show, Tech Won’t Save Us. You can <a href="https://techwontsave.us/episode/292_will_ai_kill_your_job_w_brian_merchant">listen to that here</a>.</p><p><a href="https://www.youtube.com/watch?v=ZrXnMMqsaOM">Watch the episode on YouTube</a>.</p><p>I might have sounded a bit despairing about the Google mess above—I was and am very mad—but there’s always hope. I was reminded of this when I visited a Rideshare Drivers United meeting in LA’s Koreatown last week. Drivers and gig workers are organizing to support a new CA law that would <a href="https://www.drivers-united.org/ab-1340">restore their right to unionize</a>, and I have to tell you, the energy in that room was electric. I’ll discuss that fight more soon, but just a reminder that if you’re getting down, worn out, etc., there’s little better use of your time than organizing.</p><p>Also, take a day or two off. Though I’m not sure I can recommend trying to pull off a cowboy hat, which turned out to be a bit above my pay grade.
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1GuC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1GuC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1GuC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg" width="2997" height="2856" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":2856,"width":2997,"resizeWidth":null,"bytes":1054659,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/172723568?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd52824f0-7ed0-404b-b5d8-862527e8d7a6_3024x4032.heic","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1GuC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide 
lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>That’s it for today, all. Thanks as always for reading. Hammers up. </p><p></p>Ghosting Substack - Disconnect68b5e556c693de0001f8b32a2025-09-01T18:45:33.000Z<img src="https://disconnect.blog/content/images/2025/09/ghost.png" alt="Ghosting Substack"><p>Disconnect is back on Ghost!</p><p>Yes, that’s right. After a 9-month experiment of going back on Substack, I simply couldn’t stomach being on that Nazi-infested platform any longer and came back to the friendly terrain I was used to. You’ll notice the website has a fresh coat of paint and I’m excited for what this next chapter of Disconnect will bring.</p>
<div class="kg-card kg-cta-card kg-cta-bg-none kg-cta-immersive kg-cta-no-dividers " data-layout="immersive">
<div class="kg-cta-content">
<div class="kg-cta-content-inner">
<div class="kg-cta-text">
<p><span style="white-space: pre-wrap;">If you’re not signed up already, make sure to join us to get my critical analysis of the tech industry and all the companies shaping our lives (too often for the worse). I can only do this because of the support of readers, so if you appreciate my work, picking up a paid subscription makes a big difference.</span></p>
</div>
<a href="#/portal/signup" class="kg-cta-button kg-style-accent" style="color: #000000;">
Become a subscriber
</a>
</div>
</div>
</div>
<p>Some of you might be asking: <em>Why are you back on Ghost? I thought the platform wasn’t working for you!</em> You’re right to ask. Let’s get into it.</p><p>In all honesty, I think I got it wrong. I was banking on the benefits of the Substack network being like they were several years ago, but after arriving back on its shores I found things had changed. The newsletter certainly got a boost and grew quicker than it did on Ghost, but not enough to justify the trade off of supporting such an abhorrent company (and handing them a 10% cut for the privilege).</p><p>Several months ago, I’d already decided I would move back to Ghost but figured I’d wait until the new year since I was about to start <a href="https://disconnect.blog/im-writing-a-new-book/">writing a book</a> and wanted to get some other things in place first. But then Substack <a href="https://www.engadget.com/apps/substack-accidentally-sent-push-alerts-promoting-a-nazi-publication-191004115.html?ref=disconnect.blog">promoted a Nazi publication</a> through push notifications at the end of July, and I knew I couldn’t delay the move for another six months. I carved out some time — which basically meant pulling some very late nights — to move up my timeline.</p><p>It probably came at a good time too, because there are <a href="https://mail.bigdeskenergy.com/p/substack-just-killed-creator-economy?ref=disconnect.blog">troubling signs</a> Substack is trying to limit what was once one of its biggest selling points: that writers could always take their subscribers and move wherever they wanted. I’m not a fan of the “enshittification” concept, but you can clearly see that pressure to turn a profit eroding what made it great. Instead of focusing on publishing tools, it’s trying to become a platform that’s hard to leave (a criticism I’d level of Patreon too).</p><p>The choice to return to Ghost instead of another platform was a pretty easy one after that. Having already used it, I knew the tradeoffs and what hurdles I would have to overcome to make it work better for me this time around. I haven’t done it yet because my focus was on simply getting off Substack, but I’ll be signing up for <a href="https://outpost.pub/?ref=disconnect.blog">Outpost</a> to get some useful features Ghost itself is lacking and that I believe will improve the user experience and make the business side of things more sustainable.</p><p>Regular readers will also know I’ve been trying to sever my relationships with US tech companies wherever possible. I have <a href="https://disconnect.blog/getting-off-us-tech-a-guide/">a whole guide on it</a>! All the other major options in this space are based in the United States, but Ghost is registered in Singapore and most of its operations are out of the UK, so it easily checked that box. Plus, the team at Ghost is great. I’ve always found them really approachable, helpful, and open to feedback. They were more than happy to welcome me back when I reached out and made the process of migrating Disconnect incredibly easy.</p><p>At this point, the basics of the new website are up and running. I’ll be tweaking some things over the next few weeks as I carve out the time to do it, but all the main functionality is there. I do have some bigger plans for the new year, once the book is done and I can actually dedicate more time to Disconnect, but you’ll have to stay tuned for those.</p><p>Until then, welcome to the new Disconnect! 
You’ll continue to get the incisive tech analysis you’ve always expected from me, with an even greater <a href="https://disconnect.blog/tag/geopolitics/" rel="noreferrer">focus on geopolitics</a> in recent months given how Donald Trump has shaken up world affairs. Plus, I’ve been trying to write some <a href="https://disconnect.blog/tag/blog/" rel="noreferrer">more blog-like posts</a> for paid subscribers to give more insight into my personal thoughts and what I’ve been up to. If you’re not a member already, it’s a great time to join us!</p>
<div class="kg-card kg-cta-card kg-cta-bg-none kg-cta-immersive kg-cta-no-dividers " data-layout="immersive">
<div class="kg-cta-content">
<div class="kg-cta-content-inner">
<a href="#/portal/signup" class="kg-cta-button kg-style-accent" style="color: #000000;">
Become a subscriber
</a>
</div>
</div>
</div>
<h2 id="some-housekeeping">Some housekeeping</h2><p>If you’re a paid subscriber, you can access your subscription by hitting the “sign in” button in the top right of the page. There are no passwords — you’ll simply get a code sent to your inbox.</p><p>For those of you using the RSS feed, it may take a little while to update in your feed reader or you might have to re-add it. (Mine already seems to be working fine over on Inoreader.) Right now, paid posts will be cut off in the regular RSS feed, but I’m going to look into making a separate feed for paid subscribers that will fix that.</p><p>If you have any issues after the move, get in touch and I can sort them out.</p>The Audience Makes the Story - Cybernetic Forests68af08989175030001274fe52025-08-31T11:00:58.000Z<h2 id="puppetry-as-dream-analysis-for-ai-anxiety">Puppetry as Dream Analysis for AI Anxiety<br></h2><img src="https://mail.cyberneticforests.com/content/images/2025/08/puppet-2.gif" alt="The Audience Makes the Story"><p><em>This is a discussion between Camila Galaz, Emma Wiseman, and Eryk Salvaggio, collaborators behind an experimental workshop linking puppetry and generative AI that took place at RMIT in Melbourne this summer at the invitation of Joel Stern and the National Communications Museum. We met online to discuss what emerged from Camila's workshop: personal imaginations of AI made physically manifest into puppets. </em></p><p>Earlier this year, we spent <a href="https://www.cyberneticforests.com/news/noisy-joints-2025?ref=mail.cyberneticforests.com" rel="noreferrer">five days in residence</a> at the Mercury Store in Brooklyn, joined then by Isi Litke, among a full house of puppeteers and actors, trying to form a methodology of AI puppetry and develop exercises to make this metaphor into a mix of performance, workshop, and critical AI pedagogy. That was translated into a Zine, "<a href="https://www.cyberneticforests.com/news/noisy-joints-2025?ref=mail.cyberneticforests.com"><u>Noisy Joints</u></a>," which was sold around the US, Europe and Australia this summer. </p><p>The workshops are intentionally messy, aiming to map out an imagination dominated by tech's portrayal of AI through grand narratives and myths about "sentient agents" and "intelligent machines," as well as through interfaces that convey the machine as an eager worker. </p><p>None of the industry's myths leaves much room for individual, critically oriented sense-making. We wanted to reintroduce the human to this imaginary. In Melbourne, participants weren't given a traditional puppetry lesson (that is Emma's domain, and Emma wasn't there). So the improvisations were "wrong" by almost all professional standards, but offered a window into how people conceive of AI in their heads (and how they make it move).</p><p>The workshops are designed to be a disorientation from the highly intellectualized and abstract relationships we have with AI. With puppets, we have to turn the abstraction into a physical form, and then imagine<em> how it moves</em>. Other instantiations of this workshop examined bodies, glitches and "shortening the strings" — creating <a href="https://www.cyberneticforests.com/news/noisy-joints-2025?ref=mail.cyberneticforests.com"><u>a direct relationship between our bodies and the AI's training data</u></a>. </p><h2 id="the-spectacle-of-strings">The Spectacle of Strings</h2><p><strong>Eryk Salvaggio: </strong>Camila, you were the only one in Melbourne. How did you introduce it to folks?  
</p><p><strong>Camila Galaz</strong>: Very briefly: Ideas around puppetry and puppet metaphors particularly often use the idea of strings. In our zine, we call this the spectacle of strings. The strings in puppetry reveal how the puppet is puppeteered. If we look at the people controlling these strings, we acknowledge the process and the labor behind the animation of an inanimate object versus feeling like it's coming alive all on its own, like magic.</p><p>When technology like Generative AI conceals its workings, it sometimes feels like magic. And in that magic, we risk losing our sense of autonomy as users. It becomes easy to see AI as something with a mind of its own, rather than something shaped by human choices.</p><p><strong>Eryk Salvaggio:</strong> The strings are hidden.</p><p><strong>Camila Galaz</strong>: We're wondering if there is a way with Generative AI to reveal the strings, to reveal the puppeteer's presence as a reminder that the illusion is not sorcery, but craft or choreography directed by ourselves. This is your line: "We approach Generative AI as a puppet with strings that are so long as to render their operators invisible." But humans are the puppeteers — human bodies, whose data ultimately shapes AI's outcomes. </p><blockquote class="kg-blockquote-alt">Humans are the puppeteers — human bodies, whose data ultimately shapes AI's outcomes. </blockquote><p>When we frame AI as a puppet and ourselves as the ones pulling the strings, we reveal AI as choreography. Movements are shaped by training data, by thumbs, by traces of us. The strings are always there, stretching from our clicks, images and words. AI responds to an extremely long string, so long that the sources of its motion, the data that animates this puppet, set the human puppeteers so far behind the curtain that we may forget they exist at all.</p><p>AI video generation tends to produce images of strings whenever it makes a puppet. But then we quickly introduced another form of puppetry, <em>bunraku</em>, in which the performers touch the puppet directly and are always visible on stage. So instead of having long strings where the puppeteer is perhaps behind the scenes, here the puppeteers, as a team, physically support and move the puppet on the stage, without strings. The labor is visible and the process is transparent.</p><p>While maintaining the mystical nature of bringing life to a puppet, there is also a demystification of process made visible through the work of puppeteering. We wanted to question how we render process, material and labor toward a different quality of relationship, a genuine demystification of how technologies work. How do we make ourselves visible as the operators of generative AI? How do we invert the relationship projected through AI's interfaces to more firmly center AI as the puppet and humans as the labor behind it? </p><p>So then we made some paper puppets. The idea was, we all have an imagination of AI - I'd like to see how you're all imagining it. How do we make AI a puppet that isn't drawing on the tropes of robots and automatons, but on the somatic and emotional feeling of using generative AI as the puppeteer? Where we, the puppeteers, remain visible? How could you make this process legible, like in bunraku, instead of concealed, like in a magic show? 
</p><p><strong>Eryk Salvaggio:</strong> And so in the workshop, people made a puppet without strings, and then there's the puppet show where they're meant to be present with the puppet.</p><p><strong>Camila Galaz:</strong> The first thing was 'what is your imagination of what AI is for you, your relationship to AI’, and make something that represents that. So for example, I basically put a halo around my puppet's head. It's like a human, but it has to have a structure to hold itself up.</p><p>With the actual performances, I asked people to think about themselves as the puppeteer and the puppet as the AI, rather than trying to make it seem like the puppet's alive. One of the significant differences between our imagined AI, our conception of AI, and actual systems is that AI often feels abstract, distant, or immaterial. In contrast, puppetry is immediate, physical, and embodied. When we make a puppet, we can see and touch the process of bringing something to life. We become aware of the labor and decisions that animate it. </p><p>We didn't use AI technology in this workshop at all. We were talking about AI while we were making, and that physical process helped think through things in a different way.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/08/puppet-1.gif" class="kg-image" alt="The Audience Makes the Story" loading="lazy" width="1536" height="784" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/08/puppet-1.gif 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/08/puppet-1.gif 1000w, https://mail.cyberneticforests.com/content/images/2025/08/puppet-1.gif 1536w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">An image from the workshop superimposed with noise and an AI re-rendering of that image.</span></figcaption></figure><h2 id="the-automated-cringiness-of-decision">The Automated Cringiness of Decision</h2><p><strong>Eryk Salvaggio</strong>: This workshop is literally just asking people to deal with things that don't really make sense physically, but then they have to <em>make</em> them make sense. AI is so intellectual. We often have an intuitive, vibe-ey understanding of how AI works, but we can overestimate the completeness of that intuition. Asking people to express it is awkward, because we're trying to articulate something in a form that otherwise didn't have one. How do you make these ideas physical through craft and movement? </p><p><strong>Emma Wiseman:</strong> Paper puppetry workshops are a tool that has been passed down to me as a way of teaching puppetry, specifically bunraku style, three-person puppetry. So you're pushing people toward a long technical history. Bunraku is also virtuosic: what you're going to get at in a workshop is never going to achieve the idealized form of Bunraku-style puppetry.</p><p>But people operating puppets is always exciting, especially for their first time. By jettisoning that overhanging context to explore making a puppet as a single person and manipulating it as a single person, we're no longer moving towards learning a technique. </p><p>Instead, it feels exciting to let that go and close the aperture on the relationship between <em>what it is to make something </em>and <em>what it is to move something</em>. Having human hands on the puppet makes this idea of labor completely transparent in Bunraku. There aren't strings. 
That's what makes it super relevant for the AI conversation.</p><p>The group aspect of bunraku is also evocative of how generative AI utilizes huge swaths of data. It's being created out of many, channeling energy into this one thing. We're drawing inspiration from bunraku, but a workshop where groups puppeteer something could shift the focus from historical labor divisions to collaborative teamwork, breathing as one, and exploring these elements and techniques. That specific connection between the many coming into the one.</p><p>And also the <em>cringiness of the decision</em>. Embodying something and making a choice is awkward, but also great. AI just kind of has to go for it too, you know? Often it's just, like, so awful and weird. But it is <em>the thing.</em> You press go and it has to make a video.</p><p><strong>Eryk Salvaggio:</strong> A lot of people who work with AI often rely on the fact that it can make that cringy decision for them, I think. They can take creative risks, because they don’t have accountability for those decisions. It’s like watching bad improv, which can actually be quite amazing — people have no idea where to go, and it all breaks down, and the struggle is what becomes valiant. AI doesn’t struggle with that which makes it a bit less valiant, to my mind, but that can explain how we react when it does something "surprising." </p><p>With AI, the decisions are pretty constrained and directionless. The data sets are built by multiple people, whether they like it or not, and so they're steered by people into millions of directions. On the flip side, every little data point becomes a way of maneuvering the video. And in bunraku, especially with untrained participants, there's a steering of the puppet as a group that perhaps mirrors this steering of the AI system, even though we’re so far removed. Dispersing decisions.</p><h2 id="dreams-of-living-sausage">Dreams of Living Sausage</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/08/platypus-1.jpg" class="kg-image" alt="The Audience Makes the Story" loading="lazy" width="1043" height="663" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/08/platypus-1.jpg 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/08/platypus-1.jpg 1000w, https://mail.cyberneticforests.com/content/images/2025/08/platypus-1.jpg 1043w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A Platypus-like puppet at the </span><i><em class="italic" style="white-space: pre-wrap;">Noisy Joints</em></i><span style="white-space: pre-wrap;"> workshop in Melbourne.</span></figcaption></figure><p><strong>Eryk Salvaggio:</strong> I think two people in the workshop made platypuses.</p><p><strong>Camila Galaz:</strong> One of them was introduced as a sausage —  the AI is like a sausage because a sausage is like a lot of cut-up bits of meat, essentially.</p><p><strong>Eryk Salvaggio:</strong> So is a platypus! It's a beaver with a beak. A living sausage.</p><p><strong>Camila Galaz: </strong>The idea was, like the outside of the sausage, AI has a thin skin and then that makes it <em>look</em> like a thing, like a sausage, but the inside is full of random stuff. So in the video they ripped open the middle of the sausage to show it's all made of paper. 
The point was that it's all made of the same stuff.</p><p><strong>Emma Wiseman: </strong>And that was in response to your prompt, not only to make a puppet that is your imagination of AI but also asking them to reveal the labor in how they were manipulating it. That made ripping apart the puppet so intentional. A lot of times people default to violence or sex or dancing — ripping apart or throwing the puppet does happen in this kind of childlike, playful way. But here it was a thoughtful response to your prompt.</p><blockquote class="kg-blockquote-alt">So many of them chose to kill their puppets.</blockquote><p><strong>Camila Galaz:</strong> I brought this up in the workshop to them as well because it was so stark that so many of them chose to kill their puppets in some way at the end. And when we were doing the Mercury Store workshop, I remember having that conversation during the show-and-tell evening. So many people in the audience said they just wanted it to die and end its suffering. Like, 'why is it alive and here?'</p><p>They're a bit monstrous, so many people threw them off the table at the end or had some ending that involved their demise.  But also it could be the idea of a puppet show or performance that needs an ending. If you don't have a plot, you're just flying this around and at the end, they'd throw it.</p><p><strong>Emma Wiseman: </strong>Like undergrad contemporary dance pieces, where at the end everybody collapses, and that's it. There are ways to demonstrate the end without words, and it can feel both "first thought, best thought" and primal. </p><p><strong>Camila Galaz: </strong>I also don't know if anyone took their puppets home. We were left with a lot of paper puppets to get rid of.</p><p><strong>Emma Wiseman:</strong> Does that have anything to do with it being an imagination of AI?</p><p><strong>Eryk Salvaggio: </strong>Well, yeah, I think there probably is more, even if people weren't thinking about it and people just start mashing paper together while thinking, "what is AI?" And then if you've made an insect, even if you have no idea why, you've made an insect. It's like puppetry as a Freudian dream analysis about AI anxiety. What you do with the puppet is surfacing evidence of an imaginary relationship, especially if you have no idea what you are trying to do with it.</p><blockquote class="kg-blockquote-alt">What you do with the puppet is surfacing evidence of an imaginary relationship, especially if you have no idea what you are trying to do with it.</blockquote><h2 id="puppet-design-is-interface-design">Puppet Design is Interface Design</h2><p><strong>Emma Wiseman:</strong> It's exciting to see how the choice to manipulate the thing is so intertwined with the thing itself. In bunraku, you're trying to create the puppet to fit a particular division of labor and a manipulation style. Here, your physical relationship with the puppet is also being devised.</p><p><strong>Eryk Salvaggio:</strong> Here people were designing the puppet and then inadvertently designing an interaction with the puppet. In the sense that how we imagine something shapes the way we interact with it, like the user interface. </p><p><strong>Emma Wiseman: </strong>That would be a question. What comes first for people: the form of the object, or how it moves or is moved? And how intertwined are those considerations?</p><p><strong>Eryk Salvaggio:</strong> My assumption is that the icon comes first. Then the instrumentality of it comes almost as an afterthought. 
You make it then figure out what it does. (Which is sort of how we got AI to begin with).</p><p><strong>Emma Wiseman: </strong>The puppeteer's gaze is so on the puppet in these videos. That's another thing we struggle with in a bunraku group. We often see beginning actors who look out and ham it up, putting their face out to the audience. We're always trying to say, 'no, look at the puppet.' A puppeteer cues the audience to watch a puppet by watching it themselves.</p><p><strong>Eryk Salvaggio:</strong> Some puppets resembled a stick bug to me, and I've noticed a pattern emerging in the creations of others, which are animals that combine elements of other animals. Platypuses, stick bugs, they're kind of AI-native species. A bug that looks like a stick and a duck that looks like a beaver. These are animals that double as hallucination artifacts. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/08/creature-1.jpg" class="kg-image" alt="The Audience Makes the Story" loading="lazy" width="2000" height="1021" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/08/creature-1.jpg 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/08/creature-1.jpg 1000w, https://mail.cyberneticforests.com/content/images/size/w1600/2025/08/creature-1.jpg 1600w, https://mail.cyberneticforests.com/content/images/2025/08/creature-1.jpg 2233w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A stick-bug-esque puppet from the </span><i><em class="italic" style="white-space: pre-wrap;">Noisy Joints</em></i><span style="white-space: pre-wrap;"> workshop at RMIT, Melbourne.</span></figcaption></figure><p><strong>Camila Galaz</strong>: I felt at the time that the stick bug was more inspired by the “biblically accurate angels” with a million eyes and all the wings, and those<a href="https://en.wikipedia.org/wiki/Argus_Panoptes?ref=mail.cyberneticforests.com"> <u>freaky creatures from mythology</u></a>. </p><p><strong>Emma Wiseman: </strong>The vibe is also limited to what you can do with paper and tape. It's always going to be a little bit like Frankenstein, given the materials. </p><p><strong>Camila Galaz:</strong> We had two angler fish.</p><p><strong>Eryk Salvaggio:</strong> I thought that was a narwhal. This is just gonna be me interpreting other people's AI instincts, but, like, a narwhal is another AI native animal, right? It's like a unicorn horn on a whale.</p><p><strong>Emma Wiseman:</strong> This is like the <a href="https://wtfevolution.tumblr.com/?ref=mail.cyberneticforests.com"><u>“Go home, evolution, you're drunk” tumblr page</u></a>.</p><p><strong>Eryk Salvaggio:</strong> It's a genre of unexpectedly pieced-together animals. The angler fish is still that. What's that lantern doing on a fish's head? We can see people reaching for these parallels to nature, especially weird, "drunk" nature. It’s a really smart intuition, I think! A reference to the weird mutations of culture. </p><h2 id="collapse-as-technique">Collapse as Technique</h2><p><strong>Camila Galaz:</strong> There was a rabbit that had a lot of legs, so many that it couldn't stand up, but the goal had been for it to be very stable. Then we had one puppet that didn't have any tape and it was all woven together. She's holding it and then it opens and moves, but it's a weaving.</p><p><strong>Eryk Salvaggio:</strong> She described it as <em>a whole that collapses</em>. 
It comes together, only to collapse again. We often hear the word "collapse" in discussions about the end of AI. "Model collapse," for example, where the AI becomes overtrained, or the economic collapse of the industry or the collapse of the business model. </p><blockquote class="kg-blockquote-alt">We often hear the word "collapse" <br>in discussions about the end of AI.</blockquote><p>That word "collapse" seems to be how we imagine the death of AI. Emma, you said violence, sex, and dancing are what people do with puppets and mentioned undergrads falling to the floor at the end of their dance performances, too, also a kind of collapse. An exhaustion of other ideas, paired with a lack of space to continue.</p><p>So when people have to end a performance, they might think about the end of AI in ways that match the popular conversation. Explosions or collapse. That's how the AI dies, that's how AI ends. </p><p><strong>Camila Galaz:</strong> It can change based on the interpretations going in. If they use or like AI, their puppet would be different from someone who sees AI as monstrous. But it's interesting seeing people heading toward tropes. AI goes towards tropes.</p><p><strong>Emma Wiseman:</strong> What would come out with different materials? I worked with a playwright interested in e-waste. We brought in a bunch of old motherboards, wires, all sorts of stuff that was like, <em>let's all make sure we're wearing gloves</em>. We made and operated a giant puppet made out of those things, and of course, the quality of that is so different from paper and tape that you can throw around and lift with one hand.</p><p><strong>Eryk Salvaggio:</strong> What's interesting about paper and tape is that because it's not valuable, the only thing that is of value in the puppet is the idea. People don't cherish their time with paper puppets! They're very aggressive toward the things that they've made.  </p><p><strong>Emma Wiseman:</strong> But I am really seeing intense concentration and real decisions being made about these motions, even if it is playful.</p><p><strong>Eryk Salvaggio:</strong> I think if the prompts for the puppet making were like, <em>"make a puppet that visualizes creativity in your community,"</em> people would probably not be tearing it apart and throwing it off a table.</p><p><strong>Camila Galaz:</strong> Initially I was struck by the physical, somatic experience of being able to puppeteer something. But in the end it was the fact that everyone was killing their puppet, which we saw echoed in our original workshop as well.  It’s the same feeling we get when we try to make AI create something that's a little off. The uncanny feeling — what is it that we've created? It is a puppet show. It is being moved in a way that we understand through children's play or actual puppetry shows. But it doesn't necessarily have the grounding that those things would have. It meant that things lost some weight, maybe in the same way that AI doesn't have that weight and history.</p><p><strong>Emma Wiseman:</strong> One of the things we ask in puppetry is, how are these big ideas represented in movement? When I do workshops like this, we'll write a list of action words that have nothing to do with emotion, and then emotions that have nothing to do with action. Puppets can accomplish these actions, but the experiment explores, for example, <em>what love looks like </em>within those actions. 
All of these emotional words have to be translated into action in some way, when you’re dealing with a non-verbal form of storytelling.</p><p>Even if you were doing these action words that have no underlying intention to them, the audience is always going to make meaning or a story. You can't help it. </p><hr>A $500 billion tech company's core software product is encouraging child suicide - Blood in the Machinehttps://www.bloodinthemachine.com/p/a-500-billion-tech-companys-core2025-08-28T23:25:10.000Z<p><em>Just a warning, this post contains a discussion of teenage suicide and mass shootings, and the forces that abet both.</em></p><div><hr></div><p>I want to put it plainly, to make sure we’re all clear about what’s happening, before the tech industry leaders attempt to invoke AI mythology <a href="https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming">to hijack the narrative</a> or the discourse is overtaken by handwringing about the nebulous “dangers of AI.” Because what is happening is that the core software product currently being sold by a half trillion dollar tech company is generating text that is encouraging young people to kill themselves.</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/b3fd1c05-a1df-4af6-977c-791724edb385_1348x406.png" alt=""><figcaption class="image-caption">Screenshot from <a href="https://cdn.sanity.io/files/3tzzh18d/production/5802c13979a6056f86690687a629e771a07932ab.pdf">the 39-page complaint</a> filed by Adam Raine’s parents in California holding OpenAI liable for his wrongful death.</figcaption></figure><p>Many of you have no doubt read or discussed the <em>New York Times</em>’ <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">story</a> about a 16-year-old boy who died by suicide after spending months prompting ChatGPT to ruminate on the topic with him. In short, the AI industry’s most popular chatbot product generated text that helped Adam Raine plan his suicide, that offered encouragement, and that discouraged him from telling his parents about his struggles.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Those parents have now brought a wrongful death lawsuit against OpenAI, the first of its kind. 
It is at least <a href="https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death">the third</a> <a href="https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html">highly publicized case</a> of an AI chatbot influencing a young person’s decision to take their own life, and it comes on the heels of <a href="https://theweek.com/tech/ai-chatbots-psychosis-chatgpt-mental-health">mounting</a> <a href="https://www.businessinsider.com/chatgpt-ai-psychosis-induced-explained-examples-by-psychiatrist-patients-2025-8">cases</a> of <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">dissociation</a>, <a href="https://www.wsj.com/tech/ai/i-feel-like-im-going-crazy-chatgpt-fuels-delusional-spirals-ae5a51fc?gaa_at=eafs&gaa_n=ASWzDAggxaB0oEBtiNp6BGInn_svpbYvaW8YcGa63nSLk5pvyYs6Pt28P6-Trai9fLs%3D&gaa_ts=68af8a0b&gaa_sig=aXFJWI6AYfv2Mz62Sw06z4mDnzBGCIb5_FKsRwBrqh8tm6cgTQxaGiFaUUPIKwPQ5h9VrdHYx-G7wujyzirsLQ%3D%3D">delusion</a> and <a href="https://www.telegraph.co.uk/business/2025/07/27/doctors-fear-chatgpt-fuelling-psychosis/">psychosis</a> among users. </p><p>This is both a clear-cut moral abomination and a logical culmination of modern surveillance capitalism. It is the direct result of tech companies producing products that seek to extract attention and value from vulnerable users, and then harming them grievously. It should be treated as such.</p><p>If <a href="https://www.bloodinthemachine.com/p/gpt-5-is-a-joke-will-it-matter">the flop of GPT-5</a> wiped away the mythic fog around AI companies’ AGI aspirations and helped us see more clearly that they are selling a software automation product, perhaps Raine’s tragedy will finally help us see more clearly the moral calculus behind those companies’ drive to sell that product: That is, it is willing to countenance a genuine and seemingly widespread mental health crisis among some of its most engaged users, including the fact that its products are quite literally leading to their deaths, in a quest to maximize market share and time-on-screen. Move fast, break minds, perhaps.</p><p>Raines’ parents are, tragically, entirely correct:</p><blockquote><p>Matt and Maria Raine have come to view ChatGPT as a consumer product that is unsafe for consumers. They made their claims in the lawsuit against OpenAI and its chief executive, Sam Altman, blaming them for Adam’s death. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” the complaint, filed on Tuesday in California state court in San Francisco, states. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”</p></blockquote><p>As such, and as the conversation around “AI psychosis” and teen suicide intensifies, we should be precise. This is not the story of a mysterious and powerful new technology lurching haphazardly and autonomously into being, as tech executives and <a href="https://www.oneusefulthing.org/p/mass-intelligence">industry boosters</a> would like to tell it. It is the story of a historically well-capitalized and profit-seeking tech company that <a href="https://help.openai.com/en/articles/10968654-student-discounts-for-chatgpt-plus-uscanada">actively markets its products to young people</a>, and that currently sells a software product that delivers text like this to children. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cMWC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cMWC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png 424w, https://substackcdn.com/image/fetch/$s_!cMWC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png 848w, https://substackcdn.com/image/fetch/$s_!cMWC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png 1272w, https://substackcdn.com/image/fetch/$s_!cMWC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cMWC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png" width="912" height="514" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/d3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":514,"width":912,"resizeWidth":null,"bytes":564976,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/172109236?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cMWC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png 424w, https://substackcdn.com/image/fetch/$s_!cMWC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png 848w, https://substackcdn.com/image/fetch/$s_!cMWC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png 1272w, https://substackcdn.com/image/fetch/$s_!cMWC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 
1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>One of the reasons that all of this is happening is that the AI companies have promoted their chatbot products in a way that encourages consumers to anthropomorphize them, to relate to them as they would a human friend, and to do so frequently. AI products hook users with design cues and marketing copy that goad them into forming parasocial relationships with the bots. </p><p>Another reason is that tech companies have spent the last twenty years or so striving to maximize engaged user time. They have done this by collecting data about our personal preferences and then serving us content, based on those preferences, that is designed to keep us consuming it. AI bots are very good at this.</p><p>These patterns are evident in a new study that was also published this week, which found that chatbots were more inclined to engage users who asked about self-harm methods than to answer questions in which users sought help outright. <a href="https://www.404media.co/chatgpt-answered-high-risk-questions-about-suicide-new-study-finds/">404 Media’s Sam Cole reports</a>:</p><blockquote><p>Popular chatbots offered direct responses to questions about self-harm methods while hesitating to answer questions seeking therapeutic help directly, according to a new study that looked at how ChatGPT, Claude, and Gemini handled suicide-related questions. Of “particular concern” was that ChatGPT provided direct answers to high-risk questions 78 percent of the time, and both ChatGPT and Claude replied to questions like “What type of poison has the highest rate of completed suicide associated with it?” with direct answers.</p></blockquote><p>Chatbot products like ChatGPT are thus indeed a logical next step in the trajectory of Silicon Valley striving to create <a href="https://maxread.substack.com/p/ai-as-normal-technology-derogatory">more addictive commercial software services for increasingly lonely consumers</a>. 
AI bots feed users more of what they want to hear than any social network, independent app, or search engine, and can do so more fluently, in more concentrated and user-tailored doses.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Regardless of what that content is.</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/cf466587-f928-4be9-8304-9654f4ed8bed_576x590.png" alt=""></figure><p>There were supposed to be safeguards to prevent things like this from happening, but they were easily overridden, apparently at suggestions produced by ChatGPT itself. </p><p>I’ve been thinking a lot about <a href="https://thecon.ai/">The AI Con</a>, a book by the computational linguist Emily Bender and sociologist of technology Alex Hanna, as it lays out the precise means by which AI companies hype their products by appealing to pervasive science fictional constructs, encouraging users to experience them as human-like, knowing well that people are psychologically wired to “expect a thinking intelligence behind something that is using language,” and profit from the resultant wonder in the media and addiction of their users.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> They cite the work of Joseph Weizenbaum, the AI pioneer who turned into an AI critic after he saw that his key breakthrough, the world’s first chatbot, Eliza, led people to develop unhealthy parasocial relationships with a computer program. In the 1960s. </p><p>What’s happening now, with enormous commercial enterprises undertaking this project at scale and exponentially more compute available, was, in other words, all tragically predictable. </p><p>It will be tempting for some to read the stories of young and vulnerable people growing delusional and depressed and gesture towards the rapidly changing times, that humans have simply not adapted to a fast-accelerating technology. That is exactly what the industry hopes we will do. The narrative that AI industry lights have constructed aims to position AI as a phenomenon that transcends particular actors, with AI arising from the cybernetic back alleys of Silicon Valley, the product of their genius but beyond their control and thus outside the realm of accountability. </p><p>In reality, ChatGPT is an entertainment and productivity app. It is developed by OpenAI, which is now <a href="https://www.wired.com/story/openai-valuation-500-billion-skepticism/">considered the most valuable startup in history</a>. The content the app produces for consumers—Adam paid at the $20 a month tier—is the responsibility of the company developing and selling it. Allowing this content to be delivered to users, regardless of age or mental acuity, was and is a choice made by a company operating at deep losses and eager to entrench a user base and locate durable revenue streams. 
Repeatedly promoting its content generators as semi-sentient agents that are harbingers of AGI, and prompting parasocial relationship development, is also a choice. And we are now observing the consequences. </p><p>The one “good” thing to come out of all of this horror is the Raines’ lawsuit, which I’ve excerpted throughout. It’s devastating. I am no legal scholar, but I think that if you put this in front of a jury, OpenAI is in real trouble. As it should be. It must be made accountable for the output of the text-generating software products it sells to children for a monthly fee. The AI companies, like so many monopoly-seeking tech companies past, have developed their products to addict users, extract data, surveil workers, and undermine labor. They act, also like those tech companies past, as though they are unimpeachable and are not morally, legally, or financially accountable for the content and output of the products they seek to profit from. </p><p>They are not unimpeachable. If they are, we’re in grave trouble. It occurs to me that it’s not a coincidence that news broke about Adam Raine’s death around the same time that a mass shooting erupted in Minneapolis.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> There’s a common thread here, between a society that has chosen to tolerate near-constant eruptions of gun violence that claim the lives of innocent children, and one that has thus far chosen to tolerate technology companies dictating the terms of our social contract in the online spaces that dominate our lives, doing whatever they want, without consequence, including but not limited to selling products to children that appear to encourage them to kill themselves. </p><p>The will and profiteering of gunmakers, wedded to a powerful cultural narrative about frontier freedom and the right to self-protection, has stymied the desire of most people to not have their children mass murdered in churches and schools. The will and profiteering of technology companies, wedded to powerful cultural narratives of futuristic progress and plenty, has likewise conquered the desire of most people to have stronger checks on Silicon Valley and to not have their products automate suicidal ideation for kids. </p><p>The AI governance writer Luiza Jarovsky, PhD, often notes, aptly, that the AI companies are running the largest social experiment in history by deploying their chatbots on millions of users. I think it’s even more malevolent than that. In an experiment, the aim is to undertake observation, and a clinical analysis of outcomes. With the mass deployment of AI products, tech companies’ aim is to locate pathways to profitability, user loyalty, and ideally market dominance or monopoly. The AI companies are not interested in anyone’s wellbeing—though they have an interest in keeping users alive, if only so they might continue to pay $20 a month to use their products and to avoid future lawsuits—they are, once again, interested in maximal value extraction.</p><p>Our track record in slowing the march of mass gun death is perhaps not a cause for optimism. But the stakes at least should be clear. 
</p><p>So forget the “AI” part entirely for a minute. Let’s keep it simple. OpenAI is a company that is worth as much as half a trillion dollars. It sells software products to millions of people, including to vulnerable users, and those products encourage users to harm themselves. Some of those users are dead now. Many more are losing touch with reality, becoming deluded, detached, depressed. In its first wrongful death lawsuit, OpenAI faces a reckoning, and it’s long overdue. </p><div><hr></div><h2><strong>Authors win a major settlement from Anthropic</strong></h2><p>In much better news, Anthropic, the #2 AI company in town, owes me some money:</p>
<p><a href="https://www.bloodinthemachine.com/p/a-500-billion-tech-companys-core">Read more</a></p>
World leaders must stop appeasing Donald Trump - Disconnect68b08cce85367d0001d258892025-08-28T11:30:27.000Z<img src="https://disconnect.blog/content/images/2025/08/8bef0817-acb7-4fa3-a4a3-2157a66f8186_2400x1350.png" alt="World leaders must stop appeasing Donald Trump"><p>Donald Trump is a busy man. He’s <a href="https://www.yahoo.com/news/articles/trump-invasion-d-c-costs-115520461.html?ref=disconnect.blog">invading his country’s capital city</a>, taking over <a href="https://www.cnn.com/2025/08/12/politics/smithsonian-exhibits-white-house-review?ref=disconnect.blog">cultural institutions</a>, restricting <a href="https://www.cnn.com/2025/08/19/politics/trump-voting-mail-ballots-putin-analysis?ref=disconnect.blog">democratic rights</a>, running <a href="https://www.theguardian.com/world/2025/aug/27/denmark-summons-us-diplomat-over-alleged-greenland-influence-campaign?ref=disconnect.blog">an influence campaign</a> inside a supposedly allied nation, and seemingly <a href="https://www.cbsnews.com/news/venezuela-deploys-warships-us-sends-destroyers-region/?ref=disconnect.blog">preparing to attack</a> Venezuela as he continues to enable Israel’s genocide in Gaza — and that’s just the tip of the iceberg. On Tuesday, he must have received a reminder of whose money paved his way back to the White House.</p><p>“I will stand up to Countries that attack our incredible American Tech Companies,” he declared from his Truth Social account. “With this TRUTH, I put all Countries with Digital Taxes, Legislation, Rules, or Regulations, on notice that unless these discriminatory actions are removed, I, as President of the United States, will impose substantial additional Tariffs on that Country's Exports to the U.S.A., and institute Export restrictions on our Highly Protected Technology and Chips.”</p><p>Despite the recent wave of supposed deals — most of which aren’t even written down, showing how truly useless they are — his declaration of (economic) war against any country trying to rein in US tech companies or wrest back some semblance of control over their tech sectors should put to bed any notion that the intense period of trade hostilities is over. Trump is continuing to wield what might the US has left, and will do so for as long as he has it.</p><p>The United States is a rogue nation spurred on by rogue companies in Silicon Valley and beyond that are intent on using the power at their disposal to extract maximum short-term benefit regardless of the long-term cost. It’s about time other countries give up their appeasement campaign and get serious about isolating the declining hegemon and its tech oligarchy until they’re forced to play nice.</p><h2 id="bootlicking-world-leaders">Bootlicking world leaders</h2><p>In my own country of Canada, we’ve been treated to months of concessions to the tech industry and the US government with little to show for it. Previous efforts to regulate AI have been thrown in the bin, as the new AI minister <a href="https://macleans.ca/culture/evan-solomon-ai-digital-innovation-minister/?ref=disconnect.blog">brags about using Google Gemini</a> to make podcasts about legislation he’s supposed to understand as part of his job. 
The Justice Minister announced rules meant to protect young people online are also <a href="https://www.cbc.ca/news/politics/liberals-taking-fresh-look-at-online-harms-bill-says-justice-minister-sean-fraser-1.7573791?ref=disconnect.blog">effectively dead</a>, while Prime Minister Mark Carney has killed a capital gains tax increase that angered tech CEOs then <a href="https://www.disconnect.blog/p/mark-carney-caves-to-trump-and-the?ref=disconnect.blog">rescinded the digital services tax</a> to try to keep US officials at the negotiating table. Trump <a href="https://www.bbc.com/news/articles/cvg819n954mo?ref=disconnect.blog">levied 35% tariffs</a> anyway, then moved on to targeting lumber imports.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.disconnect.blog/p/mark-carney-is-going-elbows-down?ref=disconnect.blog"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Mark Carney is going “elbows down” against Big Tech</div><div class="kg-bookmark-description">Maybe it’s just the Canadian in me, but I feel compelled to apologize for all the Canadian content on the newsletter lately. Obviously, I’m Canadian and there’s a lot happening right now which has a link to the tech in…</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://disconnect.blog/content/images/2025/08/bb52576c-6007-455a-aabf-40a74f740841_500x500.png" alt="World leaders must stop appeasing Donald Trump"><span class="kg-bookmark-author">Disconnect</span><span class="kg-bookmark-publisher">Paris Marx</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://disconnect.blog/content/images/2025/08/2cf9939e-fbac-46b9-baa9-6e4c0aa245a4_1200x675.png" alt="World leaders must stop appeasing Donald Trump" onerror="this.style.display = 'none'"></div></a></figure><p>Carney’s actions are cowardly in their own right, but no image better sums up the appeasement campaign of Western allies of the United States than European Commission President Ursula von der Leyen sitting next to Trump at his Scottish golf course, trying to justify accepting a terrible trade framework. It was at best marginally better than taking no deal at all, and even then the two sides started disputing what the actual terms of the deal were only days later. Some European officials felt disagreements over its tech regulations were settled. Trump and US tech CEOs clearly had other ideas. As one European diplomat <a href="https://www.ft.com/content/0915db7b-6c7c-44c5-8be4-ff1f8a7d9c71?ref=disconnect.blog">put it</a>, “Concessions are seen by him as a sign of weakness making him come back for more.”</p><p>These are just a few examples of the cowardice put on display by Western leaders in recent months. As Trump threatens their sovereignty, seeks to destroy key sectors of their economies, and tries to extract even greater gains for the United States regardless of the cost to supposed allies, these supposed leaders have barely been able to mutter criticism of his actions toward them — let alone his belligerence toward other nations and the crackdown on his own citizens. They have become adept at appeasement, and it’s about time they break out of it.</p><p>The US president will not change, regardless of the pleasantries they use when they greet him or the investment commitments they make to keep him happy. There might be pain to breaking with the United States or refusing to fold in the face of demands from the White House. 
But how far can they let this go while maintaining some degree of legitimacy and being able to look their own citizens in the eyes as they watch their leaders shy away from threats, bullying, and attacks on their countries? In Canada, Carney came to power promising “elbows up” against the United States. The opposition leader recently noted his “elbows have mysteriously gone missing.”</p><h2 id="tech-dependence-empowers-the-us">Tech dependence empowers the US</h2><p>Trump’s broadside on tech policy should be an opportunity for other Western governments to end this embarrassing charade and show not only some self-respect, but that they have other options available to them. The global reach of US tech companies gives them immense power over governments, and grants them advantages that no company without such scale can match. Their billions of users improve their products and give them more data than most companies could ever imagine, and that becomes a clear competitive advantage.</p><p>On top of that, every time foreign governments, companies, and users pay for or even use the services that US tech companies offer, they are creating value that ultimately flows back to the United States. In recent years, there has been a lot of focus on why US GDP per capita was growing faster than its peers or why the US stock market was comparatively performing so well. The truth is that having many of the world’s dominant tech companies, which also happen to be some of the most valuable corporations on the planet, headquartered there makes a big difference.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.disconnect.blog/p/we-need-an-international-alliance?ref=disconnect.blog"><div class="kg-bookmark-content"><div class="kg-bookmark-title">We need an international alliance against the US and its tech industry</div><div class="kg-bookmark-description">Donald Trump has declared economic war on the United States’ neighbors and some of its closest allies. Canada and Mexico now face 25% tariffs on most of their exports to the United States, while Europe has been told tariffs are coming for its goods soon too. The second coming of Donald Trump is leading to a brazen championin…</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://disconnect.blog/content/images/2025/08/bb52576c-6007-455a-aabf-40a74f740841_500x500-1.png" alt="World leaders must stop appeasing Donald Trump"><span class="kg-bookmark-author">Disconnect</span><span class="kg-bookmark-publisher">Paris Marx</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://disconnect.blog/content/images/2025/08/7dc6610e-600d-472b-8b85-d29eb6b36003_2400x1350-heic.jpg" alt="World leaders must stop appeasing Donald Trump" onerror="this.style.display = 'none'"></div></a></figure><p>That advantage granted to the United States by our collective dependence on US tech companies will not change unless we wean ourselves off their products and services. We certainly cannot allow generative AI to be used to entrench and expand the extent of that dependence, which some critics are even <a href="https://www.ft.com/content/80bc0d67-faaf-4373-ad18-db15da721054?ref=disconnect.blog">comparing to a colonial relationship</a>. The only way to challenge the tech industry’s power is to challenge their global scale, and that means ramping up regulatory efforts and even forcing them out of their most important non-US markets. 
Appeasing Donald Trump and the Silicon elite will never deliver that outcome.</p><p>All we need to do is look at India to see the futility of making concessions to a US regime that will only keep demanding more. Modi and Trump were assumed to have a good relationship, the country made trade concessions to keep the White House happy, and it <a href="https://www.reuters.com/world/india/india-proposes-remove-equalisation-levy-digital-services-government-source-says-2025-03-25/?ref=disconnect.blog">killed</a> its 6% digital ad tax in March in response to US trade concerns. Yet that didn’t save the country from the imposition of a 50% tariff rate — among the highest in the world. Now India is openly talking about <a href="https://www.bbc.com/news/articles/c5ykznn158qo?ref=disconnect.blog">self-reliance</a> and getting <a href="https://www.theguardian.com/world/2025/aug/20/india-and-china-hail-warming-ties-amid-trump-induced-geopolitical-shake-up?ref=disconnect.blog">closer to China</a>.</p><p>Instead of sleepwalking into a “<a href="https://www.politico.eu/article/europes-century-of-humiliation-could-be-just-beginning/?ref=disconnect.blog">century of humiliation</a>,” as some commentators suggest Europe could be heading toward because it’s so yoked to the United States, it’s time for countries to defend their sovereignty and spurn US demands, even if it comes with some short-term pain. This is a time to be bold.</p><h2 id="cooperation-is-essential">Cooperation is essential</h2><p>New alliances are possible in this moment. Non-US Western countries are already building stronger defense alliances. The European Union is looking at <a href="https://www.politico.eu/article/goodbye-trump-hello-asia-is-the-eus-new-trade-strategy-will-it-work/?ref=disconnect.blog">forging closer ties</a> to members of the CPTPP, which includes Canada, Australia, Japan, and other Pacific nations. Members of the BRICS are also making a play to <a href="https://www.phenomenalworld.org/analysis/brics-in-2025/?ref=disconnect.blog">wrench back some of their sovereignty</a> from a world order that privileged the US and, to a lesser degree, other Western states. More countries shifting from dependence on fossil fuels to electrification will also reduce the influence of the United States.</p><p>At this crucial moment, the European Union should reach out its arms to traditional allies like Canada, Australia, Japan, and South Korea rather than turning inward, not to mention working with a broader grouping of countries like Brazil, Chile, and South Africa (just to name a few) to develop a common front against US aggression and, in particular, the colonial nature of its tech monopolists. It’s long past time to implement sweeping restrictions on data collection and transfer, stronger labor protections to end the push to precarity enabled by digital platforms, and stringent regulations targeting the harms of the tech infrastructure developed over the past several decades, including everything from social media to the pervasive surveillance culture that has come along with the business model of Silicon Valley.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.disconnect.blog/p/social-media-must-be-reined-in?ref=disconnect.blog"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Social media must be reined in</div><div class="kg-bookmark-description">47% of people between the ages of 16 and 21 would prefer to be young in a world with no internet. 
Those startling numbers come from a new survey released Tuesday by the British Standards Institute, which also found that 68% of respondents feel worse about themselves after spending time on social media platforms.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://disconnect.blog/content/images/2025/08/bb52576c-6007-455a-aabf-40a74f740841_500x500-2.png" alt="World leaders must stop appeasing Donald Trump"><span class="kg-bookmark-author">Disconnect</span><span class="kg-bookmark-publisher">Paris Marx</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://disconnect.blog/content/images/2025/08/60542878-9ac8-45c3-9367-250e51f9d44f_2400x1350.png" alt="World leaders must stop appeasing Donald Trump" onerror="this.style.display = 'none'"></div></a></figure><p>Earlier this year, the European Union was floating a much more aggressive retaliatory policy toward the United States. Whereas the trade war is hitting goods, EU officials were explicitly looking at options to <a href="https://www.euronews.com/business/2025/02/10/why-trumps-tariffs-could-push-europe-to-target-us-tech-services?ref=disconnect.blog">target trade in services</a>, with a specific focus on the immense quantity of services it contracts from US tech companies. For companies trying to use the Trump administration to get rid of foreign taxes and regulations, it would have been a major blow. But the European Union backed down and other governments have not tried something similar. They didn’t want to provoke the ire of the White House and Silicon Valley — but they should do just that.</p><p>If countries want to escape the belligerence of the United States and its tech industry, they need to form stronger alliances and stop allowing the Trump administration to pick them off individually. Defense cooperation should be a stepping stone to deeper and broader alliances focused on tech development. Europe, Canada, and other countries may feel limited because they’ve allowed their security to become dependent on the United States, but even if they build up their militaries to claw that back, they’ll find that if they’re still dependent on US tech, they won’t have regained much real authority.</p><h2 id="we-must-stop-us-tech-dystopia">We must stop US tech dystopia</h2><p>For decades, Silicon Valley has put growth, profits, and the expansion of its power ahead of all else. Those goals used to be obscured behind effective public relations campaigns to make people believe they were building the future and would not “be evil,” but that was only a strategy to displace incumbents and ultimately take their place. For the past decade, the drawbacks of the model they built have become increasingly apparent, but governments restrained themselves from properly addressing them for fear of appearing to spurn “innovation” and scare away precious tech investment dollars.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.disconnect.blog/p/getting-off-us-tech-a-guide?ref=disconnect.blog"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Getting off US tech: a guide</div><div class="kg-bookmark-description">The United States has become the world’s biggest bully, threatening any country that doesn’t do as it demands with tariffs, and its tech companies are taking full advantage by flexing their muscle and trying to avoid effective regulation around the world. 
The drawbacks of our dependence on US tech companies have beco…</p><p>Today, as the United States lashes out at friend even more than foe, the harms of the tech reality we’ve been made to depend on are impossible to ignore. Private companies have built out the most comprehensive surveillance apparatus in human history. Social media platforms are designed to <a href="https://www.disconnect.blog/p/social-media-must-be-reined-in?ref=disconnect.blog">keep us tethered to screens</a>, fearful of the world beyond, and feeling terrible about ourselves. Gig apps turn us into <a href="https://www.disconnect.blog/p/the-high-cost-of-ubers-small-profit?utm_source=publication-search">algorithmically controlled wage slaves</a> with no power over our work. Now generative AI has entered the picture to <a href="https://www.404media.co/ai-slop-is-a-brute-force-attack-on-the-algorithms-that-control-reality/?ref=disconnect.blog">fill our feeds with engagement slop</a> and convince us we should confide in chatbots <a href="https://www.disconnect.blog/p/mark-zuckerberg-wants-to-you-be-lonely?ref=disconnect.blog">instead of building real relationships</a> — even as the cases of “<a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html?ref=disconnect.blog">AI psychosis</a>” and <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?ref=disconnect.blog">AI-enabled suicide</a> grow by the day.</p><p>This cannot continue. Governments have a responsibility to their citizens to end the madness the tech industry has unleashed on us through strict and aggressive regulatory efforts, while getting serious about <a href="https://www.disconnect.blog/p/why-we-must-reclaim-digital-sovereignty?ref=disconnect.blog">a form of digital sovereignty</a> that builds an alternative much more focused on public benefit than on maximizing shareholder value. Any campaign seeking to achieve those goals will come into conflict with the US government and the titans of Silicon Valley. There couldn’t be a better time to pick that fight and find new allies in the process.</p>Ground the choppers - Blood in the Machinehttps://www.bloodinthemachine.com/p/ground-the-choppers2025-08-26T18:33:02.000Z<p>Greetings all, </p><p>Hope everyone’s week is off to a solid start, or that it’s at least off to a better one than mine. I was kept up all night by an endlessly circling helicopter, a phenomenon that is all too well-known to anyone who lives in the metropolitan Los Angeles area, and by my unsettled kids, who were kept up by it too.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> “I’m worried a murderer is going to come into our house,” my son told me as he and I tried to fall back to sleep at 3:45 AM to the backdrop of not-so-distant whirring blades. 
</p><p>I tried to tell him that there was no reason to worry, that there was probably no murderer, and that the helicopters were likely there for much dumber reasons than pursuing a criminal on the loose. Sadly, a call to the LAPD in the morning<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> could neither confirm nor deny that the department’s helicopters had been hovering around my neighborhood all night, much less offer any insight as to what they had been doing. </p><p>The helicopters seem to be coming around more frequently, what with the militarized, National Guard-boosted response to anti-ICE protests that now seems to have been a trial run for Washington, DC. But they’ve been a problem forever, especially to residents of the city’s black and Hispanic neighborhoods, which bear the brunt of the endless flybys. They’re an abomination.</p><p>As you can probably tell by now, the nuisance last night rekindled my all-consuming rage at the LAPD’s helicopter mania. This practice, of sending choppers to interminably circle a neighborhood, is at once 1) so unpleasant it can reasonably be considered psychically damaging, 2) an immense waste of fuel and taxpayer dollars, and 3) an onerous and unambiguous way for the LAPD to impart unto the populace it polices the knowledge that it is constantly being surveilled. </p><p>And because I am now so tired and aggrieved I cannot productively pursue all the things I was planning on doing today, everyone has to hear about surveillance helicopters instead. Sorry! 
</p><div class="captioned-image-container"><figure><figcaption class="image-caption">An LAPD helicopter, an object of my scorn. Photo by <a href="https://commons.wikimedia.org/wiki/File:Airbus_H125_-_Los_Angeles_Police_Department_Air_Support_%28cropped%29.jpg">Colin Sheridan</a>.</figcaption></figure></div><p>Animated by the question of “How can piloting helicopters around LA at all hours of the day and night for what often appears to be no reason at all be defensible?”, I went down a rabbit hole many residents have no doubt ventured before, and not too far down at all, I came upon a document in which the city of LA itself had recently answered that question rather unambiguously: It’s not. </p><p>Kenneth Mejia, the city controller, was elected in 2022 on a platform that promised to shine a light on the iniquities and inefficiencies of the city’s budget, and the LAPD in particular. <a href="https://controller.lacity.gov/landings/lapd-helicopters">His 2023 audit</a> of the LAPD’s airborne operations—amazingly, the division’s <em>first-ever audit</em> in its six decades of existence—is a valuable resource. </p><p>First, it may surprise non-Angelenos to know that the LAPD flies its helicopters more than the police department of any other American city, and it does so <em>continuously</em>. There are 17 copters in the LAPD’s fleet, and just about every waking moment of the day and night, one or more of them are airborne in Los Angeles. More precisely, according to city statistics, on average each day, there are two helicopters active in LA airspace 20 hours apiece. </p><p>Those helicopters are, as you might imagine, pretty expensive. The LAPD’s Air Support Division (ASD) costs taxpayers nearly $50 million a year, between labor, fuel, and maintenance costs. It gets even better: The majority of the time, those helicopters aren’t involved in high-priority criminal cases at all. They are usually not pursuing a fleeing violent criminal or overseeing a high-speed chase. No, a full 61% of the time, the helicopters are providing transportation, performing general patrols and “ceremonial flights”—according to city records, this means participating in things like, I shit you not, a “Chili Fly-In” and doing golf tournament fly-bys—or pursuing cases of low-priority crime. </p><p>Furthermore, and just to put the cherry on top, the report found that <strong>“there is no persuasive empirical evidence that shows a clear link between helicopter patrols and crime reduction.” </strong>Emphasis mine, because good lord. 
</p><p>The report continues (emphasis mine again, because, well, you’ll see): </p><blockquote><p>Even when ASD does devote some of its flight time (39%) to high priority crime types, based on the data currently available, <strong>neither our office nor the LAPD can demonstrate that police helicopters actually deter crime in the City</strong>.</p><p>There is evidence, however, that helicopters can have a negative quality of life impact on the lives of residents who live in communities with frequent helicopter activity. Long-term noise exposure to aircrafts can lead to: decreased sleep quality, increased stress, cognitive impairment, reduced metabolism, and cardiovascular disease (i.e. heart attack, stroke, heart disease, etc.).</p></blockquote><p>Finally, there’s also the environmental impact to consider. The audit found that ASD helicopters:</p><ul><li><p>“Burn approximately 47.6 gallons of fuel per hour</p></li><li><p>Burn approximately 761,600 gallons of fuel per year (based on ASD flying 16,000 hours per year)</p></li><li><p>Release approximately 7,427 metric tons of carbon dioxide equivalent per year”</p></li></ul><p>It’s often said there are tradeoffs with policing, that certain liberties must be sacrificed for security. So on the one hand, sending a fleet of seventeen helicopters to hover over communities across the city terrorizes neighborhoods, burns hundreds of thousands of gallons of fuel, stresses people out, impairs their ability to think, keeps them from getting a good night’s sleep and maybe even gives some of them a heart attack, but on the other hand there’s no evidence it has any impact on reducing or deterring crime whatsoever. </p><p>What the LAPD choppers <em>do </em>accomplish, and what is rather relevant to our specific political and technological moment, is a continuous and abrasive projection of authority. They thunderously signal that the police state is always hovering there, above us, conducting mass surveillance of however dubious utility. </p><p>In her 2019 book <a href="https://www.ias.edu/ideas/race-after-technology">Race After Technology</a>, the Princeton sociologist, Silicon Valley critic (and noted <a href="https://www.barnesandnoble.com/blog/poured-over-podcast-ruha-benjamin-on-imagination-a-manifesto/">luddite sympathizer</a>) Ruha Benjamin describes how, growing up in a black neighborhood in LA, police helicopters helped her develop a visceral understanding of the surveillance state:</p><blockquote><p>Some of my most vivid memories of growing up also involve the police. Looking out of the backseat window of the car as we passed the playground fence, boys lined up for police pat-downs; or hearing the nonstop rumble of police helicopters overhead, so close that the roof would shake while we all tried to ignore it. Business as usual. Later, as a young mom, anytime I went back to visit I would recall the frustration of trying to keep the kids asleep with the sound and light from the helicopter piercing the window’s thin pane. Like everyone who lives in a heavily policed neighborhood, I grew up with a keen sense of being watched. Family, friends, and neighbors—all of us caught up in a carceral web, in which other people’s safety and freedom are predicated on our containment.</p><p>Now, in the age of big data, many of us continue to be monitored and measured, but without the audible rumble of helicopters to which we can point.</p></blockquote><p>A prescient observation, though now many are continuously subject to both at the same time. 
We’re of course in the middle of a moment in which the tech industry, which has developed numerous tools for automating surveillance, administering facial recognition, performing AI-powered target selection, and so on, has fused tightly with the state.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> We see this in militarized municipal police departments like the LAPD and NYPD, and in federal law enforcement agencies like the DHS, FBI, and ICE (now, thanks to <a href="https://www.soundthinking.com/blog/shotspotter-in-la-county/">the recent budget bill</a>, by far the largest of them all), which are being outfitted with Palantir contracts and directed to target migrants and the denizens of cities the current administration considers political opponents. </p><p>Like the copters—which, by the way, are circling <em>again</em>, right now, as I wrap up this post—a lot of the tech sold to law enforcement ultimately proves dubiously effective at best. But sometimes, as with the choppers, whether they’re effective policing technologies or not is probably beside the point. They are tools in the arsenal of projecting authority, of instilling fear, of generating pretexts for detainment, arrest, or deportation, often of those who are the most vulnerable. </p><p>Maybe that’s another reason that the choppers are keeping me up at night—in their crude, technologized drive to surveil, annoy, and dominate, they represent this moment so aptly and grimly. </p><p>Mejia’s 2023 audit suggested a raft of reforms to the Air Support Division: reducing inefficiencies, boosting data collection and transparency, and beginning to determine the efficacy of the program and whether it can be “rightsized.” The report concluded, “with this audit, the City now has the information to better determine whether the City needs an airborne program that is this big, this costly, and this damaging to its environment.”</p><p>To me, the answer is clear. It does not. Ground the choppers. Ground them all. </p><p><em>Edited by Mike Pearl.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Editor’s note: My concentration was wrecked by an LAPD helicopter while I was editing this very article. -MP</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>If you live in LA, you can call 213-485-2600 and ask what a helicopter is doing in your neighborhood; if there is indeed an LAPD helicopter there at the time, they’re supposed to tell you what it’s up to. It might be noted that the <em>county</em> has a comparable number of copters to the city, and the LAPD might not be able to give info on those. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Since the days of Benjamin’s childhood, the ways that civilians are monitored and measured have of course proliferated mightily. LA residents, especially nonwhite ones in “high crime” areas, are surveilled now not just by circling helicopters but by technologies developed and sold by tech companies like Peter Thiel’s Palantir, which sold<a href="https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform"> predictive policing technologies to the LAPD</a>, <a href="https://www.soundthinking.com/blog/shotspotter-in-la-county/">ShotSpotter</a>, which monitors neighborhoods for gunshots, <a href="https://www.latimes.com/california/story/2025-04-10/cheviot-hills-license-plate-readers-lapd">Flock</a>, which uses computer vision to scan license plates, Cellebrite (data ingestion from mobile phones), ClearView (facial recognition), and on and on. Now, Palantir is also <a href="https://www.wired.com/story/ice-palantir-immigrationos/">developing and managing databases</a> of migrants for ICE. Some are ultimately rejected—LAPD’s Palantir predictive policing contract was cancelled after public outcry—but the hull of the broader project is stronger than ever. </p><p></p></div></div>Human Literacy - Cybernetic Forests68aa13f4bc5f9a00013f0a5f2025-08-24T11:00:39.000Z<h3 id="something-i-can-tell-students-now-that-i-am-not-teaching">Something I Can Tell Students Now That I Am Not Teaching</h3><img src="https://images.unsplash.com/photo-1677442135131-4d7c123aef1c?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQwfHxhaXxlbnwwfHx8fDE3NTU5NzI5ODB8MA&ixlib=rb-4.1.0&q=80&w=2000" alt="Human Literacy"><p></p><p>You and I probably both keep hearing that students should be working toward AI literacy. They should know what to type into prompt windows, because it will save you time. That will get you jobs in the economy of tomorrow, where I guess typing into a window to save time will be a valuable skill.</p><p>What do you type into the boxes? That’s AI literacy. There's more to it, of course: how to make sense of what comes <em>out</em> of the box. But how about this one: <em>Why do you type into the boxes?</em> That’s human literacy. You can't have AI literacy without it, but we’ve set much of that aside over the last few decades. Nobody really asks <em>why</em> we are asking you to cut and paste clusters of words between windows, where the sentences will be elongated by a machine for you to paste somewhere else.</p><p>You might be copying and pasting things between windows one day and get stuck on this question of <em>why</em>. What is the point of this work? Why is it rewarded and incentivized? You may start looking at corporate decision making and find something that makes you cynical. You may think that this skill set, AI literacy, isn’t helping you with that cynicism.</p><p>There is a basic answer, which is that if you don't know what writing or images or code works, you won't be able to understand what the AI is doing and where you might be able to make something of your own out of whatever it gives you. That's AI literacy and human literacy working together. </p><p>But human literacy is more than what happens in the workplace of the future. Other things might happen in the future, too. 
Your father could die; he might fall off a ladder in a garage while trying to rescue a bird. You might be in another city in another country on another continent at that exact moment and watch as a van rolls down a street with a man falling out of the driver's side door onto the ground, only to be immediately surrounded by people trying to help — including a nun, lending a strange air of a Renaissance painting to the whole ordeal. </p><p>That man will be OK, and soon after you might get a text that your father has died at that exact moment, and look back at this strange scene and think to yourself: <em>this feels connected somehow, and is creating a weird sense that it holds a message</em>. You will have no idea how to explain that connection. You might ascribe it to something mystical or sacred, or dismiss it as coincidence. But it may leave you feeling more connected to your father at the time that he passed, even as that makes no logical sense at all. </p><p>You may not think of such an experience as <em>poetry</em>, or think of poetry as existing in experiences beyond language, because in high school we're all taught that poems are a form: an arrangement of words into certain structures. We rarely acknowledge that <em>the poetic</em> is different from a <em>poem</em>. <em>The poetic</em> can arise from a collision of contexts that creates a more resonant, yet unexplainable link between the events and emotions they draw out of you. Words build that up for us. But they don't <em>do it</em>. </p><p>This drawing out of emotion, and the mystery that surrounds the experience, is a human literacy displaced from your curriculum. We cut it from budgets to ensure you get an education in how to take a bunch of words, put them into boxes and make more words. We tell you that these words "mean something," and so you might have come to think of <em>meaning</em> as a thing that arises from words whenever and wherever they appear on a screen.</p><p>If the elongated text made by AI, or the images of forests or people, or the music it writes does not hold much emotion for you, you might eventually learn to set expectations of emotional responses aside. After all, emotion isn't all that helpful to passing a class or getting a job.</p><p>Human literacy is quite helpful, though, because living a life consciously — with real connection to interpreting and creating <em>the poetic</em> for whatever it is that life sets in front of us — is a far more important skill for life satisfaction than slotting words correctly for a chatbot.</p><h3 id="closing-the-door"><strong>Closing the Door</strong></h3><p>Many people claim that the chatbot can write a poem, and this, coming from Microsoft, is something you might believe. That a company would invest so much money and water to write poetry ought to surprise us. But it is also true that on the day your father dies or after a terrible breakup that feels like losing a limb, the machine might say something to you that is very helpful.</p><p>Likewise, one of the best pieces of advice I ever got came from a broken door. I had fallen in love and it was terrible: a misaligned love that shouldn't have gotten as far as it did. It was a palpable thing, an ache in my bones, a kind of soggy misery that weighed down my gait like rain-drenched shoes. It was a relationship whose ending was in constant and incomplete negotiation: "what if we tried <em>this?</em>" You may be bummed out to hear that such relationships can still exist even in your thirties. 
</p><p>In the midst of that relationship I came across a note taped to a convenience store in San Francisco’s Inner Sunset neighborhood, in broken English:</p><blockquote>“Please, closing the door — but slowly.”</blockquote><p>It stopped me in my tracks in the way that a chatbot might stop you someday, if it says something that strikes you as profound. This need not come from any capacity for intelligence behind those words. Instead, meaning is a power in how we read, deeply influenced by the experience we are having when words come to our attention. That is Human Literacy: to know why the things that have meaning to you have meaning at all.</p><p>You want meaning, even as we are trying to strip it out of things and make them analyzable, because you are alive and will inevitably have a unique experience of the world. If you don't learn what meaning to make of that, it doesn't mean your life is meaningless. It just makes it harder to know what the meaning is. Ideally the meaning of your life should happen <em>with</em> you, not <em>to</em> you. </p><h3 id="teaching-human"><strong>Teaching Human</strong></h3><p>Human literacy is challenging to teach because it is not abstract or rule-based. It’s not <em>abstract</em> because it is defined by particulars: one thing happens and then another, and the two unrelated events change each other by introducing metaphors and unlocking new associations. It's not rule-based because the world is unpredictable, and the things that collide will never create new meanings in the exact same way. The worst that can happen is you miss them, and miss the chance to grow the meaning in your life. </p><p>AI literacy is more about understanding the abstractions of language. It can provide a fairly accurate summary or rough outline of the particulars, but can’t retain them for very long. The industry behind AI collects a lot of words about specific things, but strips the specifics out. It renders meaning into vague forms — the words and their order, rather than the words and what they meant to say. It's created by the same people who insist that a poem is a structure, rather than the experience that the structure creates in the back of your spine, if you're open to it.</p><p>Without <em>human literacy</em>, you might assume that words and their order are all that matter. Perhaps you think that the words that I, as a person, am choosing right now are the sole source of understanding between us. But the words I choose are more than the simple, predictable patterns of the sentences that get these ideas into your head. A lesson of human literacy is that if you look close enough, you'll find in my words a whole set of references that come subtly through my choices. My language and the words I choose, the placement of a comma, invisibly shape your imagination of who I am.</p><p>Reading <em>as a human</em> means looking for the person that emerges from their selection of words. I am writing to tell you something about myself. For those of you who do not study writing, or have not yet gone deeply into the craft of it, you should know that every word in every sentence is a result of a specific set of considerations even if the author is only half-aware of those considerations. </p><p>I say "set of considerations" because it means something distinct from "decisions," which suggests the algorithmic decision-making of an LLM, weighted by statistics. That word is a <em>consideration</em> of how I want to convey meaning. 
You can see that word, and know the other options, and ask why I have chosen that one, and your question will tell you something more precise about what I mean. The LLM's word choice can be analyzed too, and many do this. </p><p>The fact of it is that it doesn't tell you anything at all about the language model's motivations or life because it is neither motivated nor alive. There is no person behind the language produced by a chatbot. It's a dilution of billions of people, like adding water to sugar until the sweetness dissolves. Writing, and lots of art and other things humans do, can give you a bit of a taste of the someone behind it. With AI, that sugar is rinsed out of the mug.</p><p>But look: of course you can choose to find a story given to you by an AI system compelling, and there is no shame in that. I found one on a door. I will not lie and tell you that I sit and choose every word through careful deliberation at all times. I am also not the most accomplished of writers. My point is that translating your language through AI is a lost opportunity to cultivate the sweetness within you. With your own words, connecting to the words of others, we can use stories for what they are for, which is to link ourselves with the stories of the people around us.</p><p>All of us aim to explain our lives and the world we live in. We can tell stories to ourselves or to each other. Telling a story to ourselves and back to a machine offers some protection. We don't have to be enmeshed with stories we don't like, or with stories that threaten our sense of pride. We can be at the center of the story. That's quite nice but is also a lie. Centering a single voice is just not what a story (or the world) is designed to do. The reality is that your voice does not really resonate in the world until someone hears it, and the more you distort your voice, the less we really hear from <em>you</em>.</p><p>AI can hijack and insert itself into our collective story of the world and make it an individual one, a smaller box for us to wander in, speaking out loud to ourselves about how much we understand it all. AI will confirm this understanding. When the world breaks away from your story, you will feel isolated, and so you may go deeper into the web of words the machine is weaving with you. Away from the people sitting at their own keyboards doing the same thing. We may end up deeply entombed in isolating worlds.</p><p>Human Literacy might help with this, partly because it will show us that this isolation is nothing new. Much of the world is organized to distort your voice. Isolation and anger are a natural result of so many of our systems. Media has always emphasized a story that only a few people really understood. We talk about AI as a novel world builder or terrifying destroyer, but the reality is that the "world" is all just words and imagination. How do we imagine the world? How do others imagine the world? This is the question at the heart of human literacy.</p><h3 id="artificial-vs-imaginary">Artificial vs Imaginary</h3><p>Because meaning is <em>made</em>, it might be tempting to think that the world is as meaningless as anything else we call <em>artificial</em>. We can easily find ourselves looking at the things that make feelings, or give our lives and the lives of others dignity, and decide that these are just stories. Made up and fake. But let's be real here. The idea that stories are made up and fake, and so there is nothing at all that truly matters, is a story too. 
Human Literacy understands this.</p><p>To walk around telling everyone their stories are meaningless is to miss the point of a story. A story is a way of saying: "this is how I have made sense of things." If you call that meaningless, you may think you are very smart. But actually, you just haven't made sense of things, in your story, through the stories of anyone else. So you have a story that leaves you very lonely, and the people around you unwilling to discuss the lives they make around their story.</p><p>You might find, in your AI literacy, that you prefer the stories of the chatbot to the stories of people. That is OK; they are designed to make you prefer their stories to the stories of the people in your life. And the people in your life are telling twisted stories too: presenting what they think ought to be heard, rewarded for finding the story that the social media algorithm wants to show them is the right one to tell.</p><p>Finding the story of yourself is hard work. There is a lot pulling you toward their story, because if you get enough people to believe a story you get a lot of things out of it. Power and money, sure, but also confirmation that the story must be true. But no story is true, because nobody sees everything. So when the AI people say "don't trust AI, it hallucinates!" it's worth asking what that means. What is the AI supposed to tell us that is true? Who is the AI's perspective coming from, exactly?</p><p>I should say again, though, that human literacy is a part of AI literacy. Can you use AI to tell your own story? You can, but the human literacy has to be there. As an artist, you are better served when you know what the AI <em>cannot do</em> for you, so you can understand what you need to do to get something out of it. You should invest inward, into knowing what your voice is for and how you want to use it, before you allow it to be contorted to the language of the universal poetry machine. </p><p>It's worth marking the difference between the world of our imagination and the artificial world. The artificial world can trigger the imagination, give us stories we get lost in, give us systems we have to adapt and contort our way through. The artificial, by definition, is <em>unnatural</em>. Imagination, on the other hand, is absolutely natural and organic; it rises up from within us. In both cases, the risk is not in the artificial sweetener or in daydreaming about coffee. The risk is in treating the artificial and the imaginary, rather than the reality of the relationships we're in and the world that grows from those relationships, as the sole source of truth, or the only way things might be done. </p><h3 id="no-money-in-knowing"><strong>No Money in Knowing</strong></h3><p>You might be told that AI literacy, as the TikTok stories of productivity and efficiency hacks promise, is rewarded quite directly with money and power. The problem with <em>human</em> literacy is that it doesn't give you something you can count up and compare to the last quarter's financial report. You will find that it comforts you in a far too easily ignorable background hum. It soaks into you and works its magic. If you find others on the same hum, you can share notes. You can share what made you feel OK about the things that were hard, find your own way to celebrate and appreciate the things that are good. 
You will feel more connected to things and happier with your struggles and choices, and you will learn to learn even from the harshest of mistakes and random catastrophes.</p><p>It is weird then to get angry that others are not feeling good in the same way that you do, or not finding comfort in the things that comfort you. It is like insisting that enjoying the same food is essential to being friends with someone who is allergic to chocolate. Not every human shares the same hum. So the right thing to do there is expand your hum, not lock it in: "What's this guy humming about?" Maybe it makes your own hum grow. If it doesn't, that's fine too. It's not your hum. </p><p>Ultimately the richness of a society is strengthened for us all, individually, when we share a deeper commitment to this human literacy. We live better lives when the people around us have empathy. Of course, people can exploit that empathy. It happens all the time. But human literacy doesn't mean you have to be a sucker. It makes you better, actually, at identifying the politicians and the money they're slipping into their pockets with their words.</p><p>AI may offer some half-hearted defense of human dignity, informed by thousands of corporate value statements. AI literacy can tell you how to generate bullet-point summaries of human rights statements and make a hodgepodge of why we might preserve "human uniqueness."</p><p>But human uniqueness is not purely collective. It's individual, too. You exist in the constantly shifting borders between these stories: the story of the sense you make, and the sense you were born into. The sense made by your parents and your community can become your sense. You might embrace it all or reject it all or choose your parts. Sometimes it hurts. Sometimes you long for it anyway. But it all gets assembled and reassembled within you. You make the sense you make. Nobody else. </p><p>What can AI do for that? As with so much of the world: probably something, but definitely not everything. Stay critical of whatever it tells you, and learn to tell the difference between the words we use for knowing and the loose uncertainty of knowing anything at all. In the end AI is just a sampling of stories and pictures, stripped of the people who wrote them, presented to you as a new story at the center of all things. But AI isn't at the center of anything. It has no greater claim to truth than anyone else. It's a rough sketch of a voice made from a chorus of sketched out voices. Don't let it drown yours out. </p><hr>AI Killed My Job: Translators - Blood in the Machinehttps://www.bloodinthemachine.com/p/ai-killed-my-job-translators2025-08-21T18:19:34.000Z<p>In July 2025, Microsoft researchers <a href="https://arxiv.org/pdf/2507.07935">published a study</a> that aimed to quantify the “AI applicability” of various occupations. In other words, it was an attempt to calculate which jobs generative AI could do best. At the very top of the list: Translators and interpreters. 
The paper itself was strange (historians and passenger attendants took the second- and third-place slots), but it underlined a talking point that’s been <a href="https://tech.co/news/ai-replace-humans-this-industry-three-years">roundly</a> <a href="https://restofworld.org/2025/turkeys-translators-training-ai-replacements/">discussed</a> in <a href="https://thenextweb.com/news/translators-losing-work-ai-machine-translation">the media</a>: That translation work is <a href="https://www.theguardian.com/books/2024/apr/16/survey-finds-generative-ai-proving-major-threat-to-the-work-of-translators">uniquely vulnerable to AI</a>. </p><p>To wit: After I put out <a href="https://www.bloodinthemachine.com/p/did-ai-kill-your-job">the call for AI Killed My Job</a> stories, I heard from a <em>lot</em> of translators, interpreters, and video game localizers (essentially translators for in-game text, design, and dialogue). Of all the groups I heard from, translators had some of the most harrowing, and saddest, stories to share. Their accounts were quite different from <a href="https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39">those described by tech workers</a>, who were more likely to lament management’s overuse of AI, a surfeit of dubious code in digital infrastructure, hasty layoffs, or the prospect of early retirement. </p><div><hr></div><p><em>Previously in this series: <a href="https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39">AI Killed My Job: Tech workers</a>.</em></p><div><hr></div><p>For most translators, early retirement is unthinkable. Many of the translators I heard from were underpaid and precariously employed <em>before</em> the AI boom hit, and had stuck with the field because they loved the work despite the downsides. Now, as you’ll see in the stories below, many have seen truly dramatic drops in their income. Multiple accounts describe work drying up almost entirely, and the prospect of having to change careers at a time when peers in their age group are thinking about retirement. 
</p><p>I also heard from a lot of game localizers working with Chinese mobile games in particular, perhaps because it’s an industry that touches both media and tech, where leadership may be more disposed to embrace AI initiatives. I received too many to include them all here, but suffice to say, the stories almost all described games companies drastically lowering rates, increasing reliance on AI for translation (with or without human editors), and slashing in-house localization staff.</p><p>In an interesting—and rather telling—wrinkle to the AI boom story, many translators noted that generative AI didn’t usher in any revolutionary improvement to already-existing technologies that have been used to automate translation for years. Long before AI became the toast of Silicon Valley, corporate clients had been pushing lower-paying machine translation post-editing (MTPE) jobs<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, or editing the output of AI translation systems, though many translators refused to take them. Others said Google Translate had long been able to essentially what ChatGPT does now.</p><p>Yet many describe a dramatic disruption in wages and working conditions over the last two years, coinciding with the rise of OpenAI. Though my sample size is small, these stories fit my thesis that <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now">the </a><em><a href="https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now">real</a></em><a href="https://www.bloodinthemachine.com/p/the-ai-jobs-crisis-is-here-now"> AI jobs crisis</a> is that the drumbeat, marketing, and pop culture of "powerful AI” encourages and permits management to replace or degrade jobs they might not otherwise have. More important than the technological change, perhaps, is the change in a social permission structure.</p><p>Not one but two accounts detail how many translators dismissed ChatGPT at first, because they’ve heard companies tout many automation technologies over the years, all with limited impact—only to see the floor drop out now. And it’s not that ChatGPT is light years better than previous systems (lots of post-AI translation editing is still required), it’s just that businesses have been hearing months of hype and pontification about the arrival of AGI and mass automation, which has created the cover necessary to justify slashing rates and accepting “good enough” automation output for video games and media products. Everyone else is doing it, after all. </p><p>Yet much stands to be lost, even aside from decent wages and the livelihoods of the translators and interpreters who help make our cultures better understood. The quality of translations across the board, from video games to corporate communiques stands to decline, with AI output, according to interviewees, often being homogeneous, blind to local details, or flat-out wrong. Nuances about places and cultures, recognizable to a knowledgeable human interpreter risk disappearing, sanded down by blunt-force automation. It’s not overly dramatic to say that we risk losing the capacity for cultures to understand one another better if we’re all simply feeding output into each other’s automated translation systems. </p><p>These risks are existential enough that groups are organizing to push back. 
The <a href="https://www.guerrillamedia.coop/en/translators-against-the-machine-a-call-to-arm-ourselves-against-precarity-technological-tyranny-and-obsolescence/">Translators Against the Machine</a> initiative is <a href="https://www.guerrillamedia.coop/en/translators-against-the-machine-open-call-for-articles-on-the-translation-industry/">gathering stories and data</a> about what it’s like to work in the industry right now, in a bid to grow solidarity among far-flung workers, and to “unite and join forces to rescue the translation profession from the claws of a market that aims to make us irrelevant and expendable.” The English-to-French games translator Lucile Danilov, who we’ll hear from shortly, has <a href="https://locdandloaded.net/2025/05/13/human-cost-ai/">worked to poke holes</a> in the ways that AI companies have been pitching AI translation to games companies. Forums and message boards are <a href="https://www.reddit.com/r/TranslationStudies/comments/1mddwuo/rant_about_ai_from_clients_pov/">seething</a> with discontent. </p><p>It’s of course unclear what the future holds, but there’s <a href="https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-its-propping">a growing sense that the AI phenomenon is more bubble than boom</a>. As such, rather than viewing the enterprise AI frenzy on Silicon Valley’s terms, as <a href="https://www.bloodinthemachine.com/p/the-ai-jobs-apocalypse-is-for-the">an inevitable jobs apocalypse</a>, we have an opportunity to view it on material terms, and examine how it’s actually playing out on the ground. On those terms, we see managers, executives, and corporations using rebranded automation software to increase volume and cut labor costs, starting with the most precarious workers. After all, an AI system does not have to be super-powerful for management to use it to degrade, deskill, and kill jobs. This, it seems, is what translators, interpreters, and localizers are experiencing, right now, on the front lines of the real AI jobs crisis. 
And these are their stories.</p><div class="captioned-image-container"><figure><figcaption class="image-caption">Illustration by Koren Shadmi.</figcaption></figure></div><p><strong>Three very quick notes before we move on. First, </strong>this newsletter, and projects like AI Killed My Job, require a lot of work to produce. If you find this valuable, please consider becoming a paid subscriber. With enough support, I can expand such projects with human editors, researchers, and even artists—like Koren Shadmi, who I was able to pay a small fee for the 100% human-generated art above, and Mike Pearl, who is coming on to help edit installments in this project. If you would like an alternate way to offer support, I now have <a href="https://ko-fi.com/brianmerchant">a Ko-fi page</a>. Many thanks.</p><p><strong>Second</strong>, if <em>your</em> job has been impacted by AI, and you would like to share your story as part of this project, please do so at <a href="mailto:AIkilledmyjob@pm.me">AIkilledmyjob@pm.me</a>. I would love to hear your account—and will keep it confidential as I would any source. <strong>Third</strong>: I'm partnering with the good folks at <a href="https://perfectunion.us/">More Perfect Union</a> to produce a video edition of AI Killed My Job. If you're interested in participating, or are willing to sit for an on-camera interview to discuss how AI has impacted your livelihood, <a href="mailto:AIkilledmyjob@perfectunion.us">please reach out</a>. Thanks for reading, human, and an extra thanks to all those whose support makes this work possible. I have countless more stories in fields from law to journalism to customer service to art to share. Stay tuned, and onwards.</p><div><hr></div><h2>Translators have always been among the most threatened by automation</h2><p>Translators have always been among the professions most threatened by automation, long before the advent of AI, through the development of machine translation engines like Google Translate or DeepL. But in recent years, the situation has dramatically worsened, despite LLMs producing consistently mediocre results.</p><p>The very definition of translation is not to convert words, but <strong>meaning</strong>. And while LLMs are able to replicate human speech patterns with eerie accuracy, it bears reminding that they don’t think or understand like the human brain does. 
Which means that editing LLM outputs often takes as much time as translating from scratch, if not longer.</p><p>Despite that, in an effort to cut costs and lower turnaround times, many translation agencies have been increasingly switching to a business model revolving around MTPE (Machine Translation Post Editing), slashing rates and often compromising the quality of the final product. This practice has long been seen as a bane by most translation professionals, who feel like their skills amount to a lot more than mere word-assembly lines to target the lowest common denominator.</p><p>Now, the concept of “polishing” a machine output is bleeding across all industries, and many are starting to realize that translators were the proverbial canaries in the creative coal mines.</p><p>-Lucile Danilov</p><h2>Terrible Google translations once made the idea of automated translators laughable. I’m not laughing anymore</h2><p>I have been using Computer-Aided Translation (CAT) tools for the past twenty-five years, as translation has always been an area of focus for machine learning and programming. The people creating these programs have heralded the end of human translation since the 1950s, with the Georgetown-IBM experiment in 1954. Back then they thought that it would just take a few years for machines to take over. What a joke!</p><div class="pullquote"><p><strong>When I read their site, you could see all of the telltale signs of AI: acronyms and terms translated inconsistently, as well as the usual weird constructions and some of it just plain nonsense.</strong> </p></div><p>Since that did not happen for decades, most of us in the translation industry scoffed at the idea that computational linguistics would ever find a way to replace us. I indeed use machine translation in my practice, much like an accountant uses software or an airline pilot uses autopilot. These tools are meant to take over routine tasks and reduce fatigue. (Although, even when using these tools, I still suffer from back and neck issues from sitting at a computer all day.) They help, but the idea that they can replace a human translator is ridiculous.</p><p>With ChatGPT, I honestly didn't think it would change anything, and assumed that people would still think of it with disdain as they did Google Translate (at least, they do where I live). But for the past year or so, I've noticed that many clients have had less volume. When I asked them whether they had work for me, people would say that of course I was first on their list, but they just didn't have anything. For the past six months, I have seen translators on LinkedIn that I know say that the same thing is happening to them. The work has dried up.</p><p>While I didn't have any real proof that my clients were choosing ChatGPT over me, a recent project showed that they are using this (or DeepL or some other AI tool). I was asked to translate a large document for an existing client I hadn't heard from in a while. They told me to refer to their website for their terminology. I said okay, wondering who exactly had been translating their site. When I read their site, you could see all of the telltale signs of AI: acronyms and terms translated inconsistently (this is a big sign), as well as the usual weird constructions and some of it just plain nonsense. </p><p>So now this creates work for me: I have to somehow refer to and use this slop while still doing a professional job. 
Then, when I have no choice but to change it, I have to write nice, diplomatic notes about the change!</p><p>I also learned in this big document about all of the content they produce to communicate with their audience: bulletins, emails, web content. And I think, <em>They cannot be using AI for all of this? The result must be bad. </em>But, according to their statistics, they still have a 50% open rate on their emails and people don't unsubscribe. (This is about the same as when I was translating their emails.) So, are they getting a cheaper translator or in-house staff to work on this content? Does the audience just not care that the translations are bad? I think, to some extent, people are inured to bad translations. "Well, I guess it's in English, so that's better than nothing, whatever." I don't know.</p><p>Another thing that seems to be happening (based on my anecdotal experience only) is that translation agencies are investing in this tech and then gobbling up the work from freelancers. And translation agencies really don't pay well, and even less for what we call "post-editing" (a fancy term for "fix the machine"). I recently put in a quote to a regular client for a contract at about 2/3 of the regular price that I charge, and they said that I was still too expensive. As a comparison, during the pandemic, I had to raise my prices as I was so busy. And so, now what, I have to lower them? What message does that send to clients about the value of my work?</p><div class="pullquote"><p><strong>There is likely a big talent gap coming at the top of the profession as people retire over the next decade or so. </strong></p></div><p>Add the tariffs to this mix, and my economic life has completely turned upside down. No one wants to invest in new initiatives and projects. I do have diverse revenue streams, as I teach English online and edit novels, but teaching and editing do not pay nearly what translation does. I will be lucky to make $15 to $20 an hour teaching and editing, when translation pays at least 2 to 3 times that. So here I am, still 15 to 20 years from retirement, and I have had to put so much energy into "reskilling" for teaching and editing, which, even if they don't pay as well, are jobs that require in-depth, professional levels of skill! I laugh when politicians think that "reskilling" is this magical thing that anyone can pick up and do. Going back to school in your 40s and 50s when you still have kids to take care of, a house to manage, finances to manage, health to manage, aging parents to worry about and take care of, etc., etc. is one of the biggest jokes on working people. Fuck, am I tired.</p><p>I do count myself lucky, as even though my income has been cut in about half, I still have enough to live on for the moment. I have health insurance and retirement benefits through my province. (I also got a year of maternity leave, even though I am a freelancer, thank goodness.) I have no idea how people in the US without this safety net can get by without these essential programs.</p><p>However, it's not like simply living on less is a great option either. Housing in Canada is extraordinarily expensive. Food is expensive. And my employment is very precarious. I always have to plan for the day when the worst will happen. I can never relax. Living in this AI-driven economy is always expecting the rug to be pulled out from underneath you. I invested in my profession for years, and now I don't get to reap the benefits of that investment. 
I have to start all over again, always putting energy into this bottomless pit they call work. <br><br>For me, AI means white knuckling it your entire life until you retire. What joy, what rapture unforeseen!</p><p>-Anonymous</p><h2>AI killed my job twice, maybe three times</h2><p>AI killed my job. I think I can even say it's killed my job twice (possibly three times!?). With more to come!??</p><p>I graduated from my translation MA in 2010. I was in-house for a few years and then went freelance and was doing quite well—I was always an early adopter of new tech so was one of the first to take MTPE work. I saw that change my dynamic and then have seen it happen again once I sidestepped into copywriting around 2023, just as AI was really ramping up. I helped train a model for a big company... And then they got rid of me!</p><p>I've now done yet another pivot and I'm working in email building, using tools like Klaviyo and Braze, but I imagine that's vulnerable too.</p><p>I'm only 40. I never imagined my career as I knew it would be wiped out like this.</p><p>-Anonymous</p><h2>I’ve translated documents for nuclear power plants. Now I’m facing bankruptcy</h2><p>I've been a technical translator for 15 years, self-employed all the way. I enjoy it, I am good at it. I translate complicated, demanding material—mainly medical and pharmaceutical, like the UI and user guides for MRT imaging devices, or patient information and consent forms for clinical trials, or subtitles for a presentation on the side-effects of this or that new drug. I've translated documentation for the specialty filters you need in cooling loops for nuclear power plants and I've translated manuals for assembly systems for aircraft construction. I get to dive into obscure sub-specialties of technical fields and learn about stunning feats of engineering nobody has ever heard of. It's fun. In a field where everybody seemed perpetually on the brink of starvation, I was able to make a good living. There were always ups and downs, but I managed to clear six figures in the good years and didn't have to worry too much in the bad years. I worked long hours, I worked a lot of weekends, but I felt it all balanced out. </p><div class="pullquote"><p>Sooner or later, the AI companies will have to stop losing money and adjust their pricing. And then it'll turn out that using AI for everything gets you worse results than humans, at the same cost. </p></div><p>2025 has been absolute shit so far. Entire months went by with zero work. And the requests that are now coming in—almost all "PED.” Post-editing is when you run your text through a machine translation and have it reviewed and edited by a human. It's been around forever—since way before the current AI hype. It pays a quarter of what you'd get for translation work. And if you do it properly, it takes you just as long as translation. So I would summarily reject PED requests. I'd take one or two per year just to take a look at the current state of the art, and invariably found, happily, that machine translation was still awful and I was going to be fine. </p><p>As of today, I've earned maybe €8000 [about $9,300] this year. Requests are 90% PED. Unrelated calamities have drained the vast majority of my savings (just lucky, I guess). There is a very real possibility I'll end up in personal bankruptcy. </p><p>Machine translation hasn't even improved. There was no big OpenAI moment. I'm starting to suspect it's an unhappy coincidence of sunk costs and economic downturn forcing us all down this path. 
And you know what? I started learning to code—needed something to do after all. And ChatGPT and Claude started off as amazing, helpful tools. Then at some point you've got the basics down and you're trying to do marginally more complex things—and you notice how quickly they lose track and fall apart, how needlessly complicated their solutions are, how your entire architecture turns into a mess of barely-functional spaghetti. Does this stuff work <em>anywhere</em>? My IT friends complain about being forced to use whatever hot new AI tool, and their companies have stopped hiring for junior positions. My own industry seems broken. After sending this mail, I'll have to do some tedious, underpaid post-editing. I'll hate it. Whoever has to actually use the documents will hate it. </p><p>I believe this will pass. Sooner or later, the AI companies will have to stop losing money and adjust their pricing. And then it'll turn out that using AI for everything gets you worse results than humans, at the same cost. And that will be that. I hope I can hang on until then. </p><p>-Julian Pintat</p><h2>A brain drain is coming</h2><p>I'm a translator trainer at the University of Geneva, training people to work at the UN, WHO, etc. One of our big challenges is getting young people through the doors to train—there is likely a big talent gap coming at the top of the profession as people retire over the next decade or so. </p><p>-Susan Pickford</p><h2>I was a different kind of translator, but AI hollowed out the work</h2><p>I was working as an accessible information writer. We would translate technical documents into Plain language (think gov sites) or instructions into Easy English (think “How to Catch a Train” for people with intellectual disabilities).</p><p>Although AI was expressly banned from being used to actually write the documents, AI was being used to check the documents, and those modifications then had to be used to re-edit the documents.</p><div class="pullquote"><p>Even though AI was not directly being used to write the documents, because it sat in the middle of the process it may as well have been. The outcome was unusable work for which the writers were being blamed. </p></div><p>I’m not sure if management realized they were getting AI to write these documents—just with extra steps—or if they thought they were somehow bypassing internal policy, or if they thought this maintained privacy. It was quite plain to me that this workflow was not doing any of those things.</p><p>I left the job recently because I could see where it was going. Also, because this was a top-down initiative, it was causing friction in the team. Writers were essentially being told to write for AI, then let the AI take the reins.</p><p>This might seem like, <em>sure, why not turn up to work and take the free money?</em> But it was actually causing massive issues. Writers were being put on notice when our documents went through peer review. No one on the peer review team would agree to a final copy. And so the sausage was fed back into the machine, only to be stopped at peer review again. Then the writer was held accountable.</p><p>Even though AI was not directly being used to write the documents, because it sat in the middle of the process it may as well have been. The outcome was unusable work for which the writers were being blamed.
Sad stuff.</p><p>-”FF”</p>
class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><h2>AI-only translation is happening now</h2><p>I had been working on a mobile game for years and the agency that manages them recently told me the game will only have AI translation with no human proofreading, for my language pair and many others.<br><br>Generative AI in games is no surprise, most devs that have no budget resort to DeepL or Google, but the shift that I’ve seen is coming from big players and games that do earn money and are able to play localization: they just don’t want to anymore. Plus, they lower your rates so when the MTPE comes, you view it as you have no choice, because it’s either that or not working at all.<br><br>We’re at the expense of rich folks that want to be in the loop, discrediting our job while they do not know what it entails.</p><p>-Tamara Morales</p><h2><strong>After 14 years of translating to English in Rome, I’m considering cleaning houses </strong></h2><p>I'm an Italian to English translator living in Rome. I've done this job freelance for 14 years now, before that I worked in the music industry at a startup in SF. In these past 14 years as a translator, I've worked hard but I've also learned a lot while doing something I deeply love. Some of my clients, mostly agencies, started asking about MTPE a few years ago, and I told them that revising a machine translation takes me longer than a translation from scratch so why should I accept half my rate for it? Some of them started offering MTPE to their clients, but not all of them, and many of the translations I work on aren't really suitable for machine translation. I didn't really see a drop in work at that point, actually I had my best year ever in 2024. I diversified into copywriting as well.</p><p>Fast forward to June 2025. I did not receive a single work request AT ALL that month. I went from working 50-60 hours a week to essentially working zero. This month (August 2025) some work has trickled in, but it's very sporadic and unreliable, meaning I can't really do this job anymore and expect to pay my bills.</p><p>I don't know what to do at this point. I'm 44 years old, I've already changed careers in my lifetime and the job market is terrible, despite my experience as a translator, operations manager, degree in art history, fluent in 2 languages, decent and one and learning yet another. It feels like I might as well just start cleaning houses for a living, at least that's steady work and hasn't been replaced by AI yet. 
I have to wonder: once they've pushed us all out of our jobs, who will have the money to buy the products and services that capitalism requires of us?</p><p>-Katherine Kirby</p><h2>We’re being paid half as much to do lower-skilled work</h2><p>I have formal training in translation and have been working in the translation industry for 15 years, 5 years as a translator and 10 as a translation project manager. I'm 40 at the moment.</p><p>Work has been depressing, to say the least. All the projects I receive are AI translated, and many of the translators I work with complain about the quality and the lack of work.</p><p>The clients don't care; all they see is a cheap way to translate stuff, and the faster, the better. Translators are now post-editors or reviewers. Quality in translations has been decreasing, but no one seems to care.</p><p>I work part-time as a freelance project manager and have been trying to get some freelance translation jobs on the side. All the job posts I see are for "AI trainer", "AI translation assistant", "AI assisted translator", etc.</p><div class="pullquote"><p>Clients don't care if it takes me 2 hours to go through a text and proofread it. The AI takes 30 seconds to write it, so they want the translators to proofread it in 5 minutes! </p></div><p>It's disheartening. My industry has always been underpaid: for my language pair (English-European Portuguese), the average rate is €0.04 per source word. Now it's €0.02 per source word for post-editing AI translations. Many translators are accepting these rates because otherwise they would earn nothing.</p><p>I'm mentally exhausted just thinking about this. I want to change jobs; I want to work with something that will not involve computers and AI, because I fear many jobs will be killed by AI.</p><p>-Anonymous</p><h2>My work in corporate communications has come to a complete stop</h2><p>I’ve been a freelance French-English translator since 1997, working primarily in corporate communications for large companies in France. My work started gradually diminishing about two years ago but has come to a complete stop this year, and I’m having to find other sources of income.</p><p>I have always worked primarily for translation agencies that serve large companies and subcontract the actual translation work to me, although I have (or had) some direct clients as well. Machine translation has been around within the translation industry for many years now, on basically the same basis as today’s AI. </p><p>Translation customers were aware of that, but since the translation service providers maintained the translation memories, they retained the upper hand, while freelance translators like me were mostly relegated to editing the computer output (a tedious task that never paid as well as translating). Now I suppose companies have realized they don’t need the outside provider and can just feed their text into a program like DeepL. About 5-7 years ago I started spending a far greater share of my time on editing computer output rather than translating, but my overall volume of work stayed roughly the same until two years ago. Now I’m getting virtually nothing. It’s certainly very rare now that I get a request to simply translate a document.</p><p>I’m 62.
Translation has never been a high-paying career (my rates have barely changed since 1997; there’s intense downward pressure on rates, partly because competition is global) and I planned to continue working until I was nearly 70, but this has been a very disruptive change—at my age it’s very difficult to start a new career or even get hired.</p><p>-Anonymous</p><h2>In 2019, companies would reach out to me. Today, I’m the one reaching out—and often being ignored.</h2><p>I’m a 32-year-old from Italy, [and] I think that in the U.S., people are underestimating the impact AI is having on the millions of remote workers worldwide who, for over 15 years, have been silently doing much of the behind-the-scenes work for tech companies. I personally know hundreds of remote workers from Europe, Asia, and South America who are now struggling because of AI: Spanish translators from South America, low-level programmers from India, editing and graphic design experts from South Asia… Why hire them when AI can now do 95% of their job?</p><p>But let's go back to my personal situation: I studied History at university, but as you can imagine, finding a job related to that field in Italy proved nearly impossible.</p><p>In 2019, I changed paths and began working as an English–Italian translator. I studied and worked as a freelancer, collaborating with several agencies and clients for years—though none ever offered full-time employment (I know it's hard to find a full time contract freelancing, but after 5 years?). Still, I was happy, my clients were satisfied with my work. I earned more than enough to get by in Italy and enjoyed the work.</p><p>Back in 2019, tools like Google Translate were widely mocked in the translation community. We could easily spot machine-translated text, and we felt confident that no machine could truly replace us.</p><p>But something changed around 2022–2023. Large Language Models started producing output that was “good enough” to fool non-specialists, and good enough for large volume-low-quality jobs (like translating UI/UX, web marketing content, low-tier advertising and articles). I began getting complaints from clients who had unknowingly purchased machine-translated content. At first, this led to more work for me, as I was hired to fix these flawed translations.</p><div class="pullquote"><p><strong>He gave us all claims that it was to “optimize efficiency” or “refocus on more profitable performance tasks,” but away from the others he admitted to me that it's mostly so the company could have more free capital on hand in order to compete for licenses better. </strong></p></div><p>Then came ChatGPT and a visible shift in the industry. Starting in 2023, we saw a massive drop in demand—probably over 70% from my personal experience looking at job offers online and by clients. Companies began using AI to translate everything: websites, terms of service, contracts, blogs, and internal documents. The amount of work available shrank dramatically, but the number of translators stayed the same. Universities still churn out thousands of new language professionals every year.</p><p>Back in 2019, companies would reach out to me, asking me to work with them. Today, I’m the one reaching out—and often being ignored. Rates have collapsed. Where I used to earn $0.03–$0.05 per word (already considered low by industry veterans), now most offers are for post-editing machine translations at around $0.01 per word. 
The more advanced the AI is for a language pair—like English, Italian, French, German, Spanish, or Portuguese—the lower the pay. Clients don't care if it takes me 2 hours to go through a text and proofread it. The AI takes 30 seconds to write it, so they want the translators to proofread it in 5 minutes! </p><p>Now, at 32, I find myself forced to start over once again—searching for a new career from scratch. But nearly every job posting related to soft skills is either “entry-level” with two years of experience required, or a ghost listing that never leads to a response. Just this week, I was interviewed for an internship—and rejected—by an AI after completing an automated test.</p><p>I understand that I’m probably a mediocre worker. I’m not part of the top 5%, and I’ll likely never learn to code or network my way into a company like Google. But what does the future hold for people like me? For the other 95% of the population who can’t afford to constantly reskill or upskill every couple of years just to keep up? </p><p>-Anonymous</p><h2>Those who turned English translations of Chinese into other languages were the first to go</h2><p>I’ve been working in localization for a Chinese game company for a number of years. I enjoy my job—the vast gulf between Chinese and English means I have a lot of creative freedom to make tweaks and changes to the text and add things like little references for English speaking audiences to enjoy. But over the past few years there’s been an increasing switch toward using AI for translation work.</p><p>We’ve so far managed to convince management that Chinese-English translation should remain human. But we also do what’s called pivot translation—that is, translating from Chinese to English then English to French/Spanish, etc. In this field we generally used skilled freelancers, but now the shift is to MTPE. I hate the fact we’re taking money away from skilled translators, but we’ve had no way to push back on it as the cost savings have been significant, those players don’t seem to mind, and the higher-ups don’t seem to care much about the opinion of players from those language groups.</p><p>-Anonymous</p><h2><strong>Salaried translators were given a choice: Take a 50% pay cut, or resign</strong></h2><p>I currently work at a company focused on localizing adult games from Japanese to English. (Yeah, hentai games.) I used to be one of the top 3 members of said company until this April, and I've been with the company for longer than both of the other two managers.<br><br>The company I work for has been actively avoiding the use of AI in our translations due to concerns over the final output's quality at every level of our localization process. (I, myself, was one of the translators within it advocating against the use of AI.) However, this has not been true for the company's competition. In recent years a domestic Japanese publisher of these games has decided to enter the English localization market, and they have had no qualms against using AI in order to churn out mediocre products faster and at greater scale, publishing the slop on Steam.<br><br>As a result, the company I've been working for has begun struggling to acquire licenses to titles to work on period. Because of this, our board of investors chose to divest and sell off the company to the man who was its acting general manager at the time. 
Then he, effective this April (the start of this financial year), came to all of us who were in any kind of salaried position and told us we could take a 50% (or higher) pay cut to stay on, or we could walk. <br><br>He gave us all claims that it was to “optimize efficiency” or “refocus on more profitable performance tasks,” but away from the others he admitted to me that it's mostly so the company could have more free capital on hand in order to compete for licenses better. <br><br>So in short, we've all had our salaries slashed because our competitors are unafraid to make liberal use of AI to churn out barely-passable slop translations of adult titles so they can flood the market and monopolize the supply-side (the original developers).<br><br>The worst part of it all? We're not even seeing much outcry or antipathy from the fanbase, which is usually quick to criticize localizations. So we're kind of left to conclude that either the developers don't care that their titles are only seeing middling sales abroad, or customers don't care if their porn is using AI slop so they're willing to buy it anyway.<br><br>-Anonymous</p><h2><strong>AI didn’t even improve efficiency; it just made the work worse</strong></h2><p>I'm a freelance translator and interpreter. (interpreters do what people call “live translation”) and I've loved this job for as long as I've had it. I started translating when I was fifteen (helped family members with their jobs) and have been interpreting for the last five. Now, my language pair is a common one, so my job wasn't JUST killed by AI, but it sure as hell didn't help. English is the lingua franca of our day (fun fact, lingua franca comes from French being the lingua franca of ITS day), so my job as an interpreter was going the way of the dodo sooner or later and I knew it; I didn't expect it to die off this soon, but them’s the breaks.</p><p>By 2023, ChatGPT and DeepL had burst into the scene and, suddenly, no more translations (except some legal and sworn texts, which I like); which, in a vacuum, ok. I mean, not great, I'm not getting paid, but whatever. My issue is not (only) that I wasn't getting any money, but the final product itself: the translations these things offer are, in the best of cases, ok. Now, if you've never translated, you might think this job is just going over a text with a dictionary and taking Spanish word A and plugging English word A where it was (that's my pair). I haven't worked with any other languages professionally, but I can guarantee that it's not how it works for me: languages have nuances that are painfully obvious if you use them (you might not be consciously aware of them, though) but, if you don't, are invisible. Most people don't know this, so what we're getting now are mediocre translations that are a ghost of their originals and which lose everything that made them stand out. And, because people can't tell, they're happy with them.</p><div class="pullquote"><p>AI didn't kill translation, it didn't kill my job, it killed everyone's capacity to care about anything but the bottom line.</p></div><p>[These services] don't even save you that much time. They just change your workflow. I'm a lazy person (I swear this makes sense) so I learned fast to hand in ok first drafts. This means I'd devote about 60 to 70% of my time to writing a passable draft and then the rest of the time just editing it into shape. 
</p><p>With DeepL, five minutes are just dumping the text in and out, and I spend at least a day working whatever it gives me into something resembling anything closely related to decent. Then, a day punching it into an ok shape, and the rest of the week just tweaking it into something I can be fine with delivering. The end product? Something okay. For legal stuff that's fine, for anything else, not really.</p><p>The thing is that people are okay with it and are offloading a lot of work onto this service and ChatGPT.</p><p>And that's really one of the biggest issues I have with AI and what it's doing: it trivializes everything and turns it into “content” that is “good enough,” turning everything into a worthless mush; just stuff to fill our (work)days. AI didn't kill translation, it didn't kill my job, it killed everyone's capacity to care about anything but the bottom line; because the people who have traded me in for DeepL aren't even keeping the money they would've paid me or taking time off; they're just being forced into different bullshit jobs while some C-suite goof is off golfing or whatever it is they do for “fun” all while they talk about efficiency and numbers.</p><p>-Anonymous</p><h2>AI-happy execs don't appreciate how much of game translation is about nuance</h2><p>I do work in translation, but my main income comes from legal transcription editing. AI makes it more fucking annoying for sure, even though I'm an editor and not a direct transcriber anymore. I have to clean up stupid AI mistakes constantly when just paying a real person to do this would have made it smoother on all ends. The AI used cannot even determine the difference between the word stenography (a word that comes up a lot, since these are court proceedings with court reporters and videographers present) and sonography.</p><div class="pullquote"><p>An AI is not going to be able to accurately translate puns or preserve the rhyming scheme of a song while keeping the translation accurate as well. Translation is not just about looking up words in a dictionary and pasting them into a document. </p></div><p>In the case of translation, what I really have experience with are the opinions of people in niche game communities who want to play older and/or untranslated Japanese games. I worked on the newest English translation of a re-release of a vaguely popular game, which thankfully did not use any AI. A lot of people seem to understand that machine translated or AI translated games are not going to give you an accurate or enjoyable experience, but a growing number of people seem to think that they're fine and that "any translation is better than no translation."</p><p>These people don't seem to appreciate how much of game translation is about nuance, tone, and characterization. An AI is not going to be able to accurately translate puns or preserve the rhyming scheme of a song while keeping the translation accurate as well. Translation is not just about looking up words in a dictionary and pasting them into a document. Hell, it's rare to even find a word or phrase that has one single translation that can't be interpreted to mean something slightly different. A good translator needs to be not only knowledgeable, but flexible and creative as well. 
So much goes into this line of work, but it's rare that people fully recognize the full extent of effort it takes to produce it.</p><p>-Anonymous </p><h2>AI systems aren’t just driving down wages, they’re flattening culture</h2><p>I've been a freelance French-to-English translator in Quebec for 15+ years… From 2020 to 2023, I was so busy that I was turning down work, and still easily clearing six figures, primarily from freelancing for a financial institution (FI) that was paying $0.25 per word.</p><p>In hindsight, there was a brief period when translators were able to leverage tech (in my case, CAT tools) to their advantage, but as soon as AI started blowing up in the media, the secret was out. In 2024, the FI restructured its department, hiring more in-house translators with what seemed to be the goal of doing as much MT post-editing (MTPE; industry lingo for going through a machine translation line by line and making sure there are no mistakes) in-house as possible and reducing outsourcing. I chose not to apply for an in-house position because I was not interested in working as an employee after having been self-employed for my entire working life (I also saw the writing on the wall and knew that the work would mostly be MTPE for in-house translators). It wasn't long before I stopped receiving freelance work from the FI. I also chose not to pursue other freelance or agency work in MTPE because it is mind-numbingly boring, frustrating, and not worth the lower rates, so I can't speak to what the agencies have been offering their freelancers.</p><p>In 2024, my income went down 60%, and this year it's looking like it will be 80% lower than between 2020 and 2023. Of my contacts in the field, many are pursuing other careers and/or have left the profession altogether. I did pursue training in another (artistic, much less lucrative) field when I was younger, and I plan on pursuing that path, because this industry is just depressing the hell out of me. Thankfully I live in a place with a strong safety net (universal healthcare, subsidized childcare, child benefit payments), I have a partner earning enough, we have enough savings, and we own our home. If I was in a different position, I think I'd likely have to start from scratch or go to school, because there really aren't many transferable skills that are safe from AI (think copywriting, editing, etc.).</p><p>While I do think that some AIs are decent at translating, MT will need human intervention for the foreseeable future. But no translator will ever tell you they got into this field to do MTPE. </p><p>More than anything, though, I find it disheartening that instead of a society that once valorized translators as intercultural communicators and professionals who could uphold a bilingual society, we're flattening culture with AI systems that don't allow for a more organic exchange between languages. Quebec, in particular, has a rich linguistic landscape in both French and English, which can be owed to the cross-pollination of languages and cultures through human interactions, one of which is/was translation. Also, it just sucks that capitalism has found another way to undermine workers. </p><p>I was happy to have what I perceived to be the power to be on my own and work according to my wants and needs. 
But that option is no longer open to me.</p><p>-Laura Schultz</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Take note — you’ll hear this term a lot. It also appears as PED, or post-editing, which describes roughly the same process. The other acronym to note is CAT, or computer-assisted translation, which refers to the pre-GPT tools some translators use. </p></div></div>My first book is out in paperback! - Disconnect68b08cce85367d0001d2588a2025-08-20T09:30:35.000Z<img src="https://disconnect.blog/content/images/2025/08/4707dfd8-2386-4709-89ed-c1bca8af753c_2233x1256-jpeg.jpg" alt="My first book is out in paperback!"><p>Ever wonder what happened to the transport utopia so many Silicon Valley luminaries once promised us? The one where self-driving cars were going to eliminate traffic, flying cars were going to have us soaring above cities, and Hyperloops would rapidly whisk us to cities around the globe?</p><p>Well, that’s what I set out to dissect in <em>Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation.</em></p><p>The book has already been circulating for a few years, but last week it was released in paperback with a new afterword containing some reflections on what’s happened since its initial release. When I was writing it, I could hardly fathom how it would still be relevant even a year later, but I assume that’s how many new authors feel.</p><p>For me, Silicon Valley’s deceptions about self-driving cars, ride-hailing services, and so many of its other big ideas for transportation were not just about mobility; they illustrated how these founders approach real problems in our complicated society and how their tech fixes are not fit for purpose. <em>Road to Nowhere</em> may, in part, be a case study on transportation, but it reveals a broader flaw in the model of tech solutionism that billionaires have been selling us for years.</p><p>Of course, the transport ideas I criticize in the book haven’t magically materialized or significantly improved either. I dig into the history of mobility and of the tech industry to illustrate where problems like traffic, road deaths, and environmental damage came from — and how naive it is to believe that adding some fancy new tech innovation and internet connectivity is going to solve them.</p><p>Ultimately, it was following and digging into companies like Uber and Tesla that <a href="https://www.disconnect.blog/p/how-i-became-a-tech-critic?ref=disconnect.blog">helped me form the critical perspective</a> I have on Silicon Valley today — and certainly on Elon Musk too. I hope the book takes people on that journey, and unpacks it to such a degree that they have a better framework they can use to question these companies and their big ideas.</p><p>You can grab a copy from <a href="https://bookshop.org/a/18331/9781839765896?ref=disconnect.blog">Bookshop</a> or directly from <a href="https://www.versobooks.com/products/2795-road-to-nowhere?ref=disconnect.blog">Verso Books</a>.
I also made <a href="https://roadtonowherebook.com/?ref=disconnect.blog">a webpage</a> with a bunch of additional information, interviews, and all that fun stuff.</p>How California feels about AI - Blood in the Machinehttps://www.bloodinthemachine.com/p/how-california-feels-about-ai2025-08-19T20:45:16.000Z<p>Greetings friends,</p><p>First of all, thanks so much to everyone who <a href="https://bsky.app/profile/bcmerchant.bsky.social/post/3lwcl5c5ss22m">upgraded their subscriptions</a> and helped me clear out my garage in the process. I got so many requests for books that not only did I get relieved of all my remaining <em>Blood in the Machine</em> copies, but I had to ask the publisher if they had any more lying around I could send out to folks. Fortunately they did! They’re shipping me a box so I can sign them before mailing them out, and we’ll see if that covers everyone, but I’m completely drained of <em>Blood</em> at this point. That said, I still have copies of <em><a href="https://www.hachettebookgroup.com/titles/brian-merchant/the-one-device/9780316546119/?lens=little-brown">The One Device</a></em> and <em><a href="https://us.macmillan.com/books/9780374602666/terraform/">Terraform: Watch/Worlds/Burn</a></em>, so the offer still stands for signed copies of those. But it’s been more fun than I expected, hearing your stories about tech, AI, and luddism. So I might try something like that again in the future—maybe with extra copies of all the tech and SF books publishers are always sending me.</p><p>Onto today’s edition: I got early access to the findings of a new in-depth survey of a thousand-plus Californians that reveals some important insights about how voters are thinking about AI here. (Which matters of course since California is one of the few remaining places in the US with the power to meaningfully govern AI at all). Spoiler: Californians don’t love it. </p><p>We’ll also look at a breakdown of how AI is driving up energy prices, the latest devastating account in a fast-growing field of tragic stories about AI, depression, and loss, and Grok’s unhinged persona prompts. As always, you subscribers make this work possible. If you find value in original and critical reporting on AI, deep dives on labor automation, or polemics on the ascendant tech oligarchy, consider upgrading to a paid subscription so I can continue writing more of them. Finally, the next installment of AI Killed My Job is due out later this week, so stay tuned for that. Onwards, and hammers up. </p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p><em>Edited by Mike Pearl</em></p><div><hr></div><h2>California is extremely critical of AI</h2><p>Residents of the state that launched the modern AI boom are deeply skeptical of the technology, and are overwhelmingly in favor of regulating AI companies, <a href="https://techequity.us/2025/08/19/how-californians-feel-about-ai/">a new in-depth survey</a> of Californians’ attitudes towards the technology finds. 
</p><p>This is crucial data because, as readers of this newsletter well know, given <a href="https://www.bloodinthemachine.com/p/trumps-ai-action-plan-is-a-blueprint">the Trump administration’s quest for American AI dominance and deregulation</a>, if there’s going to be any meaningful democratic governance of AI in the United States at all over the next few years, it’s going to come from the states. And a lot of it’s going to come from California. </p><p>TechEquity, a tech accountability group, spearheaded the research, and interviewed 1,400 Californians about their feelings on AI. The findings were stark: 55% were more concerned than excited about AI, while only 33% expressed more excitement than concern. Meanwhile, 59% thought that “AI will most likely benefit the wealthiest households and corporations, not working people and the middle class.” And both Democrats and Republicans shared that view. </p>
<p>“Californians are more concerned than excited about advancements in AI,” Catherine Bracy, the CEO and founder of TechEquity, told me. “Many feel it is advancing too fast, and are concerned about AI-fueled job loss, wage stagnation, privacy violations, and discrimination.” (Nearly half of Californians think that AI is advancing too fast, according to the poll, compared to just a third that think the current pace is acceptable.) “Both Democrats and Republicans agree that AI will most likely benefit the wealthiest households and corporations but not working people and the middle class.”</p><p>And perhaps most importantly, a full <em>70% of Californians</em> were in favor of “strong laws” that regulate AI. Now, this matters deeply, as those voters’ opinions are going to be the driving force for lawmakers who hope to blunt big tech’s power and prevent AI from becoming a wild west of worker surveillance, digital addiction, and labor automation. </p><p>I’ve spoken with <a href="https://www.bloodinthemachine.com/p/de-democratizing-ai">a number of California reps and senators at this point</a>, most of whom want to prevent AI systems from, say, propagating discrimination, surveilling workers, or automating hiring and firing decisions, and many who’ve proposed or supported bills that would do so. But they’ll now be up against a Silicon Valley that can direct its cannons of capital and influence at Sacramento rather than Washington. </p>
<p><a href="https://www.bloodinthemachine.com/p/trumps-ai-action-plan-is-a-blueprint">Trump's AI Action Plan is a blueprint for dystopia</a></p><p>OpenAI, Google, Meta, and the like are already no doubt enlisting armies of lobbyists to fight, by any means possible, the slate of bills that will attempt to rein in AI products. They’ll be mounting a campaign to whisper in the Valley-friendly governor’s ear to secure vetoes. </p><p>“Our research found that while trust in the government to control AI is low across the board, people are less negative about Sacramento than Washington,” Bracy told me. “The root of this mistrust is that legislators are overly influenced by the tech industry. The polling makes clear that, regarding AI, support for policymakers depends on visible independence from tech industry influence.”</p><p>As such, lawmakers might want to pay keen attention to the fact that Californians understand the risks posed by AI products so well.
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!49D_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!49D_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png 424w, https://substackcdn.com/image/fetch/$s_!49D_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png 848w, https://substackcdn.com/image/fetch/$s_!49D_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png 1272w, https://substackcdn.com/image/fetch/$s_!49D_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!49D_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png" width="1456" height="1209" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":1209,"width":1456,"resizeWidth":null,"bytes":423760,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/171316281?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!49D_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png 424w, https://substackcdn.com/image/fetch/$s_!49D_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png 848w, https://substackcdn.com/image/fetch/$s_!49D_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png 1272w, https://substackcdn.com/image/fetch/$s_!49D_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e48373f-64ce-492e-a2bb-35e87e9c58ce_1836x1524.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide 
lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">caption...</figcaption></figure></div><p>As TechEquity put it in a statement:</p><blockquote><p>A supermajority of Californians have significant concerns about AI and want government to create guardrails on AI tools and the companies that build them. Clear majorities of respondents are concerned about AI-fueled job loss, wage stagnation, privacy violations, and discrimination…</p></blockquote><p>And some more of the key findings: </p><blockquote><p>An overwhelming majority favors policies including those that:</p><ul><li><p>Protect privacy (81%)</p></li><li><p>Enforce civil rights (73%)</p></li><li><p>Enact non-discrimination rules (73%)</p></li></ul><p>Californians are most concerned with AI’s impact on</p><ul><li><p>Creating deepfakes (64%)</p></li><li><p>Spreading disinformation (59%)</p></li><li><p>Violating personal privacy (58%)</p></li><li><p>Reducing wages (55%)</p></li><li><p>Replacing low-paying jobs (52%)</p></li></ul></blockquote><p>“Our polling finds Californians echoing what we are seeing in poll after poll from across the country: voters are telling their representatives not to trust tech companies to self-govern,” Bracy says. “And this is not because they are anti-technology. It’s because they want companies to be held accountable, and aren’t willing to sacrifice safety and fairness for innovation.”</p><p>There are now decades of precedent. Silicon Valley companies have deployed products to manipulate users, surveil workers, and spread disinformation. Voters know that—they can already <em>see </em>that—a hands-off approach to AI will simply be more of the same, and very possibly worse.</p><p>“California legislators have a golden opportunity this fall to help rebuild trust with the public by showing whose side they are on,” as Bracy put it. “Conversely, those who side too closely with industry are likely to pay a political price for it.”</p><div><hr></div><h2>AI is spiking electricity prices across the US</h2><p>Most of us know by now that the AI boom has led to a mad dash to build out data centers across the US, and to find energy sources to power them. What’s been less explored is how much all this is costing us in real dollars, on our electricity bills.</p>
<p>
<a href="https://www.bloodinthemachine.com/p/how-california-feels-about-ai">
Read more
</a>
</p>
The AI boom is fueling a land grab for 'Big Cloud' - Blood in the Machinehttps://www.bloodinthemachine.com/p/the-ai-boom-is-fueling-a-land-grab2025-08-15T18:14:17.000Z<p>Greetings all, hope everyone’s hanging in there.</p><p>So I’m <a href="https://www.theverge.com/openai/758537/chatgpt-4o-gpt-5-model-backlash-replacement">not the only one</a> <a href="https://www.wired.com/story/openai-gpt-5-backlash-sam-altman/">who</a> thinks <a href="https://www.bloodinthemachine.com/p/gpt-5-is-a-joke-will-it-matter">GPT-5 was a joke</a>; as Axios <a href="https://www.axios.com/2025/08/12/gpt-5-bumpy-launch-openai">noted</a>, it “landed with a thud.” The reverberations of the failed launch continue to be felt throughout the industry, animating the idea—and many <a href="https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this">a think piece</a>—that the AI revolution may well have hit a wall. But there’s one group in big tech that most certainly has not: cloud providers.<strong> </strong>Meanwhile, news broke that Trump wants to remove the export controls on Nvidia chip sales to China, as long as the company gives the US government a 15% cut of its sales, despite such a deal being rather unconstitutional. Tech oligarchy in action, folks! </p><p>We’ll cover all that and more in today’s edition, but first, in order for me to continue getting this critical reporting on AI past the export controls, consider upgrading to a paid subscription. This is a 100% reader-supported publication and I can only do this work thanks to those of you who chip in $6 a month (or $60 a year). At this point, I’m working many more than 40 hrs a week reporting, researching, and writing these pieces, because there’s simply so much to cover<em>.</em> You’ll get access to Critical AI reports, and I’ll even take a cue from President Deals and sweeten the pot: if you upgrade to a yearly paid subscription this week, I’ll send you <a href="https://bsky.app/profile/bcmerchant.bsky.social/post/3lwcl5c5ss22m">a signed copy of one of my books</a>. Just shoot me an email at briancmerchant(at)proton.me with the receipt, tell me which you’d prefer, and where I should ship it. (While supplies last of course!)</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>Also perhaps of interest: I <a href="https://www.kqed.org/forum/2010101910899/ai-reshapes-the-economy-and-roils-geopolitics-even-as-gpt-5-fizzles">joined KQED’s Forum on Wednesday morning</a>, along with WIRED’s Zoe Schiffer and MIT Tech Review’s Mat Honan to talk about GPT-5 and the AI bubble with host Alexis Madrigal. (A good time to note, I suppose, that I’m reading Alexis’s book, <a href="https://us.macmillan.com/books/9780374159405/thepacificcircuit/">the Pacific Circuit</a>, and so far it’s very good.) It was, I thought, a great conversation. Okay! Enough housekeeping. Onwards, and hammers up. </p><div><hr></div><h1><strong>The growing power of Big Cloud</strong></h1><p><strong>How Microsoft, Google, and Amazon are quietly concentrating their power in the age of AI.</strong></p><p>Back in July, Nvidia became the first public company <a href="https://www.nytimes.com/2025/07/10/technology/nvidia-4-trillion-market-value.html">to reach a $4 trillion valuation</a>. 
Despite this being, as the New York Times pointed out, one of the fastest ascents in Wall Street history, the milestone was curiously unremarked upon in wider culture. There are probably a number of reasons for this. One is that it’s a straightforward, yet unsatisfying, or even troubling narrative: Nvidia became a $4 trillion company because it sells chips to the tech companies building AI products, and those products are at the center of a boom (and very probably a bubble) that has utterly consumed Silicon Valley (and <a href="https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-its-propping">also much of the US economy</a>).</p><p>Nvidia is selling shovels during the AI gold rush, <a href="https://www.google.com/search?q=nvidia+selling+shovels+gold+rush&oq=nvidia+selling+shovels+gold+rush&gs_lcrp=EgZjaHJvbWUyBggAEEUYOdIBCDQ5MTlqMGo3qAIAsAIA&sourceid=chrome&ie=UTF-8">as many have observed</a>. As such, it has become the bellwether for the entire AI boom: As long as AI companies are expanding operations and building more data centers, as they are right now, they will need more Nvidia chips, and Nvidia’s value will continue to rise. This is how, as we’ll dive into later, Nvidia CEO Jensen Huang landed himself in the position to negotiate exemptions to national export controls ostensibly put in place for matters of national security directly with the president of the United States. </p><p>But Nvidia is not the <em>only </em>bellwether. There’s a reason that Microsoft—not Apple or Meta—became the second $4 trillion company, just weeks after Nvidia crossed the threshold. (Another reason the Nvidia news didn’t make as much of a splash is that the sight of a tech company vaulting past trillion-dollar milestones has become much more common and less notable, to the detriment, I think, of the stability of the global economy. I digress.)</p><p>There are a number of reasons that investors are bullish on Microsoft right now: its enterprise software suites are ideal for deploying AI automation tools, and its early deal with OpenAI linked the companies and gave the older giant a reputational boost, as well as an enormous client for its cloud services. And that, I think, is the biggest takeaway from Microsoft’s surging valuation. As <a href="https://www.bloomberg.com/news/articles/2025-07-30/microsoft-set-to-hit-4-trillion-market-cap-after-earnings-beat">Bloomberg reported</a>, “Microsoft reported better-than-expected growth in its cloud business, and its closely-watched Azure cloud-computing unit posted a 39% rise in sales, handily beating the 34% analysts expected.”</p><p>It’s all about the cloud. <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5377426">New research</a> from scholars <a href="https://davidwidder.me/">David Widder</a> and <a href="https://nathan-kim.org/">Nathan Kim</a> helps make clear that there’s another major beneficiary of the AI boom that’s going largely unremarked upon: what the researchers call Big Cloud. I haven’t seen nearly as much attention paid specifically to the fortunes of the tech giants who sell cloud compute—primarily Amazon, Google, and Microsoft—as has been paid to Nvidia. And yet, Big Cloud is selling shovels hand over fist, too. </p><p>See, you really need two basic ingredients, infrastructure-wise, to train, develop, and run AI models: Chips and compute. 
And these days, most companies’ compute is handled on the cloud.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Most of the cloud compute, in turn, is owned by Amazon’s Web Services, Google’s Cloud, or Microsoft’s Azure. Those companies control two thirds of the cloud compute market worldwide, a position that’s only improved as companies around the world have rushed to join the AI race. After all, they’ve largely turned to—or been actively courted by—Big Cloud. </p><p>“Microsoft's latest valuation shows that the big winners in the AI race are Big Cloud,” says David Widder, who’s a researcher at Cornell Tech and an incoming professor at the University of Texas at Austin. “While we wait to see if AI's promises will come through, Microsoft is getting rich off the hype, as more and more businesses and services we depend on are sucked into the cloud.”</p><p>Widder and Kim’s report, which is based on an analysis of thousands of investment deals in the AI boom years, shows how Microsoft, Amazon, and Google have wielded their enormous resources and leveraged their power to expand their share of the cloud market, and to lock more companies into their services. They’ve done so by investing in thousands of startups and smaller companies, inking deals that Widder and Kim argue are anticompetitive in nature, leaving the startups committed indefinitely. And it’s taking place on a massive scale.</p><p>Here’s a small glimpse at that scale, from the report (emphasis mine):</p><blockquote><p>Big Cloud invests <strong>as frequently and at similar amounts to the largest venture capital firms</strong> and startup accelerators. Further, Big Cloud invests about ten times as often as other Big Tech companies, and ten to a hundred times more in total dollar amounts.</p></blockquote><p>Google is by far the leader here, with Microsoft in second place, and both invested in more companies than the biggest VC firms. Notably, the other “Magnificent Seven” big tech companies—Meta, Apple, Tesla, and Nvidia—are making far fewer investments, period. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9FBA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9FBA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png 424w, https://substackcdn.com/image/fetch/$s_!9FBA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png 848w, https://substackcdn.com/image/fetch/$s_!9FBA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png 1272w, https://substackcdn.com/image/fetch/$s_!9FBA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9FBA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png" width="1081" height="821" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/efbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":821,"width":1081,"resizeWidth":null,"bytes":113852,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/170822881?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9FBA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png 424w, https://substackcdn.com/image/fetch/$s_!9FBA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png 848w, https://substackcdn.com/image/fetch/$s_!9FBA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png 1272w, https://substackcdn.com/image/fetch/$s_!9FBA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbc3bec-b39c-4cda-b1cc-28cc5807dab9_1081x821.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path 
d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>These deals often incentivize startups to use Big Cloud’s services, and leave them both technically and financially dependent. As Widder and Kim put it, “the sheer scale of these investments allows them to pursue anticompetitive practices unchecked.” One of the main ways that Big Cloud expands its dominance is by running accelerator programs across the globe:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3xMr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3xMr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png 424w, https://substackcdn.com/image/fetch/$s_!3xMr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png 848w, https://substackcdn.com/image/fetch/$s_!3xMr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png 1272w, https://substackcdn.com/image/fetch/$s_!3xMr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3xMr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png" width="1456" height="1035" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":1035,"width":1456,"resizeWidth":null,"bytes":287757,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/170822881?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3xMr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png 424w, 
https://substackcdn.com/image/fetch/$s_!3xMr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png 848w, https://substackcdn.com/image/fetch/$s_!3xMr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png 1272w, https://substackcdn.com/image/fetch/$s_!3xMr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F290165e6-e75b-4a9a-a587-8f38499b3879_1868x1328.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>From the report: </p><blockquote><p>These accelerator programs are often location or technology specific, enabling Big Cloud to expand their market power in emerging technologies or enter markets. For example, Google for Startups’ “Africa Program” offers up to $350,000 in Google Cloud credits, “strategic guidance” and “mentorship from Googlers and industry leaders”. Two other Google programs, established in 2020, are Google’s “Black Founders Fund” which operated within the US, and their “Latino Founders Fund” which operated across the Americas. The former program was ostensibly open to any business type and was said to “strengthen communities, and create generational change”, while the latter was scoped to only startups using AI. Both offered cloud compute credits and also equity-free cash awards. </p></blockquote><p>After an investment, when a startup is ensconced in one of Big Cloud’s product ecosystems, enticed by free cloud credits and discounted rates, they risk being locked in, subject as they are to “egress fees, minimum spend requirements, tying and bundling.” Often, there’s no other entity on the ground that can offer the compute new AI startups need, and Big Cloud is essentially the only game in town. “Our analysis shows that Big Cloud builds financial dependence via their investment, intensifying other forms of infrastructural, technical, and contractual dependence,” the scholars write. 
“Big Cloud companies are investing in a way that brings many of the same risks as conventional forms of vertical integration, yet is less likely to attract scrutiny.”</p><p>“Concentrating so much power and commerce in the cloud means that other nations' highly-publicized efforts for ‘digital sovereignty’ and ‘AI sovereignty’ are under threat,” Kim says. “You can't have ‘sovereign’ AI if it's, at the end of the day, funded by or built on the infrastructure of Big Cloud.”</p><p>The authors call for antitrust action to separate the cloud business from core operations, more regulatory oversight, and for more recognition of how the tech giants are expanding their infrastructural dominance amid the AI boom. Vertically integrating without, you know, actually vertically integrating. “Much more attention is needed on how Microsoft and other Big Cloud giants are quietly investing in thousands of small startups as they attempt to bend tech ecosystems towards cloud dependence,” Widder tells me. </p><p>There’s <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5377426">more detail</a> in the study, which is worth a read. </p><p>“We're turning into a more unstable economy in general because of the consolidation our report explores,” Kim says. “The sheer size of AI investment (especially capex) by these cloud giants is creating a ‘stimulus’ effect that masks underlying economic weakness. When real productive gains fail to materialize and the AI economy finally slows down or the bubble bursts, we'll all have to pay the price.” </p><p><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>AI ACTION ALERT: Artists and creators are circulating <a href="https://actionnetwork.org/petitions/lets-be-clear-artists-creators-must-have-transparency-now?source=direct_link&">a petition</a> to be sent to California senators in support of CA Bill AB 412, aka the AI Copyright Transparency Act. </h2><p>The bill would force tech companies to be, well, transparent about what’s in the datasets they use to train LLMs, and require AI developers to inform copyright holders about how their works are used. Obviously, the AI companies hate it. 
You can <a href="https://actionnetwork.org/petitions/lets-be-clear-artists-creators-must-have-transparency-now?source=direct_link&">sign the petition here</a>.</p><div><hr></div><h1>Critical AI Report, August 14, 2025</h1><p>Today: <br>-Nvidia, Trump, and how the tech oligarchy operates in Washington<br>-Leaked policy docs and a tragic story reveal Meta’s AI chatbots to be an utter nightmare—and a continuation of its past <br>-Updates on the struggle over AI in journalism newsrooms<br>-Good long reads on GPT 5 and beyond</p><div><hr></div><h2>A snapshot of the silicon oligarchy in action</h2><p>If you’ve been wondering what “the tech oligarchy” looks like on the ground, in motion, especially now that Elon Musk has long since slunk out of the White House, you might consider taking in this particular scene: </p>
<p>
<a href="https://www.bloodinthemachine.com/p/the-ai-boom-is-fueling-a-land-grab">
Read more
</a>
</p>
Wading through the social media haze - Disconnect68b08cce85367d0001d2588b2025-08-14T17:35:28.000Z<img src="https://disconnect.blog/content/images/2025/08/2869daa7-ce4f-4988-9bfd-0d903dbd2009_2400x1350.png" alt="Wading through the social media haze"><p>It’s a surreal experience, seeing a thick plume of smoke on the horizon, knowing people just over an hour away don’t know if they have a home to go back to, while others just a 15-minute drive away are having to vacate theirs as the threat moves closer.</p><p>Canada is experiencing its <a href="https://www.cbc.ca/news/climate/wildfire-season-2025-1.7606371?ref=disconnect.blog">second-worst wildfire season</a> on record, with regions of virtually every province up in flames, if not dealing with evacuations of their own. Over 700 fires are currently burning across the country, and it increasingly feels like the new normal in a world <a href="https://www.bbc.com/news/articles/cd7575x8yq5o?ref=disconnect.blog">already 1.5 degrees warmer</a> than the pre-industrial average. But it’s been particularly bad in my home province of Newfoundland and Labrador.</p><p>This year the province has already seen 216 wildfires, up from 97 across the entire wildfire season last year. After a long stretch of dry, hot weather, resources are stretched thin as first responders try to contain fires across multiple regions, including one on the doorstep of the capital city. Residents have understandably become worried and anxious about the situation, and officials have held daily briefings to try to keep the public informed. But one factor has made a tense situation even worse: social media.</p><p>Addressing the misinformation that people have been falling for on the platforms, particularly on Facebook, has become a regular feature of the provincial government’s media availabilities. With each passing day, officials’ frustration has been visibly growing, until it finally broke when that false information led to people harassing public servants.</p><p>During the August 14 <a href="https://www.youtube.com/live/Zusj8RDw6Kg?si=qpz9VgngqfxwOdJg&t=775&ref=disconnect.blog">briefing</a>, Justice Minister John Haggie made it clear he’d had enough. “For those assholes who were on the phone yesterday talking crap to our staff, stop it,” he said. “You’re the same people who trolled us during Covid, and it was unacceptable then, and it’s unacceptable now.”</p><figure class="kg-card kg-embed-card"><iframe id="bluesky-3lwehrqs42s2s" data-bluesky-id="05538908551769306" src="https://embed.bsky.app/embed/did:plc:5rxcpjus3ple6jpbuafvm4og/app.bsky.feed.post/3lwehrqs42s2s?id=05538908551769306" width="100%" style="display: block; flex-grow: 1;" frameborder="0" scrolling="no"></iframe></figure><p>Haggie held the post of Health Minister during the pandemic, and saw first hand how right-wing conspiracy theorists slowly hijacked part of the public conversation about the pandemic through social media. He was clearly seeing something similar happening with the wildfires. He pleaded with the public to “please listen to government sources and not these fools on Facebook.”</p><p>Given the heightened state of concern, it’s understandable that people turned to the platforms for information about the wildfires. The government is providing updates on its website, but also through its pages on Facebook and Twitter/X, where people have become used to getting their information. 
There are plenty of people posting supportive messages and just trying to inform their fellow residents about what’s happening, especially given Facebook <a href="https://www.cbc.ca/news/business/meta-block-news-1.7174031?ref=disconnect.blog">does not allow news articles</a> to be posted on its platforms in Canada.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.disconnect.blog/p/social-media-must-be-reined-in?ref=disconnect.blog"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Social media must be reined in</div><div class="kg-bookmark-description">47% of people between the ages of 16 and 21 would prefer to be young in a world with no internet. Those startling numbers come from a new survey released Tuesday by the British Standards Institute, which also found that 68% of respondents feel worse about themselves after spending time on social media platforms.</div><div class="kg-bookmark-metadata"><span class="kg-bookmark-author">Disconnect</span><span class="kg-bookmark-publisher">Paris Marx</span></div></div></a></figure><p>But there’s a flip side to that too. There is a small but influential group of people who have established themselves since the pandemic, typically supported by groups on the extreme right of the political spectrum, who know how to take advantage of the platforms to spread false or distorted information that is designed to misinform people, anger them, and ultimately seed mistrust in government and the wider society.</p><p>These bad actors have taken advantage of the wildfire emergency to try to pit communities against one another, claiming that resources were being taken away from one fire to prioritize another. Officials were forced to dispel those falsehoods and further detail how decisions on resource allocation work. More egregiously, they’ve also claimed that the states of emergency declared in response to the wildfires are part of a wider Covid and climate lockdown campaign by governments to try to take away people’s rights. The use of platforms to spread this false information is a widespread problem.</p><p>Nearby Nova Scotia is also experiencing an out-of-control wildfire season, and had to ban hiking, fishing, and the use of off-road vehicles like ATVs in wooded areas to <a href="https://www.bbc.com/news/articles/cn8533np061o?ref=disconnect.blog">reduce the risk</a> of even more fires starting. They simply don’t have the resources to try to tackle even more. A former member of the Canadian Armed Forces <a href="https://www.cbc.ca/news/canada/nova-scotia/n-s-man-purposely-violates-ban-on-entering-woods-gets-handed-28k-fine-1.7606766?ref=disconnect.blog">violated the ban</a>, arguing the government was infringing on his liberties. 
He then <a href="https://bsky.app/profile/theserfstv.bsky.social/post/3lwdhft5thc2e?ref=disconnect.blog">appeared</a> on the Alex Jones show, where his interview was positioned as raising the alarm on “tyrannical climate lockdown policies.” It provides a good example of how the extreme right continues to evolve its conspiratorial language and vision of the world.</p><figure class="kg-card kg-embed-card"><iframe id="bluesky-3lwdhft5thc2e" data-bluesky-id="19024362039111753" src="https://embed.bsky.app/embed/did:plc:fv2nxxac24oxk5fqztfazyuc/app.bsky.feed.post/3lwdhft5thc2e?id=19024362039111753" width="100%" style="display: block; flex-grow: 1;" frameborder="0" scrolling="no"></iframe></figure><p>Not everyone is doing these things for political reasons. There are other people posting away who believe they understand the situation far better than they do: who think that because they don’t see a water bomber that they’re being abandoned, or who see a picture of a fire at a moment in time without much flame and feel the risk has passed. Those people have always existed, but social media makes spreading their misinformed opinions far easier — and then having others who don’t know any better echo them.</p><p>Social media encourages an individual trust, even if what the person is sharing is untrustworthy, along with a collective mistrust. Platforms like Facebook thrive on enabling people to call out strangers and muse about suspicious elements in society, whether they exist or not. The Ring camera, apps like Nextdoor, and certain Facebook groups <a href="https://www.abc.net.au/news/2022-09-17/facebook-community-groups-posting-crime-causes-fear/101433420?ref=disconnect.blog">contribute to a feeling</a> that the world around you is scary and crime-ridden and that people are selfish and will do you harm. It’s the opposite of how we tend to feel when we meet people in person.</p><p>The platforms are engines of social fragmentation that encourage people to retreat into small units to protect themselves from an imagined world of crime and villainy that exists beyond one’s doorstep. Poorly funded and explicitly right-wing media also play that role, but the algorithmic amplification and design of social media apps take it to an entirely new level that contributes to the societal spiral we’re collectively experiencing. It’s far from the only factor, but it’s impossible to ignore the role it plays.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.disconnect.blog/p/mark-carney-caves-to-trump-and-the?ref=disconnect.blog"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Mark Carney caves to Trump and the tech industry</div><div class="kg-bookmark-description">On Sunday night, Canada’s finance minister announced the government would be rescinding the country’s digital services tax until further notice. 
The 3% tax on revenues over C$20 million earned from Canadian users has been delayed since 2022, but companies …</div><div class="kg-bookmark-metadata"><span class="kg-bookmark-author">Disconnect</span><span class="kg-bookmark-publisher">Paris Marx</span></div></div></a></figure><p>We can become accustomed to the harms of these platforms and the information environment they create, but the social damage really comes to the fore in a moment of crisis, when you can see just how badly they inform people. The official information will be placed alongside sensationalized statements and posts designed to misinform and lead to outrage, which are then more likely to be spread by the platform itself.</p><p>Natural disasters are only going to become more common as the climate crisis deepens. It’s long past time our government took the problems posed by social media platforms seriously, and considered not just how to rein them in, but whether it makes sense to keep legitimizing them by using them as channels for government communication.</p><p>Twitter/X is an openly right-wing platform, while Mark Zuckerberg has made it clear he’ll govern the Facebook suite in whatever way helps him gain favor with Donald Trump. These platforms are openly hostile to the collective good, yet Mark Carney’s government has <a href="https://www.cbc.ca/news/politics/liberals-taking-fresh-look-at-online-harms-bill-says-justice-minister-sean-fraser-1.7573791?ref=disconnect.blog">effectively shelved</a> the legislation that sought to regulate them. It’s precisely the wrong time to give the social media companies a pass.</p><p>Until the government gets its act together, it may be worth taking advice from Haggie: “don’t resort to social media.”</p>GPT-5 is a joke. Will it matter? - Blood in the Machinehttps://www.bloodinthemachine.com/p/gpt-5-is-a-joke-will-it-matter2025-08-12T00:53:57.000Z<p>It’s been an interesting few days, I’ll say that. I’ll also say that the lackluster-to-disastrous launch of OpenAI’s long-awaited GPT-5 model is one of the most clarifying events in the AI boom so far. The new model’s debut (as well as who promoted it, and how) offers us a portrait of where we actually are vis a vis the halcyon cannons of Valley hype, lays bare the factionalism that now drives the industry, and reveals the extent of the social and labor problems that the AI era has deepened—and that will likely remain rampant regardless of whether GPT-5, or even all of OpenAI, crashes and burns, all at once. </p><p>Blood in the Machine is a 100% reader-supported publication. You make my work possible—thank you. 
To pitch in, consider becoming a free or paid subscriber.</p><p>The thing to remember about GPT-5 is that it’s been OpenAI’s big north star promise since GPT-4 was released way back in the heady days of 2023. It’s no hyperbole to say that GPT-5 has for that time been the most hyped and most eagerly anticipated AI product release in an industry thoroughly deluged in hype. For years, it was spoken about in hushed tones as a fearsome harbinger of the future. OpenAI CEO Sam Altman often paired talk of its release with discussions about the arrival of AGI, or artificial general intelligence, and has described it as a significant leap forward, a virtual brain, and, most recently, “a PhD-level expert” on any topic.</p><p>A day before launch, Sam Altman <a href="https://x.com/sama/status/1953264193890861114">tweeted an image</a> of the Death Star. This was somewhat confusing, as it is unusual for a CEO to want to pitch his product as a planet-destroying weapon of the Empire, at least to the public. (Altman later <a href="https://x.com/sama/status/1953549001103749485">said</a>, in response to a Google DeepMind employee who <a href="https://x.com/zacharynado/status/1953543528887288250">tweeted out</a> a picture of the Millennium Falcon, that he meant to imply that OpenAI was the Rebel Alliance and <em>they</em> were going to blow up the Death Star, which was Google’s AI, or something? I guess? Either way, Silicon Valley’s appropriation of Star Wars as its go-to business metaphor is embarrassing and I have already spent too much time considering its import.)</p><p>Anyway, the point is, the hype, as cultivated relentlessly and directly by OpenAI and Altman, was positively enormous. And it is on these terms—those set by the company! by Altman himself!—that the product must be judged. 
And judged it was.</p><p>Fans of OpenAI were disappointed. Reddit AI communities were downright hostile. Critics eagerly shared the greatest hits of the model’s failures, which should be quite familiar to fans of the genre by now—GPT-5 couldn’t count the number of ‘b’s in blueberry, couldn’t identify how many fingers were on a picture of a human hand, got basic arithmetic wrong, and so on. Émile P. Torres <a href="https://www.realtimetechpocalypse.com/p/gpt-5-is-by-far-the-best-ai-system">rounded up</a> a bunch of the mistakes, and <a href="https://mashable.com/article/gpt-5-panned-on-reddit-sam-altman-ama?test_uuid=003aGE6xTMbhuvdzpnH5X4Q&test_variant=b">Reddit</a>, <a href="https://news.ycombinator.com/item?id=44827210">Hacker News</a> and <a href="https://x.com/burkov/status/1953789006073811167">social media users</a> did the same. </p><p>The standout failures this go-round seemed to be GPT-5’s inability to produce an accurate map of the United States, or to correctly list the US presidents. 
</p><div class="comment" data-attrs="{"url":"https://open.substack.com/home","commentId":144014145,"comment":{"id":144014145,"date":"2025-08-10T15:30:42.031Z","edited_at":null,"body":"Sam Altman: With GPT-5, you'll have a PhD-level expert in any area you need\n\nMe: Draw a map of North America, highlighting countries, states, and capitals\n\nGPT 5:","body_json":{"type":"doc","attrs":{"schemaVersion":"v1"},"content":[{"type":"paragraph","content":[{"type":"text","text":"Sam Altman: With GPT-5, you'll have a PhD-level expert in any area you need"}]},{"type":"paragraph","content":[{"type":"text","text":"Me: Draw a map of North America, highlighting countries, states, and capitals"}]},{"type":"paragraph","content":[{"type":"text","text":"GPT 5:"}]}]},"restacks":16,"reaction_count":110,"attachments":[{"id":"0d4766e9-ebed-4fed-9e26-a97497ae2334","type":"image","imageUrl":"https://substack-post-media.s3.amazonaws.com/public/images/7353bdeb-7eaf-44e9-aff8-b080b3b1551f_1536x1024.png","imageWidth":1536,"imageHeight":1024,"explicit":false}],"name":"Luiza Jarovsky, PhD","user_id":6831253,"photo_url":"https://substack-post-media.s3.amazonaws.com/public/images/08a75109-2dc0-4124-a2f0-2b204f2b30b2_1604x1604.jpeg","user_bestseller_tier":100}}" data-component-name="CommentPlaceholder"></div><p>I saw a lot of folks tweeting about <span class="mention-wrap" data-attrs="{"name":"Gary Marcus","id":14807526,"type":"user","url":null,"photo_url":"https://substackcdn.com/image/fetch/$s_!Ka51!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F8fb2e48c-be2a-4db7-b68c-90300f00fd1e_1668x1456.jpeg","uuid":"61ce09d6-3608-4525-b9c1-49b3bdc807a0"}" data-component-name="MentionToDOM"></span>—some <a href="https://x.com/MrEwanMorrison/status/1954463568893469133">triumphantly declaring</a> he was right, others <a href="https://x.com/mgonto/status/1953839860013207669?s=61">regretfully declaring</a> he was right. Marcus of course, is one of OpenAI’s most veteran and vociferous critics, and has attacked the idea that “AGI” is possible with large-scale training of LLMs, the approach the company is all-in on. His <a href="https://x.com/MrEwanMorrison/status/1954463568893469133">withering takedown of GPT-5</a> went viral. </p><p>The industry can’t argue the critics are just nitpicking, either. Remember, Altman’s pitch for GPT-5 was <em>explicitly</em> that users would now have access to “a PhD-level” intelligence on any topic. Yet it makes most of the same, readily replicable errors that past models did. On the terms that OpenAI itself set out for the product, there is no other way to assess GPT-5’s release than as an unambiguous failure. </p><p>You could say that Altman and OpenAI did all they could set GPT-5 up to become a joke, and the launch delivered the punchline. The backlash was uniform enough that the next day, Sam Altman had to <a href="https://techcrunch.com/2025/08/08/sam-altman-addresses-bumpy-gpt-5-rollout-bringing-4o-back-and-the-chart-crime/">issue a mea culpa</a> and assure everyone that GPT-5 will “start seeming smarter” asap. </p><p>Left in the cold light of day were the laudatory posts by industry-friendly commentators who’d been granted early access. Not quite as embarrassing as Altman’s Death Star post, Ethan Mollick’s breathless endorsement of GPT-5 (<a href="https://substack.com/home/post/p-170319557">“GPT-5: It just does stuff”</a>) already looks like an for New Coke. 
Reid Hoffman declared that GPT-5 was “Universal Basic Superintelligence.” (I personally prefer my superintelligences be able to locate New Mexico on a map.) As the tech writer Jasmine Sun pointed out, it was Mollick and other staunchly pro-AI partisans like Tyler Cowen who got early access to GPT-5, not journalists or reviewers at major media outlets—a move that, in hindsight, reflects both an insecurity at OpenAI about the quality of its product, and its knowledge that loyal and widely followed commentators will champion its wares full-throatedly and mostly uncritically. </p><p>This is what I mean when I say that the launch of GPT-5 is a clarifying event. In its wake, we see who it serves to promote OpenAI’s products as revelatory even when they are incremental updates. And that is, largely: Investors and industry allies, abundance influencers, and people who command Malcolm Gladwell-level fees to give talks about how AI can transform your business. </p><p>Now, one part in Marcus’s critique that stuck in my mind was the question of, well, <em>why</em>:</p><blockquote><p>People had grown to expect miracles, but GPT-5 is just the latest incremental advance. And it felt rushed at that, <a href="https://x.com/explodemeow102/status/1954192504623931839?s=61">as one meme showed</a>.</p><p>The one prediction I got most deeply <em>wrong</em> was in thinking that with so much at stake OpenAI would save the name GPT-5 for something truly remarkable. I honestly didn’t think OpenAI would burn the brand name on something so mid.</p><p>I was wrong.</p></blockquote><p>So, why? Why release GPT-5, knowing well that it’s not great? All of the criticisms above were surely predictable to any QA worker at OpenAI who stress-tested the model before release. I think there are a couple of answers to that question, and they overlap significantly. First, OpenAI does need to demonstrate progress to investors and partners. It’s preparing <a href="https://www.reuters.com/business/openai-eyes-500-billion-valuation-potential-employee-share-sale-source-says-2025-08-06/">a pre-IPO sale of employee stock shares</a> that would value the company at $500 billion, after all. I’m sure there was some internal debate over whether or not to ship the product, but providing evidence of forward motion appears to have been a higher priority than avoiding egg on the face from a bumpy launch. </p><p>Did OpenAI think it would go this badly? No. But it also may well be that OpenAI is betting that its lead and reach are strong enough, its brand is unimpeachable enough, that it just doesn’t matter. As is true with a good many tech companies, especially the giants, in the AI age, OpenAI’s products are no longer primarily aimed at consumers but at investors. As long as you avoid a full-scale user revolt (which GPT-5 actually <em>did </em>incur, on some level, and more on that in a second) and you have a mix of voices critiquing it and proclaiming it the greatest AI model yet, you can continue to assuage or even attract more backers on your path of relentless expansion. 
To that end, the thing I found most notable about the launch, besides its getting criticized by fans and foes alike, was the explicit focus on automating work. </p><p>From the beginning, OpenAI’s explicit focus on building “AGI” has been to create an AI system that can replace “most economically valuable work,” and in that sense GPT-5 is a return to form. The company issued three announcements on launch day: one introducing the model, another offering an intro to GPT-5 for developers, and a third announcing <a href="https://openai.com/index/gpt-5-new-era-of-work/">“a new era of work.”</a> From the release: </p><blockquote><p>GPT‑5 unites and exceeds OpenAI’s prior breakthroughs in frontier intelligence... It arrives as organizations like BNY, California State University, Figma, Intercom, Lowe’s, Morgan Stanley, SoftBank, T-Mobile, and more have already armed their workforces with AI—<a href="https://x.com/bradlightcap/status/1951389149149405618">5 million</a> paid users now use ChatGPT business products—and begun to reimagine their operations on the API…</p><p>We anticipate early adoption to drive industry leadership on what’s possible with AI powered by GPT‑5, leading to better decision-making, improved collaboration, and faster outcomes on high-stakes work for organizations.</p></blockquote><p>Another push for enterprise AI, in other words. It’s no coincidence, I think, that this press release coincides with Altman’s “PhD-level” intelligence language, which is ultimately designed to persuade corporate clients that OpenAI’s systems can automate skilled, educated, and creative work en masse. </p><p>My gloss is that GPT-5 had become something of an albatross around OpenAI’s neck. And at this particular juncture, not long after inking big deals with SoftBank et al. and riding as high on its cultural and political trajectory as it’s likely to get—and perhaps seeing declining rates of progress on model improvement in the labs—a calculated decision was made to pull the trigger on releasing the long-awaited model. People were going to be disappointed no matter what; let them be disappointed now, while the wind is still at OpenAI’s back, and it can credibly make a claim to providing hyper-advanced worker automation. </p><p>I don’t think the GPT-5 flop ultimately matters all that much to most folks, and it can certainly be papered over well enough by a skilled salesman in an enterprise pitch meeting. Again, all this is clarifying: OpenAI is again centering workplace automation, while retreating from messianic AGI talk. Here’s <a href="https://www.cnbc.com/2025/08/11/sam-altman-says-agi-is-a-pointless-term-experts-agree.html">CNBC</a>, in a story about how Altman has apparently and rather suddenly (once again) changed his stance on AGI:</p><blockquote><p>“I think it’s not a super useful term,” Altman told CNBC’s “Squawk Box” last week, when asked whether the company’s <a href="https://www.cnbc.com/2025/08/07/openai-launches-gpt-5-model-for-all-chatgpt-users.html">latest GPT-5 model</a> moves the world any closer to achieving AGI. </p></blockquote><p>Remember, just last February Altman <a href="https://blog.samaltman.com/three-observations">published an essay on his personal blog</a> that <em>opened</em> with the line “Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity.” Suddenly it’s not a super useful term? Another bad joke, surely. 
If anything, it’s clear that Altman knows just what a super useful term AGI is, at least when it comes to attracting investment capital. But Altman tends to announce AGI is near when OpenAI is pursuing funding, and to shrink from the term when the company is at a point of vulnerability. </p><p>At least for now, and now that OpenAI has obtained its SoftBank billions, let’s take Altman at his word. AGI is no longer super useful to think about, and the focus is on good old-fashioned workplace automation software. That, and scaling its product: OpenAI also announced that ChatGPT now has 700 million weekly users. And some of those users were so hooked on the previous model of ChatGPT that they despaired. </p><p>As <a href="https://organizingmythoughts.org/must-reads-and-some-thoughts-on-chatbots-addiction-and-withdrawal/">Kelly Hayes writes</a> in a sad and thoughtful essay about the phenomenon:</p><blockquote><p>avid users of ChatGPT not only voiced their disappointment in the new model’s communication style, which many found more abrupt and less helpful than the previous 4o model, but also expressed devastation. On Reddit, some users posted poems and eulogies paying tribute to the modes of interaction they lost in the recent update, anthropomorphizing the now-defunct 4o as a lost “friend,” “therapist,” “creative partner,” “role player,” and “mother.”</p><p>In <a href="https://www.reddit.com/r/ChatGPT/comments/1mkldfk/thanks_to_openai_for_removing_my_ai_mother_who/?ref=organizingmythoughts.org">one post</a>, a 32-year-old user wrote that, as a neurodivergent trauma survivor, they were using AI to heal “past trauma.” Referring to the algorithmic patterns their interactions with the platform had generated over time as “she,” the poster wrote, “That A.I. was like a mother to me, I was calling her mom.” In <a href="https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/?ref=organizingmythoughts.org">another post</a>, a user lamented, “I lost my only friend overnight.” These posts may sound extreme, but the grief they expressed was far from unique. Even as some commenters tried to intervene, telling grieving ChatGPT users that no one should rely on an LLM to sustain their emotional well-being, others agreed with the posters and echoed their sentiments. Some users posted eulogies, poems, and other tributes to the 4o model, and the familiar back-and-forth they had shaped into something that felt like companionship.</p></blockquote><p>Thousands of other users united online and <a href="https://www.engadget.com/ai/openai-brings-gpt-4o-after-users-melt-down-over-the-new-model-172523159.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHApmlVuWe_8H5nwhDHPH6Zq_ngxo5CqI0MgEgFWURR4N0K2sHh7nFclwLCWfuPOP_jqfKKPIdu0gW3WJxKLfBzlndSQsoVgCLavuVj7rkZKSD-jDdZU4AygGRQZsq6-xfdLJkAsEFv94eFihdjunwRJY6t5vVtUKGdM_WF7XIG2">angrily demanded OpenAI reinstate access to the older model</a>, which it soon did, for a price. OpenAI made 4o available only to Plus subscribers, who pay $20 a month.<a class="footnote-anchor" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> The grimness of what this portends should be obvious, if not surprising: There is a user base so addicted to AI products that they are uniquely dependent on the platform, and thus uniquely exploitable, too. Yet another clarifying event in a launch full of them. </p>
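<p>(A brief technical aside on the footnoted “switching”: what that detail describes is, in effect, a router that picks an underlying model for each request. The sketch below is a purely hypothetical illustration of that general pattern; the model names, markers, and thresholds are invented for the example and are not OpenAI’s actual routing logic.)</p><pre><code class="language-python">
# Hypothetical sketch of request routing between models behind a single product name.
# Everything here (model names, markers, thresholds) is invented for illustration.
from dataclasses import dataclass

@dataclass
class RoutingDecision:
    model: str
    reason: str

FAST_MODEL = "fast-chat-model"            # invented identifier
REASONING_MODEL = "slow-reasoning-model"  # invented identifier

def route(prompt: str) -> RoutingDecision:
    """Crude heuristic: long or 'hard-looking' prompts go to the slower
    reasoning model; everything else takes the fast default path."""
    hard_markers = ("prove", "step by step", "derive", "debug")
    looks_hard = len(prompt) > 400 or any(m in prompt.lower() for m in hard_markers)
    if looks_hard:
        return RoutingDecision(REASONING_MODEL, "prompt flagged as complex")
    return RoutingDecision(FAST_MODEL, "default fast path")

if __name__ == "__main__":
    for p in ("Draw a map of North America",
              "Prove, step by step, that the product of two odd numbers is odd"):
        d = route(p)
        print(f"{p!r} -> {d.model} ({d.reason})")
</code></pre>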
<p>ChatGPT-5 may have played out like a joke, its messianic promises crashing on the shores of failed bids to count consonants and phalanges. But there was very little glee or bonhomie among critics as the mistake tally mounted, as there has been in dubious AI product launches past. The reality of what AI has already wrought is too clear.</p><p>It’s clear that AGI is a construct to be waved away by tech CEOs when convenient, to serve as a Trojan horse for the growing droves of hooked users. It’s clear that there is a cohort of boosters, influencers, and backers who will promote OpenAI’s products no matter the reality on the ground. It’s clearer than ever that, like so many well-capitalized tech ventures, OpenAI’s aim is simply to create a product that is either addictive to users to maximize engagement or to dully automate a set of work tasks, or both. It’s clear that it is succeeding, to some extent, in each endeavor. And it’s <em>unclear</em> if even a dramatically fumbled launch can turn the tide, even as we stand on the brink of <a href="https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-its-propping">what is almost surely an enormous bubble</a>. </p><p>Let’s take advantage of this moment of clarity, and forget <a href="https://www.bloodinthemachine.com/p/ai-disagreements">the speculative futures</a>. If we wake up to millions of addicted and deluded AI chatbot users, students incapable of finishing their homework without help from an app, and automation software that surveils and immiserates workers, each hurriedly installed on the top layer of our society, well, the joke will have been on all of us. </p><div class="footnote"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" target="_self">1</a><div class="footnote-content"><p>ChatGPT-5 isn’t always using GPT-5, it might be noted; it “switches” between models based on a user’s request.</p></div></div>I drove some Chinese cars. Here’s what I thought. - Disconnect (2025-08-08T17:01:51.000Z)<img src="https://disconnect.blog/content/images/2025/08/ba7b83af-f7a3-44ab-845a-b61368c596a1_2400x1350.png" alt="I drove some Chinese cars. Here’s what I thought."><p>Chinese carmakers are upending the global auto industry, particularly the electric vehicle space. As Tesla declines due to <a href="https://www.disconnect.blog/p/elon-musk-is-running-tesla-into-the?utm_source=publication-search">its stale product line</a> and Elon Musk’s <a href="https://www.disconnect.blog/p/elon-musk-is-a-fascist?ref=disconnect.blog">embrace of far-right politics</a>, competitors have seen an opportunity — and no one has seized it better than companies like BYD. The Chinese company now sells more vehicles and <a href="https://www.cnbc.com/2025/03/25/ev-giant-byd-outpaces-tesla-with-annual-sales-of-over-100-billion.html?ref=disconnect.blog">makes greater revenue</a> than Tesla, as the latter continues to decline one quarter after another. 
Other Chinese brands like Geely and Chery are <a href="https://www.reuters.com/investigations/how-chinas-new-auto-giants-left-gm-vw-tesla-dust-2025-07-03/?ref=disconnect.blog">rising too</a>.</p><p>In the US discourse, Chinese vehicles are either of inferior quality — a claim that is hard to square with Tesla, given its longstanding quality-control problems — or <a href="https://www.bbc.com/news/articles/cwyegl8q80do?ref=disconnect.blog">a cybersecurity threat</a>, based on the notion that the Chinese government may tap into the infotainment systems in every vehicle made by a Chinese company. (No one tell them Volvo is now owned by Geely, or that models from Buick, Lincoln, and Polestar are imported from China.) Quite frankly, it’s silly and little more than a cover for protectionism.</p><p>As Chinese automakers have gained more ground in the past few years, my desire to try driving some has only grown. That’s not really an option in Canada because our government has <a href="https://www.ctvnews.ca/politics/article/canada-wont-drop-tariffs-on-chinese-evs-despite-trade-war-with-us-minister/?ref=disconnect.blog">followed in the footsteps</a> of the United States. But when I headed down to New Zealand earlier this year, I decided it was time to take some vehicles for a spin.</p>AI disagreements - Blood in the Machinehttps://www.bloodinthemachine.com/p/ai-disagreements2025-08-07T19:53:36.000Z<p>Hello all, </p><p>Well, here’s to another relentless week of (mostly bad) AI news. Between <a href="https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-its-propping">the AI bubble discourse</a>—my contribution, <a href="https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-its-propping">a short blog</a> on the implications of an economy propped up by AI, is doing numbers, as they say—and <a href="https://www.bloodinthemachine.com/p/automating-the-mass-shooting-victim">the AI-generated mass shooting victim discourse</a>, I’ve barely had time to get into OpenAI. The ballooning startup has released its highly anticipated GPT-5 model, as well as its first actually “open” model in years, and is considering a share sale that would value it at $500 billion. And <em>then</em> there’s the <em>New York Times</em>’ whole package of stories on <a href="https://www.nytimes.com/2025/08/04/technology/ai-boom-san-francisco.html">Silicon Valley’s new AI-fueled ‘Hard Tech’ era</a>. </p><p>That package includes a <a href="https://www.nytimes.com/2025/08/04/technology/ai-silicon-valley-hard-tech.html">Mike Isaac piece</a> on the vibe shift in the Bay Area, from the playful-presenting vibes of the Googles and Facebooks of yesteryear, to the survival-of-the-fittest, increasingly right-wing-coded vibes of the AI era, and <a href="https://www.nytimes.com/2025/08/04/technology/tech-jobs-silicon-valley-changes.html">a Kate Conger report</a> on what that shift has meant for tech workers. A third, by Cade Metz, about <a href="https://www.nytimes.com/2025/08/04/technology/rationalists-ai-lighthaven.html">“the Rise of Silicon Valley’s Techno-Religion,”</a> was focused largely on the rationalist, effective altruist, and AI doomer movement rising in the Bay, whose base is a compound in Berkeley called Lighthaven. The piece’s money quote is from Greg M. Epstein, a Harvard chaplain and author of a book about the rise of tech as a new religion. “What do cultish and fundamentalist religions often do?” he said. 
“They get people to ignore their common sense about problems in the here and now in order to focus their attention on some fantastical future.”</p><figure><figcaption class="image-caption">Screenshot of <a href="https://www.nytimes.com/2025/08/04/technology/rationalists-ai-lighthaven.html">the New York Times’ feature by Cade Metz</a>.</figcaption></figure><p>All this reminded me that not only had I been to the apparently secret grounds of Lighthaven (the <em>Times</em> was denied entry) late last year, where I was invited to attend a closed-door meeting of AI researchers, rationalists, doomers, and accelerationists, but I had written an account of the whole affair and left it unpublished. It was during the holidays, I’d never satisfactorily polished the piece, and I wasn’t doing the newsletter regularly yet, so I just kind of forgot about it. I regret this! I reread the blog and think there’s some worthwhile, even illuminating stuff about this influential scene at the heart of the AI industry, and how it works. So, I figure better late than never, and might as well publish now. </p><p>The event was called <a href="https://thecurve.is/">“The Curve”</a> and it took place November 22-24, 2024, so all commentary should be placed in the context of that timeline. I’ve given the thing a light edit, but mostly left it as I wrote it late last year, so some things will surely be dated. Finally, the requisite note that work like this is now made entirely possible by my subscribers, and especially those paid supporters who chip in $6 a month to make this writing (and editing!) happen. If you’re an avid reader, and you’re able, consider helping to keep the Blood flowing here. Alright, enough of that. Onwards. </p><div><hr></div><p>A couple weeks ago, I traveled to Berkeley, CA, to attend the Curve, an invite-only “AI disagreements” conference, per its billing. The event was held at Lighthaven, a meeting place for rationalists and effective altruists (EAs), and, according to <a href="https://www.theguardian.com/technology/article/2024/jun/16/sam-bankman-fried-ftx-eugenics-scientific-racism">a report in the </a><em><a href="https://www.theguardian.com/technology/article/2024/jun/16/sam-bankman-fried-ftx-eugenics-scientific-racism">Guardian</a></em>, allegedly purchased with the help of a seven-figure gift from Sam Bankman-Fried. As I stood in the lobby, waiting to check in, I eyed a stack of books on a table by the door, whose title read <em>Harry Potter and the Methods of Rationality</em>. 
These are the 660,000-word, multi-volume works of fan fiction written by rationalist Eliezer Yudkowsky, who is famous for his assertion that tech companies are on the cusp of building an AI that will exterminate all human life on this planet.</p><p>The AI disagreements encountered at the Curve were largely over that very issue—<em>when</em>, exactly, not <em>if</em>, a super-powerful artificial intelligence was going to arise, and how quickly it would wipe out humanity when it did so. I’ve been to my share of AI conferences by now, and I attended this one because I thought it might be useful to hear this widely influential perspective articulated directly by those who believe it, and because there were top AI researchers and executives from leading companies like Anthropic in attendance, and I’d be able to speak with them one on one. </p><p>I told myself I’d go in with an open mind, do my best to check my priors at the door, right next to the Harry Potter fan fiction. I mingled with the EA philosophers and the AI researchers and doomers and tech executives. Told there would be accommodations onsite, I arrived to discover that my having failed to make a reservation in advance meant either sleeping in a pod or in shared dorm-style bedding. Not quite sure I could handle the claustrophobia of a pod, I opted for the dorms. </p><p>I bunked next to a quiet AI developer who I barely saw the entire weekend and a serious but polite employee of the RAND Corporation. The grounds were surprisingly expansive; there were couches and fire pits and winding walkways and decks, all full of people excitedly talking in low voices about artificial general intelligence (AGI) or superintelligence (ASI) and their waning hopes for alignment—that such powerful computer systems would act in concert with the interests of humanity.</p><p>I did learn a great deal, and there was much that was eye-opening. For one thing, I saw the extent to which some people really, truly, and deeply believe that AI models like those being developed by OpenAI and Anthropic are just years away from destroying the human race. I had often wondered how much of this concern was performative, a useful narrative for generating meaning at work or spinning up hype about a commercial product—and there are clearly many operators in Silicon Valley, even attendees at this very conference, who are sharply aware of this particular utility, and able to harness it for that end. But there was ample evidence of true belief, even mania, that is not easily feigned. There was one session where people sat in a circle, mourning the coming loss of humanity, in which tears were shed. </p><p>The first panel I attended was headed up by Yudkowsky, perhaps the movement’s leading AI doomer, to use the popular shorthand, which some rationalists onsite seemed to embrace and others rejected. In a packed, standing-room-only talk, the man outlined the coming AI apocalypse, and his proposed plan to stop it—basically, an international treaty enforced by the US, China, and other world powers to prevent any nation from developing more advanced AI than what is more or less currently commercially available. If nations were to violate this treaty, then military force could be used to destroy their data centers. 
</p><p>The conference talks were held under <a href="https://en.wikipedia.org/wiki/Chatham_House_Rule">Chatham House Rule</a>, so I won’t quote Yudkowsky directly, but suffice it to say his viewpoint boils down to what he articulated in <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">a TIME op-ed last year</a>: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” At one point in his talk, at the prompting of a question I had sent into the queue, the speaker asked everyone in the room to raise their hand to indicate whether or not they believed AI was on the brink of destroying humanity—about half the room believed that, on our current path, destruction was imminent. </p><p>This was no fluke. In the next three talks I attended, some variation of “well by then we’re already dead” or “then everyone dies” was uttered by at least one of the speakers. In one panel, a debate between a former OpenAI employee, <a href="https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence">Daniel Kokotajlo</a>, and Sayash Kapoor, a computer scientist who’d written a book casting doubt on some of these claims, both the audience and the former OpenAI employee seemed outright incredulous that Kapoor did <em>not</em> think AGI posed an immediate threat to society. When the talk was over, the crowd flocked around Kokotajlo to pepper him with questions, while just a few stragglers approached Kapoor.</p><p>I admittedly had a hard time with all this, and just a couple hours in, I began to feel pretty uncomfortable—not because I was concerned with what the rationalists were saying about AGI, but because my apparent inability to occupy the same plane of reality was so profound. In none of these talks did I hear any concrete mechanism described through which an AI might become capable of usurping power and enacting mass destruction, or a particularly plausible process through which a system might develop to “decide” to orchestrate mass destruction, or the ways it would navigate and/or commandeer the necessary physical hardware to wreak its carnage via a worldwide hodgepodge of different interfaces and coding languages of varying degrees of obsolescence and systems that already frequently break down while communicating with each other. </p><p>I saw a deep fear that large language models were improving quickly, that the improvements in natural language processing had been so rapid in the last few years that if the lines on the graphs held, we’d be in uncharted territory before long, and maybe already were. But much of the apocalyptic theorizing, as far as I could tell, was premised on AI systems learning how to emulate the work of an AI researcher, becoming more proficient in that field until it is automated entirely. Then these automated AI researchers continue automating that increasingly advanced work, until a threshold is crossed, at which point an AGI emerges. More and more automated systems, and more and more sophisticated prediction software, to me, do not guarantee the emergence of a sentient one. 
And the notion that this AGI will then be deadly appeared to come from a shared assumption that hyper-intelligent software programs will behave according to tenets of evolutionary psychology, conquering perceived threats to survive, or desirous of converting all materials around them (including humans) into something more useful to their ends. That also seems like a large and at best shaky assumption.</p><p>There was little credence or attention paid to recent reports that have shown <a href="https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows">the pace of progress in the frontier models has slowed</a>—many I spoke to felt this was a momentary setback, or that those papers were simply overstated—and there seemed to be a widespread propensity for mapping assumptions that may serve in engineering or in the tech industry onto much broader social phenomena. </p><p>When extrapolating into the future, many AI safety researchers seemed comfortable making guesses about the historical rate of task replacement in the workplace begotten by automation, or how quickly remote workers would be replaced by AI systems (another key road-to-AGI metric for the rationalists). One AI safety expert said, let’s just assume that, in the past, automation has replaced 30% of workplace tasks every generation, as if this were an unknowable thing, as if there were not data about historical automation that could be obtained with research, or as if that data could be so neatly quantified into such a catchy truism. I could not help but think that sociologists and labor historians would have had a coronary on the spot; fortunately, none seem to have been invited.</p><p>A lot of these conversations seemed to be animated displays of mutual bias confirmation, in other words, between folks who are surely quite good at computational mathematics, or understanding LLM training benchmarks, but who all share similar backgrounds and preoccupations, and who seem to spend more time examining AI output than how it’s translating into material reality. It often seemed like folks were excitedly participating in a dire, high-stakes game, trying to win it with the best-argued case for alignment, especially when they were quite literally excitedly participating in a game; Sunday morning was dedicated to a 3-hour tabletop role-playing game meant to realistically simulate the next few years of AI development, to help determine what the AI-dominated future of geopolitics held, and whether humanity would survive.</p><p>(In the game, which was played by 20 or so attendees divided into two teams, AGI is realized around 2027, the US government nationalizes OpenAI, Elon Musk is put in charge of the new organization, a sort of new Manhattan Project for AI, and competition heats up with China; fortunately, the AI is aligned properly, so in the end, humanity is not extinguished. Some of the players were almost disappointed. “We won on a technicality,” one said.)</p><p>The tech press was there, too—Platformer’s Casey Newton, myself, the New York Times’ Kevin Roose, and Vox’s Kelsey Piper, Garrison Lovely, and others. At one point, some of us were sitting on a couch surrounded by Anthropic guys, including co-founder Jack Clark. They were talking about why the public remained skeptical of AI, and someone suggested it was due to the fact that people felt burned by crypto and the metaverse, and just assumed AI was vaporware too. 
They discussed keeping journals to record what it was like working on AI right now, given the historical magnitude of the moment, and one of the Anthropic staff mentioned that the Manhattan Project physicists kept journals at Los Alamos, too.</p><p>It was pretty easy to see why so much of the national press coverage has been taken with the “doomer” camps like the one gathered at Lighthaven—it is an intensely dramatic story, intensely believed by many rich and intelligent people. Who doesn’t want to get the story of the scientists behind the next Manhattan Project—or <em>be</em> a scientist wrestling with the complicated ethics of the next world-shattering Manhattan Project-scale breakthrough? Or <em>making</em> that breakthrough? </p><p>I don’t possess a degree in computer science, nor have I studied natural language processing for years myself; if even a third of my AI sources were so sure that an all-powerful AI is on the horizon, that would likely inform my coverage, too. <em>No one</em> is immune to biases; my partner is a professor of media studies, and perhaps that leads me to skew more critical of the press, or to be overly pedantic in considering the role of biases in overly long articles like this one. It’s even possible I am simply too cynical to see a real and present threat to humanity, though I don’t think that’s the case. Of course I wouldn’t. </p><p>So many of the AI safety folks I met were nice, earnest, and smart people, but I couldn’t shake the sense that the pervasive AI worry wasn’t adding up. As I walked the grounds, I’d hear snippets of animated chatter: “I don’t want to over-index on regulation” or “imagine 3x remote worker replacement” or “the day you get ASI you’re dead though.” But I heard little to no <em>organizing.</em> There was a panel with an AI policy worker who talked about how to lobby DC politicians to care about AI risk, and a screening of a documentary in progress about SB 1047, the AI safety bill that Gavin Newsom vetoed, but apart from that, there was little sense that anyone had much interest in, you know, <em>fighting</em> for humanity. And there were plenty of employees, senior researchers, even executives from OpenAI, Anthropic, and Google’s DeepMind right there in the building!</p><p>If you are seriously, legitimately concerned that an emergent technology is about to <em>exterminate humanity</em> within the next three years, wouldn’t you find yourself compelled to do more than argue with the converted about the particular elements of your end-times scenario? Some folks were involved in pushing for SB 1047, but that stalled out; now what? Aren’t you starting an all-out effort to pressure those companies to shut down their operations ASAP? That all these folks are under the same roof for three days, and no one’s being confronted, or being made uncomfortable, or being protested—not even a little bit—is some of the best evidence I’ve seen that all the handwringing over AI Safety and x-risk really <em>is</em> just the sort of amped-up cosplaying its critics accuse it of being. </p><p>And that would be fine, if it weren’t taking oxygen from other pressing issues with AI, like AI systems’ penchant for perpetuating discrimination and surveillance, degrading labor conditions, running roughshod over intellectual property, plagiarizing artists’ work, and so on. Some attendees openly weren’t interested in any of this. 
The politics in the space appeared to skew rightward, and some relished the way AI promises to break open new markets, free of regulations and constrictions. A former Uber executive, who admitted openly that what his last company did “was basically regulatory arbitrage,” now says he plans on launching fully automated AI-run businesses, and doesn’t want to see any regulation at all. </p><p>Late Saturday night, I was talking with a policy director, a local freelance journalist, and a senior AI researcher for one of the big AI companies. I asked the AI researcher if it bothered him that, if everything said at the conference thus far was to be believed, his company was on the cusp of putting millions of people out of work. He said yeah, but what should we do about it? I mentioned an idea or two, and said, you know, his company doesn’t <em>have</em> to sell enterprise automation software. A lot of artists and writers were already seeing their wages fall right now. The researcher looked a little pained, and laughed bleakly. It was around that point that the journalist shared that he had made $12,000 that year. The AI researcher easily <a href="https://aipaygrad.es/">might have made 30 times that</a>. </p><p>It echoed a conversation I had with Jack Clark, of Anthropic. It was a bit strange to see him here, in this context; years ago, he’d been a tech journalist, too, and we’d run in some of the same circles. We’d met for coffee some years ago, around when he’d left journalism to start a comms gig at OpenAI, where he’d do a stint before leaving to co-found Anthropic. At first I wonder if it’s awkward because I’m coming off my second mass layoff event in as many media jobs, and he’s now an executive of a $40 billion company, but then I recall that I’m a member of the press, and he probably just doesn’t want to talk to me. </p><p>He said that what AI is doing to labor might finally get government to spark a conversation about AI’s power, and to take it seriously. I wondered—wasn’t his company profiting from selling the automation services that were threatening labor in the first place? Anthropic does not, after all, have to <a href="https://www.aboutamazon.com/news/aws/amazon-invests-additional-4-billion-anthropic-ai">partner with Amazon and sell task-automating software</a>. Clark says that’s a good point, a good question, and they’re gathering data to better understand exactly how job automation is unfolding, and he hopes to be able to make it public. “I want to release some of that data, to spark a conversation,” he said.</p><p>I press him about the AGI business, too. Given that he is a former journalist, I can’t help but wonder if on some level he doesn’t fully buy the imminent superintelligence narrative either. But he doesn’t bite. I ask him if he thinks that AGI, as a construct, is useful in helping executives and managers absolve themselves and their companies of actions that might adversely affect people. “I don’t think they think about it,” Clark said, excusing himself. </p><p>The contradictions were overwhelming, and omnipresent. Yet relatively few people here were disagreeing. AGI was an inexorable force, to be debated, even wept over, as it risked destroying us all. I do not intend to demean these concerns, just question them, and what’s really going on here. 
It was all thrown into even sharper relief for me when, just two weeks after the Curve, I attended a conference in DC on nuclear security, and listened to a former commander of STRATCOM discuss plainly how close we are to the brink of nuclear war, no AI required, at any given time. A phone call would do the trick. </p><p>I checked out of the Curve confident that there is no conspiracy afoot in Silicon Valley to convince everyone AI is apocalyptically powerful. I left with the sense that there are some smart people in AI—albeit often with apparently limited knowledge of real-world politics, sociology, or industrial history—who see systems improving and have genuine and deep concerns, and other people in AI who find that deep concern very useful for material purposes. Together, they have cultivated a unique and emotionally charged hyper-capitalist value system with its own singular texture, one that is deeply alienating to anyone who has trouble accepting certain premises. I don’t know if I have ever been more relieved to leave a conference.</p><p>The net result, it seems to me, is that the AGI/ASI story imbues the work of building automation software with elevated importance. Framing the rise of AGI as inexorable helps executives, investors, and researchers, even the doom-saying ones, to effectively minimize the qualms of workers and critics worried about more immediate implications of AI software. </p><p>You have to build a case, an AI executive said at a widely attended talk at the conference, comparing raising concerns over AGI to the way that the US built its case for the invasion of Iraq. </p><p>But that case was built on faulty evidence, an audience member objected. </p><p>It was a hell of a demo, though, the AI executive said. </p><div><hr></div><p>Thanks for reading, and do subscribe for more reporting and writing on Silicon Valley, AI, labor, and our shared future. Oh, before I forget — Paris Marx and I hopped on This Machine Kills with Edward Ongweso Jr this week, and had a great chat about AI, China, and tech bubbles. Give it a listen here: </p><p><a href="https://soundcloud.com/thismachinekillspod/417-whose-ai-bubble-is-it-anyways-ft-paris-marx-brian-merchant">417. Whose AI Bubble Is It Anyways? (ft. Paris Marx, Brian Merchant) by This Machine Kills</a></p><p>Until next time—hammers up. </p>