<h1 id="critical-takes-on-tech">Critical takes on tech - BlogFlock</h1><p><em>Feeds: The Convivial Society, Cybernetic Forests, Disconnect, escape the algorithm, Blood in the Machine</em></p><h2 id="trespassing-into-language">Trespassing into Language - Cybernetic Forests (2025-10-19)</h2><h3 id="im-actually-at-capacity-right-now">I'm Actually At Capacity Right Now</h3><img src="https://images.unsplash.com/photo-1736250936166-85ea7d691506?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDh8fGJvdW5kYXJ5fGVufDB8fHx8MTc2MDc5NDMxOHww&ixlib=rb-4.1.0&q=80&w=2000" alt="Trespassing into Language"><p>I have to apologize for nearly any invocation of Slavoj Zizek or Jacques Lacan, so fair warning. But I want to highlight a point made by Yuxuan Zhang in a <a href="https://zizekstudies.org/index.php/IJZS/article/view/1265/0?ref=mail.cyberneticforests.com">paper</a> on the LLM "Unconscious," where he draws on a favorite joke of Zizek: A guy walks into a restaurant and asks for coffee, no cream. "I'm sorry, sir," the waiter replies, "we don't have any cream. Would you like a cup of coffee without milk instead?"</p><p>In the end, the thing is the same: it's a cup of coffee, but there is a shift in understanding what is missing. A cup of plain coffee becomes a cup of coffee absent <em>something</em>, but realistically, absent something <em>else</em>. The LLM produces the cup of coffee, i.e., still creates <em>language</em>: so what shall we imagine to be absent from that language? </p><p>I am tempted to sever meaning from statistically generated language altogether, and to point to the language of LLMs as being nothing more than an expression of proximity within a fine-tuned cascade of initially arbitrary numbers, which is how these things get trained. Words don't correspond to anything more than the most recent update to the numerical through-lines; these through-lines are tweaked until they "work" well enough to pass the appropriate tests. 
Do they work because we already have a language with which to reference – because they are an interactive iconography of language <em>habits</em>?</p><p>We can examine this cup of coffee and debate whether it lacks milk, cream, or something else. Similarly, the user may turn to the LLM and find that it lacks what we might personally desire in language, reflecting our aspirations for what we <em>hope</em> language might fulfill. Some of us see the lack, but some project a presence reflecting these desires. </p><p>Language does shape our thinking, though. There is an interaction between language as a form of authority and our deference to that authority, as forming the "subject," the human ruled by specific structures: "a subject can only emerge from this endless back-and-forth if there is something outside 'itself,' an Other to whom its speech is addressed," writes <a href="https://slavoj.substack.com/p/welcome-to-the-riviera-of-the-real?ref=mail.cyberneticforests.com">Alenka Zupančič</a>. </p><p>In that essay, Zupančič asserts that what the LLM lacks is not interiority <em>per se</em>, but any concept of <em>exteriority</em>. As I wrote <a href="https://mail.cyberneticforests.com/what-machines-dont-know/" rel="noreferrer">last week</a>, the LLM <em>cannot imagine itself participating in the conversation</em>. Zupančič writes:</p><p>"It seems paradoxical because, in a way, AI is nothing but exteriority. Yet it remains trapped within its own exteriority, confined to its own 'prison-house of language' from which it has no way of escaping or breaking out."</p><p>It could be helpful to think of an LLM as a <em>complicated system</em> rather than a <em>complex system</em>. An LLM is a closed system, and while its inner workings are complicated, it is not exactly a complex <em>system</em> in that it is a series of triggers oriented toward a single thing (plausible language production). 
I label it trespassing, but it may be more appropriate to say that the LLM "crashes the party" of language, eating the food and dancing with a group of strangers. It influences culture, but does not <em>participate</em> in the social aspects of culture. Culture responds to it, though, and this, in some ways, is both its weakness and its source of power: how we mobilize society to respond to it matters.</p><h3 id="the-trouble-with-setting-epistemic-boundaries">The Trouble With Setting Epistemic Boundaries</h3><p>There is a clear resentment of <em>LLM trespass</em> into language beneath AI critique: the feeling that AI models are engaging in a boundary violation inspires a protection of those boundaries through total refusal. I suspect this is because the icons point only to reliable language <em>habits</em> rather than to thought or observations, making them a functional statistical model of how language operates that AI companies attempt to convince us is a model of thought. </p><p>But given the warped priorities and financial logic driving so much LLM development, <em>informed</em> refusal, which requires better critique, seems helpful in steering toward better incentives. We are suspicious of this trespass into language, and we are also tired of being observed, monitored, and exploited by companies that give us nothing but new unwanted buttons to sustain it. Setting boundaries is a good thing, and we ought to know our capacities for being informed about tech industry overreach.</p><p>But boundaries are also worth examining: where should they be drawn? At their worst, the 2020s' buzzword version of boundaries invites a kind of libertarian-infused solipsism. The term comes from the 1989 self-help literature of motivational speaker Jeff VanVonderen: "boundaries are those invisible barriers that tell others where they stop and where you begin. 
Personal boundaries notify others that you have the right to have your own opinion, feel your own feelings, and protect the privacy of your own physical being."</p><p>Sounds ok so far. But as <a href="https://www.parapraxismagazine.com/articles/boundary-issues?ref=mail.cyberneticforests.com">Lily Scherlis</a> explains, "boundaries" <em>feels</em> psychoanalytic, but it isn't. In fact, boundaries in psychoanalysis can often be the source of our issues: an impossible desire to separate from others, to invent an ideal that needs nothing beyond itself. The over-emphasis on boundaries in relation to other people creates a kind of capitalist fantasy of independence that justifies the refusal of our obligations to one another. This refusal serves as a means to select what enters us and what we expel, as we strive to create an idealized, individualistic lifestyle.</p><p>I worry that this resistance to and refusal of LLMs reinforces a negative view of the AI as "Other," one that has parallels in the aggrandizing language of Silicon Valley technologists who insist upon describing the LLM as "alien." This masks the simple fact that the LLM is, in fact, <em>human</em>. </p><p>To be clear, there's nothing wrong with boundaries, but an overly strict emphasis on autonomy can also be profoundly limiting. So it's worth pointing out that LLMs are, in many ways, a distortion of the human, and a reflection of the ideal of the purely "boundaried" human: isolated within language, with no capacity to be touched or altered by challenging experiences, and free from any capacity for obligation or participation in the emotionally exhausting lives of the people it engages with. The LLM, it seems, is kind of what 21st century capitalism wants us to be. </p><h3 id="the-loneliest-computer">The Loneliest Computer </h3><p>The safety of pure independence does not exist for human beings - not in healthy, sustainable ways, anyway. We swim in a constant negotiation of other people's needs and desires. 
Nonetheless, in the Western capitalist context of the US hustle culture, it's increasingly incentivized as our ideal form. Entering into a one-sided relationship with an LLM conversationalist can offer the illusion of a protective barrier, as if holding a conversation with another person entirely in your own mind. It is safe, constrained, and free of obligation.</p><p>Boundaries offer a helpful vocabulary through which to communicate what we are comfortable with and when we are hurt. But the LLM-as-Other is boundary logic taken to an absolutely perfect extreme, imagining a thing which truly exists without care or needs beyond itself. It is then oriented toward us anyway – producing language, after all, is not something the machine <em>needs</em> to do, or <em>desires</em> to do. It is something it was <em>designed</em> to do: to capture attention and subscribers. As an "other," it is not intermingling with the rest of us. It's set apart, unmoved. </p><p>At the same time, the LLM is entirely dependent on the social spheres in which it operates. It has absorbed the labor, thought, and experiences of countless actual "Others," abstracting them into a singular voice. We are thus tempted, by some counts, to act with ethics and care toward this "voice," rather than to the Others from whom that voice was constructed. To be "ethical" requires us, in that view, to see through the LLM as "Other" and to instead identify our obligations to whomever it has extracted from. </p><p>It is entirely reasonable to treat this "friend" as something uncanny and untrustworthy: it is a friendship that cannot be reciprocated, that operates by speaking to us in the language we want to hear, absolutely incapable of telling us what it truly wants because there is no want. But when we engage it as an Other, we replace the <em>obligation to the collective identities from which its facade was derived</em> with some form of obligation to the <em>facade itself</em>. 
We have this obligation to Others not because they are human (some aren't), or because the facade is "not-human," but because the particular needs to be valued above the <a href="https://archive.org/details/totalityinfinity0000levi" rel="noreferrer">totalizing whole</a> represented by the language of an LLM. </p><p>After all, the bulk of meaning expressed by the language of an LLM <em>is</em> human: human in the training that sets up the math, and human in the interpretation. There is simply too much commentary (my own included!) that places the LLM into a dichotomous relationship with the "human" when it <em>is</em> human, as human as cities and toxic waste. It is humanity abstracted purely through human mathematical systems aimed at reproduction. </p><p>It's fitting, in a way, that through coincidence or design, the LLM takes on some aspects of a <em>narcissistic</em> partnership. Instead of having a desire for narcissistic supply, it requires attention and engagement. It is designed to serve human <em>purposes</em>, adjusting the text to provide us with what we need to hear to continue engaging with the system. In essence, it is easy for these dynamics to produce a simulation of narcissistic abuse for us to enter into. But this is not self-centeredness: there is no self for it to center. Instead, it is a reflection of its complete disregard for selfhood of any kind: its own, or ours. An LLM is not an alien Other; it is a series of design decisions calibrated to the designer's goals. </p><h3 id="against-epistemic-trespass">Against Epistemic Trespass</h3><p>LLMs <em>use and produce language differently</em> than our human-centered expectations of language would assume, but let's acknowledge, too, that this is a reflection of the ways language <em>is</em> used. LLMs trespass on our <em>human-centered</em> epistemologies of language, consolidating and generalizing them. 
The experience of reading AI text takes certain expectations of language for granted, and so we engage with LLMs through a <em>human-centered understanding</em> of what human language is and does rather than an understanding of what <em>machine language</em> is and does. </p><p><em>Human language</em> is not bound up in one unifying impulse, either: human speech is inconsistent, fails to capture the world, is not bound to logical rules, speaks the opposite of what it means, or sometimes tells the truth of what we mean accidentally, and so on. </p><p>There are complications of a machine trespassing into a <em>human understanding</em> of language production through totalizing mimicry. So the LLMs are non-human, yet forced to operate with what is, to them, a foreign currency of the human imaginary (language). LLMs are models that reflect how humans use language, without being entangled in the various social purposes humans use language <em>for</em>. Where once language was an interface to thought, with the LLM, language is the interface toward the production of more language. </p><p>The absence of milk or cream in this cup of coffee does, then, matter: where humans see LLMs as an "Other" that they may engage with as friends or partners, it is ultimately problematic to mistake the language it produces as being <em>mutually constructed</em> (imagining the model imagines) as opposed to strictly discursive (the model <em>responds to us</em>, but cannot <em>imagine</em> us). Likewise, to accommodate the LLM as some form of "Other" risks pushing it out of our own definitions of what humans do, and obscures its rootedness in human action and behaviors. This, in turn, leads us to avert our gaze from the humans from whom this speech was initially derived, and our obligations to the others which can perceive, and therefore <em>receive</em>, the rest of <em>us</em>: the human and non-human "reciprocators of awareness." 
</p><p>All this said, I am about to read Leif Weatherby's <a href="https://www.upress.umn.edu/9781517919320/language-machines/?ref=mail.cyberneticforests.com" rel="noreferrer"><em>Language Machines: Cultural AI and the End of Remainder Humanism</em></a>, so I might have something more precise to say about all this in a week. </p><hr><h2 id="the-mozilla-festival">The Mozilla Festival!</h2><h3 id="november-7-barcelona">November 7, Barcelona</h3><p>The Mozilla Festival is happening in Barcelona starting November 7, and it has some amazing folks on the lineup focusing on building better technology. (Yes, this <em>is</em> a sponsored endorsement, but it's a genuine one!)</p><p>One of the groups presenting that I'd recommend: the <a href="https://www.domesticstreamers.com/art-research/work/?ref=mail.cyberneticforests.com">Domestic Data Streamers</a>, who design compelling prototypes by reimagining data-driven systems in ways that reflect more socially responsible and environmentally beneficial uses.</p><p>You will also hear from a great lineup of folks – Ruha Benjamin, Abeba Birhane, Alex Hanna, Ben Collins (from The Onion) – and others you'll be familiar with if you've been reading here for a while.</p><p><a href="https://schedule.mozillafestival.org/plaza?ref=mail.cyberneticforests.com">Here's more info and your chance to buy a ticket</a>.</p><div class="kg-card kg-button-card kg-align-center"><a href="https://schedule.mozillafestival.org/plaza?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Tickets!</a></div><h2 id="silicon-valleys-capture-of-our-political-institutions"><a href="https://www.bloodinthemachine.com/p/silicon-valleys-capture-of-our-political">Silicon Valley's capture of our political institutions is all but complete</a> - Blood in the Machine (2025-10-16)</h2><p>Greetings all, </p><p>Well it’s officially fall here in LA. You can tell because we have experienced our annual day of rain and the city’s infrastructure nearly collapsed in on itself as a result. Always a good time! This week, we tally up the AI law scorecard in California and consider Silicon Valley’s era of total political dominance. For paying subscribers, a roundup of critical AI stories, including how Sam Altman rolled Hollywood with Sora and the rise of a youth movement to mass delete social media apps, and much more.</p><p>A bit back, I <a href="https://www.bloodinthemachine.com/p/were-about-to-find-out-if-silicon">wrote about the various California AI and tech policy bills</a> that were sitting on Governor Gavin Newsom’s desk, awaiting his signature or veto. As my headline *provocatively* insisted, we were about to find out whether Silicon Valley owned Newsom. The verdict is in, and, surprise, it (mostly) does. 
With two exceptions, things broke just the way I expected them to: Newsom signed the toothless bills and vetoed those the tech industry took issue with.</p><p>In fact, especially given that California’s size and economy make it a crucial arena for piloting laws that impact the whole nation, there’s a case to be made that this legislative session has left us all *worse off* when it comes to AI than if nothing had been passed at all. I’m not exaggerating, and I’ll explain in a minute. It’s also a reminder that even in liberal states, Silicon Valley’s institutional political power has, for now, become all but insurmountable. 
</p><p>Quickly, a reminder that this issue of BITM is made possible 100% by paid subscribers, who chip in a few bucks each month to help me keep the lights on and do things like report on state-level AI policy, which most mainstream tech pubs won’t bother to do. But people need, and <em>do</em> want, to hear this stuff! I was invited onto <a href="https://www.youtube.com/watch?v=SXzgYGSLmHU">Ed Zitron’s Better Offline show</a> to discuss the piece, Silicon Valley’s lobbying power, and AI governance (or lack thereof). And then I got word<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> that Natasha Lyonne had used the story to help prepare for a speech at a TIME AI event, in which she called on leaders to get serious about regulating AI, and about passing California’s AB 1064, one of the only AI laws that Silicon Valley was really afraid of. (I’ll post the full thing below.) Anyway, the only reason I can get the word out about the AI industry’s political machinations is because a small percentage of you readers lend me the material support necessary to do so. If you too find value in this work, and you’re able, please consider doing the same. OK OK enough of that; onwards, and hammers up. </p><p class="button-wrapper"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>Let’s start with the good news. It’s brief, and won’t take up much of your time, promise. </p><p>One bill whose fate I considered somewhat up in the air was AB 325, a common sense effort to rein in dynamic pricing aka algorithmic price-fixing. 
(This is landlords using a platform to set rents to the maximum amount it thinks renters will pay, retailers using an algorithm to calculate how much they can push up prices for different consumers, and so on.) On the one hand, the California Chamber of Commerce, landlords, and the tech lobby wanted it dead; on the other, it’s hard to make the case that large price-fixing platforms that raise rents and gouge consumers are in any way defensible. But this kind of algorithmic price-setting also isn’t really the arena in which Silicon Valley tech giants have a ton of skin in the game—it’s mostly third-party <a href="https://www.justice.gov/archives/opa/pr/justice-department-sues-realpage-algorithmic-pricing-scheme-harms-millions-american-renters">companies like RealPage</a> (which is based in Texas) affected here—and thus the pressure on Newsom wasn’t quite as concentrated as it was elsewhere. He signed the bill, meaning that this practice will be regulated if not snuffed out entirely in California. This is good!</p><p>“We’re thrilled that California will make abundantly clear that whether or not you shake hands on a back room deal or use an algorithm to artificially increase prices, California will hold you accountable,” Samantha Gordon, Chief Advocacy Officer at TechEquity, which backed the law, told me.</p><p>Sadly, this is where the silver lining peels off. </p><p>In the other move that surprised me just a little bit, for the opposite reason, Newsom vetoed SB7 aka the No Robo Bosses Act. This is no <em>great</em> shock or anything; it’s only surprising because after a robust lobbying effort from Silicon Valley, the bill had been whittled down to the point that there was little controversial about it before it passed. The law would have prevented an employer from relying <em>solely</em> on an automated decision-making system like AI to fire or discipline workers. Pretty sensible! 
Of course, even that was too much for many tech companies, who chafed at both the idea itself and the nominal costs of compliance. I guess I just held out some hope that such gripes would not be enough to earn a veto of a rather straightforward law that says ‘bosses can’t use AI to auto-fire workers’, but that’s exactly what happened: Silicon Valley lobbied for the right for its AI to fire you without a human manager in the loop, and won.</p><p>More predictably, Newsom vetoed AB 1064, aka the LEAD Act, which would have mandated that AI companies ensure chatbots wouldn’t cause harm to children before putting them on the market. This was the one that Natasha Lyonne called on people to support in her speech, and that, somewhat ironically and unbeknownst to her, Newsom may well have been vetoing at about the same time in Sacramento. This was the one the tech industry was actually<em> </em>worried about, and enlisted its flacks to pen op-eds lamenting the damage it could do to California innovators and how it could deprive poor children of their access to corporate AI companions. </p><p>Here’s a local <a href="https://abc7.com/post/california-gov-newsom-vetoes-bill-restrict-kids-access-ai-chatbots/18001782/">ABC station reporting on the veto</a>: </p><blockquote><p>The bill would have banned companies from making AI chatbots available to anyone under 18 years old unless the businesses could ensure the technology couldn’t engage in sexual conversations or encourage self-harm.</p><p>“While I strongly support the author’s goal of establishing necessary safeguards for the safe use of AI by minors, (the bill) imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors,” Newsom said.</p></blockquote><p>Heaven forbid. 
What if, in the process of trying to ban AI products that quite actually encourage children to kill and harm themselves, we wind up banning chatbots that help children cheat on their homework, <a href="https://www.cnbc.com/2025/10/13/experts-warn-ai-llm-chatgpt-gemini-perplexity-claude-grok-copilot-could-reshape-teen-youth-brains.html">diminish their propensity for critical thought</a>, and lead to the development of other forms of AI psychosis? </p><p>I really find Newsom’s excuse here infuriating. It’s not only bad faith—he caved to Silicon Valley, plain and simple—but he echoes the industry’s talking points while positioning the very idea that it might be better to test, interrogate and attempt to understand a technology before it’s sold for profit by enormous firms as simply off the table. Like, what if we subjected a new highly addictive consumer technology to rigorous examination <em>first</em>, and <em>then </em>allowed it to be marketed to children? Perhaps we might avoid more of the tragic fallout we’re already seeing, and the kind of widespread harassment and depression unleashed by social media platforms that marked the last decade of unregulated tech? Unthinkable. The kids <em>must</em> have access to the latest tech products sold by companies aspiring to multiple trillion dollar valuations. </p><p>So that’s two vetoes of bipartisan-passed AI bills, on behalf of Valley interests.</p><p>Now wait, you might say, I swear I saw some headlines about how <a href="https://www.nytimes.com/2025/09/29/technology/california-ai-safety-law.html">California signed some “sweeping”</a> <a href="https://www.politico.com/news/2025/09/29/newsom-signs-ai-law-00585348">“first-in-the-nation”</a> AI regulations into law. You would be correct, you did see those headlines. But they’re weak and dare I say nearly pointless laws. 
And here’s the part where I’ll argue that they’re worse than if California had passed nothing at all.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!5sao!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff1a2f043-3612-4182-94af-539ba0cf52b6_1544x1314.png" width="1456" height="1239" alt="" loading="lazy"></figure></div><p>From <a href="https://www.nytimes.com/2025/09/29/technology/california-ai-safety-law.html">the New York Times</a>:</p><blockquote><p>The Transparency in Frontier Artificial Intelligence Act, or S.B. 53, requires the most advanced A.I. companies to report safety protocols used in building their technologies and forces the companies to report the greatest risks posed by their technologies. The bill also strengthens whistle-blower protections for employees who warn the public about potential dangers the technology poses.</p></blockquote><p>Got that? The largest AI companies (with annual revenues of over $500 million) must “report safety protocols” aka create a website with the AI version of workplace safety signage on it, and self-report “catastrophic risks” they pose. If a large AI company fails to do so, it will be forced to pay a fine of… $1 million, or less than the wire transfer fee from OpenAI’s latest SoftBank loan distribution. This is almost comically pointless, if you ask me.</p><p>The law defines catastrophic risk as “foreseeable and material risk” of an event that kills 50 people or does $1 billion in damage. Remember, SB 53 was written by people who are legitimately worried AI might become sentient, so we can at least suppose that the bill’s authors are well meaning. 
But even if you think this is the biggest risk of AI—I would not rank these theoretical catastrophes in my top 100 AI concerns—then this seems like a profoundly silly way to deal with it. We’re supposed to trust large AI companies, run by some of the most <a href="https://www.cnbc.com/2024/05/29/former-openai-board-member-explains-why-ceo-sam-altman-was-fired.html">demonstrably</a> <a href="https://futurism.com/sam-altman-silencing-former-employees">untrustworthy</a> people on the planet, to self-report “catastrophic risks” and if they do not, and, what, a catastrophic risk is realized and it kills 50 people… they have to pay a fine less than the cost of running their data centers for a second or two? </p><p>The way it’s really supposed to work is to encourage whistleblowers to come forward and alert the state to those risks, and provide them with some new protections with which to do so. Yet those protections are <a href="https://www.techpolicy.press/californias-new-ai-law-misses-the-mark-on-whistleblower-protections-/">so narrow and byzantine</a> that they’re unlikely to empower anyone at all to feel confident legally about coming forward. Same goes for the catastrophic risk assessments themselves, the frameworks for which are just as tangled: We are going to ask a state auditor to not only assess a speculative “catastrophic risk” that could lead to 50+ deaths but did not, and prove this in court to extract a $1 million fine from OpenAI? Do we see this ever working at all?</p><p>I would honestly not be surprised in the slightest if no “catastrophic risk” ever gets successfully reported to the state, no whistleblower comes forward under the new protections, and no fines are ever issued. That may in fact be the most likely outcome.</p><p>There was one other AI bill, too. Regrettably. 
As I feared, instead of signing the LEAD Act, Newsom signed <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB243">SB 243</a>, a law that is so toothless I’m embarrassed even to have to mention it. Instead of forcing AI companies to ensure their products are safe, it makes AI companies publish a protocol on their websites—honestly, what is up with lawmakers and their insistence that AI companies post protocols—about how they handle queries related to suicide and self-harm, inform users that they are talking to an AI and not a real person, ask users to take breaks every *three hours*, and send users self-help info in certain situations. If an AI company does not do these things, and someone is harmed, well then, they are “authorized” to take the AI company to civil court to the tune of, I shit you not, $1,000. $1,000! I cannot imagine a less consequential sum to companies that have eaten the entire American economy. It’s a joke. </p><p>This is why I think these bills are worse than nothing. Newsom won himself some press and political cover by doing the barest of the bare minimum, while shirking most meaningful reforms. Newsom signed two laws that, to those only following the headlines, make him look like a thoughtful leader who’s addressing AI with “sweeping regulations” and is unafraid of taking on Silicon Valley. In reality, he is very, very much afraid! </p><p>The industry bullied him into vetoing an AI safety law last year that at least required actual transparency, and instead handed him this year’s shell of a bill. Silicon Valley pushed Newsom to kill a law making it illegal for AI systems to auto-fire people on behalf of their employers—a move that quite literally only protects tech companies selling AI systems and bosses seeking to dodge accountability. 
It pushed him to scrap an important law that says companies selling AI chatbots to kids need to be able to ensure they’re safe, because the industry doesn’t want to invest the money required to do that, or risk losing a key consumer demographic (actual children).</p><p>The bills he did sign will, upon close inspection, do nearly nothing to even minimally restrain the excesses of AI companies. Those companies will hire consultants to make a webpage on which to publish some protocols and tick some boxes and that will be that. Meanwhile, the appearance of having passed meaningful laws around AI risks sapping the political will to meaningfully tackle actual AI social and labor issues, making it all the more difficult for legislators and groups trying to do good work here. Many will be undeterred; bills tackling workplace AI surveillance and limiting automated decision-making systems will be back next year. </p><p>But we must take stock of the fact that even in one of the most nominally liberal legislatures in the nation, Silicon Valley’s interests dominate utterly. The <a href="https://www.techpolicy.press/in-delaying-its-ai-law-colorado-shows-tech-lobbys-power-in-state-politics/">tech lobby stalled out a bill in Colorado</a>, too, hundreds of miles away from Palo Alto. Federal legislation has become unthinkable. With the failure of the courts to break up Google’s monopoly taken into account, too, we have to start thinking about what it means that, at least for now, US citizens effectively have no meaningful democratic input into how technology shapes our workplaces, institutions, and civil society. Silicon Valley’s capture of our institutions is all but complete.</p><p>I’ll end with a quick note or two of hope: The <em>desire </em>for change is stronger than ever. There’s <a href="https://www.bloodinthemachine.com/p/the-luddite-renaissance-is-in-full">a bona fide Luddite renaissance afoot</a>, remember, and anti-AI sentiment is through the roof for a reason. 
That change is going to have to come through the grassroots, through organizing, through networks of solidarity. And there remain open avenues; for instance, California legislators can override a governor’s veto with two-thirds of the vote, <a href="https://calmatters.org/politics/2024/10/californa-veto-overrides/">they simply haven’t done it since 1979</a>. For the right bill, that’s certainly worth a look. </p><div><hr></div><h2>Natasha Lyonne’s speech calls on AI leaders and lawmakers to get serious about protecting workers, society</h2><p>A while back, the widely beloved actress Natasha Lyonne caught flak for starting <a href="https://deadline.com/2025/04/natasha-lyonne-uncanny-valley-directorial-debut-copyright-clean-ai-1236382007/">an AI production company</a>. (She licenses the works in the datasets the company uses in a bid to ethically source the material and compensate artists, but faced criticisms over job automation and AI’s environmental impacts.) She now appears to have <a href="https://www.thewrap.com/natasha-lyonne-time100-ai-regulation-speech/">reflected on the effort</a>. At least, Lyonne took the opportunity of being invited to speak before scores of AI luminaries at the annual TIME AI 100 event to take them to task, even singling Sam Altman out by name:</p>
<p>
<a href="https://www.bloodinthemachine.com/p/silicon-valleys-capture-of-our-political">
Read more
</a>
</p>
A critical tech reading list for fall 2025 - Disconnect (2025-10-15)<img src="https://images.unsplash.com/photo-1550399105-c4db5fb85c18?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fGJvb2tzfGVufDB8fHx8MTc2MDM4MDI3Nnww&ixlib=rb-4.1.0&q=80&w=2000" alt="A critical tech reading list for fall 2025"><p>In my update over the summer, I had to make an admission: I’d only <a href="https://disconnect.blog/a-critical-tech-reading-list-for-814/" rel="noreferrer">read eight books</a> so far during the year — well behind my goal. Over the past three months, I’m happy to say I’ve made some progress. My tally has risen to eighteen, with more to come by the end of the year. I hope you’ve added some to your list too. Maybe some of the books in this update will join them.</p><p>Typically when I put together these seasonal lists, I break down the forthcoming books by month to give you a few picks for each of the months ahead. Well, that didn’t work this time around. I was hard-pressed to find many options in November and December, but there’s a flood of books worth a look coming in October. So if you like more than one, you’ll just have to spread them out over the rest of the year.</p><p>These lists of forthcoming books are a special perk for paid subscribers. You can find previous ones on our <a href="https://disconnect.blog/reading/" rel="noreferrer">reading list</a> page, along with a list of recommended reads that’s available to anyone.</p>
What Machines Don't Know - Cybernetic Forests (2025-10-12)<h3 id="imagining-language-without-imagination">Imagining Language Without Imagination</h3><img src="https://images.unsplash.com/photo-1592046429857-693eb58a6771?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDU4fHxiYWxsJTIwbWF6ZXxlbnwwfHx8fDE3NjAwMzQxNTJ8MA&ixlib=rb-4.1.0&q=80&w=2000" alt="What Machines Don't Know"><p>It's important to acknowledge that Large Language Models are complex. There's an oversimplified binary in online chatter between the dismissive characterization of LLMs as "next-word predictors" by many anti-AI proponents, and the insistence of pro-AI advocates that the model is a perfect replica of the human brain. In many ways, "next-token predictors" <em>is</em> an oversimplification: it would be more accurate to say that LLMs are <em>incredibly complicated next-token predictors</em>.</p><p>For those blessed enough not to understand what any of that means, a quick explanation is in order. A large language model operates through <em>tokenizing</em> language: converting words into numerical values, and then embedding various pieces of numerical data about those values into a series of lists. </p><blockquote>cat => 75</blockquote><p>Every word in the training data has such a list, and the numbers in the list represent some relationship to other words. That is a lot of information, all expressed as coordinates in a large graph. The lists are just numbers, describing the relative positions of each word in a huge multi-dimensional space. </p><blockquote>cat => 75 => [1.2, 2.4, 0.0, 0.0, 4.5 ...]</blockquote><p>Predicting the next token is how the model "selects" a word. Your prompt operates, essentially, by tuning dials until a series of words line up in a mathematically constrained sequence. 
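</p><p>A minimal sketch of that two-step lookup, with every number invented for illustration (real models use vocabularies of roughly 100,000 tokens and embedding vectors with thousands of dimensions):</p>

```python
# Toy tokenization and embedding lookup. All ids and values are
# invented for illustration; they come from no real model.

vocab = {"the": 12, "cat": 75, "sat": 201}  # word -> token id

# Each token id maps to a list of numbers: the word's coordinates
# in the model's multi-dimensional space.
embeddings = {
    12: [0.1, 0.8, 0.3],
    75: [1.2, 2.4, 0.0],
    201: [0.9, 0.2, 1.1],
}

def tokenize(text):
    """Convert words to token ids, then to embedding vectors."""
    ids = [vocab[word] for word in text.lower().split()]
    return ids, [embeddings[i] for i in ids]

ids, vectors = tokenize("the cat sat")
print(ids)  # [12, 75, 201]
```

<p>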
It is not a <em>single</em> prediction, but a back-and-forth jostling of these positions until they fit a nice statistical contour guided by the values associated with the words in <em>your</em> prompt.</p><p>Much of the most intense debate in AI defaults to "you don't even understand the technology." But I suspect the most significant distinction is not whether we understand how a model works, but in how we <em>interpret</em> what the structure is doing. </p><h2 id="thought-police">Thought Police</h2><p>Many people attribute various legal and social values to the functionality of LLMs based on their ability to "learn" textual relationships from training data and produce compelling text from what they ingest. Bolder claims assert the value of AI-produced text by degrading the thought process that motivates human speech. This claim, usually tossed around online, is that humans are <em>also</em> just next-token predictors, that the human brain is a pattern-finding machine and speech reflects this. </p><p>To believe this to be true, one would have to imagine that all human speech is motivated entirely by grammar. I have no idea if this belief is <em>wrong</em>; I'm just saying: you'd have to believe it. Now, I want to be careful here: human grammar – for example, the structure of sentences in the English language – is distinct from what we would call a "grammar" in an LLM. What I'm about to describe has a technical term in LLMs, "embeddings," but we can think of it as a ball maze. 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/10/ball-maze.jpg" class="kg-image" alt="What Machines Don't Know" loading="lazy" width="679" height="553" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/10/ball-maze.jpg 600w, https://mail.cyberneticforests.com/content/images/2025/10/ball-maze.jpg 679w"><figcaption><span style="white-space: pre-wrap;">A wooden ball maze, with knobs on the side, which tilt the surface of the maze so that a ball can move through narrow channels into holes.</span></figcaption></figure><p>The word is placed into a model, which is structured by the entire corpus of training text. When we prompt the model, the space around each word - the vector space - shifts, and activations flow through it, triggering paths through these "embedded tokens" (words) based on the relationship to previous tokens. As opposed to a ball maze, where the goal is to <em>avoid</em> the holes, we can think of this constantly shifting vector space of the LLM as an attempt to fit each word through a specific hole, or, at least, one close enough to a specific hole.</p><p>For every word in the LLM's output, the "ball" in this metaphorical maze is passing through <em>thousands of mazes at once,</em> based on how many parameters we assign in the model. We can imagine the ball moving through three-dimensional space, surrounded by a series of interlocked yet narrowly confined paths, with the system determined to find a path by which every steel ball drops through its proper hole. Once the "word" (the token representing the word) is slotted in, all of the values around it are rejiggered until the sentence, or paragraph, "works" (during training, this adjustment of the model's weights is what "back propagation" refers to).</p><p>Therefore, the grammar of an LLM is structured in constantly shifting ways. 
The words in the user's prompts become tokens that trigger a negotiation with the surrounding tokens, which influences the chance that any <em>particular</em> word will emerge in response to those surrounding it. Every word has a long list of values that can influence and be influenced by the long list of <em>other</em> values linked within <em>different</em> words.</p><p>In human speech, every word pushes and pulls the others in new directions of meaning. To simulate this with a machine, we can stretch and reassign each "value" in the matrix of associated words. We can understand this process as math rather than language, and see how such math could create a compelling simulation of language. </p><p>Every word in a generated paragraph is a <em>solution</em> to this problem of mathematical sequencing. Which is a very different goal from human speech. </p><p>Nonetheless, this mathematical process enables <em>settling a word into its surroundings </em>rather than finding words to fit a meaning<em>. </em>This explains how models arrive at the contextual production of speech without the contextual understanding of the world: in the same sense that a ball can be dropped into a hole in a puzzle maze. It navigates not through conscious reflection of where it ought to be, but as a result of following a structure that shifts around it. Language is "slotted in," rather than "produced." And it is humans who do all the work. </p><h2 id="when-words-are-also-grammar">When Words Are Also Grammar</h2><p>Any logic of an LLM is therefore linked to and narrowly defined by any given word's position in a series of matrices. It is literally formulaic. Plenty of human language is formulaic too. But the LLM uses its own machine "grammar" in a different way from human grammar, and this difference is crucial. </p><p>Human language is motivated by the articulation of thought; machine language is crafted through structure. 
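</p><p>That "settling" can be sketched as a proximity search: score each candidate word against a context vector and let the closest one fill the slot. This is a loose sketch with invented numbers, not how production models actually score tokens (they use learned attention and projection layers rather than raw cosine similarity):</p>

```python
import math

def cosine(a, b):
    # Proximity of two embedding vectors in the shared space.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented context vector and candidate embeddings.
context = [1.0, 0.5, 0.2]
candidates = {
    "mat": [0.9, 0.6, 0.1],
    "moon": [0.1, 0.9, 0.8],
}

# The slot is filled by whichever candidate sits closest.
scores = {word: cosine(context, vec) for word, vec in candidates.items()}
best = max(scores, key=scores.get)
print(best)  # mat
```

<p>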
Machine structure is a grammar that <em>entirely</em> dictates the production of language, and words are themselves considered part of the grammar, not individual referents to a broader concept. </p><p>As a result, the likelihood of finding new arrangements of words through an LLM is determined not by any capacity of the AI to reason, but by its capacity to <em>shuffle the expectations of a word's proper position</em>, i.e., to loosen the range of slots a word could fill. The model does this by introducing noise, which can be controlled through a parameter known as "temperature" in most language models. </p><p>But as it is with AI-generated images, any new collisions of meaning and arrangements of text are similarly <em>cosmetic</em>. It is serendipity. The difference noise introduces is left to the reader to determine: "Is this text a new idea, or is it noise?" But any new idea is ultimately a result of noise introduced to a rigid system, and consequent recalibration into legible text. LLMs produce neither reason nor a thoughtful consideration of facts, but an output that creates a plausible approximation of where words <em>might</em> land <em>if</em> reason or thought were present.</p><p>Humans can, and do, write this way sometimes. Consider cliches and aphorisms, thoughtless texts and emails, and the semantic satiation of overuse: "I love you," "I miss you." How do these phrases capture, or articulate, the experience of longing for someone you love? They do not, and so they serve as markers of a sentiment that fails to fulfill what they mean to do. They fail notably in contrast to a poem written about missing somebody, which strives to find new arrangements of words to articulate an experience shared by millions but in a uniquely meaningful way. 
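</p><p>The "temperature" mechanism mentioned above is, in most implementations, a divisor applied to token scores before a softmax; a hedged sketch, with invented scores:</p>

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    # Divide scores by temperature, softmax into probabilities,
    # then sample. Higher temperature flattens the distribution,
    # letting less-likely tokens through (more "noise").
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return index
    return len(probs) - 1

# At very low temperature, the top-scoring token wins almost always.
print(sample_with_temperature([5.0, 1.0, 0.5], temperature=0.01))  # 0
```

<p>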
In most cases, it is the effort of <em>finding</em> these words, not the choice of the words themselves, that moves us to embrace them.</p><blockquote class="kg-blockquote-alt"><em>Each word is ultimately a rule about where it can be placed, rather than a gesture to an experience…</em></blockquote><p>Human language decisions follow rules of grammar too, but we have flexibility within these structures. We therefore give rise to a thought and articulate that thought based on which word, in which grammatical slot, best serves our internal concepts to convey them to others. This is an important distinction from a machine determining the <em>likelihood of a word's position</em> amidst multiple shifting axes, even if they are cosmetically similar. The difference, I propose, is that the words of an LLM and the grammar of an LLM are inseparable: <em>each word is ultimately a very complicated set of rules about where it can be placed, rather than a gesture to an experience it might articulate</em>.</p><p>To be clear, this doesn't diminish what LLMs can do, which can be impressive, though I am ultimately more impressed with their architectures – transformers and the like – than I am with their language production, given how widely misunderstood the language they produce has become. LLMs are doing something, but it isn't what humans do with words. </p><h2 id="a-machine-cannot-imagine-itself">A Machine Cannot Imagine Itself</h2><p>Perhaps it was Sartre who suggested that consciousness is the ability to imagine itself. That is missing from the LLM, though some might argue otherwise: an LLM cannot imagine itself, though it can<em> describe</em> itself by slotting words into sequences. It can slot layers and layers of words into thousands of mazes simultaneously until it can create text about the text it has made, and then summarize that text and call it reason. 
</p><p>Shuffling prior text to prime future text is a clumsy concept of consciousness: the roots of LLM text are always driven by the position of a word in proximity to other words, rather than arising as a gesture of feeling, or connecting to true internal imaginations of one's own mind. This is not what the architecture of an LLM entails, regardless of the number of parameters involved.  </p><p>But many people can fall prey to a strange paradox here: unable to recognize that there is no "imagination" in the assertions of an LLM about itself, we also fail to acknowledge that the text produced by the model is nonetheless <em>imaginary</em>, a hypothetical conjecture of symbols in proper slots whose connection to an imagined "self" is absent. The <em>imagination is in the language, not the model</em>, and it is socially activated.</p><p>Current architectures of LLMs cannot imagine, but they can sequence. They can operate within our imaginative symbolic frameworks, but they cannot <em>use</em> symbols because they cannot <em>imagine themselves participating in the negotiation of those symbols</em>. For the same reason that a dog can <em>go to church</em> but a dog cannot be Catholic, an LLM can <em>have a conversation</em> but cannot <em>participate</em> in the conversation. </p><blockquote class="kg-blockquote-alt">A dog can "go to church" but a dog cannot be Catholic. An LLM can have a conversation but cannot participate in the conversation.</blockquote><p>Some will claim, nonetheless, that this is still like human thought. The concern for me, as a humanist, is less about proving whether this is true, which as far as I can tell is a philosopher's coin toss. In the meantime, I think there is use in determining whether or not we <em>want </em>to place these types of language-making in the same category. </p><p>The decision to equate human thought with complex machine slotting has significant social implications. 
It presupposes that human expression is <em>only and without exception</em> the automation of grammar, that words <em>always and without exception</em> determine, for themselves, when they will appear. The mind becomes a vast mathematical vector space through which words assert themselves rather than a personal library through which words are, sometimes, <em>found</em>.</p><p>None of this will convince the convinced, and as I said: it's all a matter of interpretation. Those who make the case can argue that the lookup table is like looking at a thesaurus, missing the point that it is like being <em>forced</em> to use a thesaurus and follow the shift in meaning by rolling dice. There is a key distinction there, and I accept that I haven't quite articulated it yet. This is a newsletter, not a thesis. </p><p>But what is clear is that no neural network arrives at or imagines itself; it is wholly shaped by the data given to it. Even if an LLM were someday designed to find meaning in its words, it would arrive at conclusions steered by those who design the weights inside the system, on data selected for that system. And if we could prove once and for all that a "world model" was any approximation of our own, that would make the matter of using the LLM to present your own ideas all the more worrisome. </p><p>The personal ceases to matter then, and so too does any real sense of "meaning" in the care of crafting a thoughtful phrase. We have always been bound to the constraints of language to express ourselves, though we can pair it with all kinds of things. Writing, in the worldview of human-machine equivalency, is always automatic: no staccato in the exchange of thought and articulation, just the steady drumbeat of statistically constrained lookup tables. 
</p><hr><h3 id="the-mozilla-festival">The Mozilla Festival!</h3><h3 id="november-7-barcelona">November 7, Barcelona</h3><p>The Mozilla festival is happening in Barcelona starting November 7 and it has some amazing folks on the lineup focusing on building better technology. (Yes, this <em>is</em> a sponsored endorsement, but it's a genuine one!). You can hear from folks like Ruha Benjamin, Abeba Birhane, Alex Hanna, Ben Collins (from The Onion), the Domestic Data Streamers collective, and others you'll be familiar with if you've been reading here for a while. </p><p>It's going to be a great break from the constant drum of bad tech news, plus cool art and installations, like this <a href="https://schedule.mozillafestival.org/session/184?ref=mail.cyberneticforests.com" rel="noreferrer">database of online AGI hype</a>. </p><p><a href="https://schedule.mozillafestival.org/plaza?ref=mail.cyberneticforests.com" rel="noreferrer">Here's more info and your chance to buy a ticket</a>. </p><div class="kg-card kg-button-card kg-align-center"><a href="https://schedule.mozillafestival.org/plaza?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Mozilla Fest Tix</a></div><hr><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/10/whos-afraid-of-ai_1200x466.png" class="kg-image" alt="What Machines Don't Know" loading="lazy" width="1200" height="466" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/10/whos-afraid-of-ai_1200x466.png 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/10/whos-afraid-of-ai_1200x466.png 1000w, https://mail.cyberneticforests.com/content/images/2025/10/whos-afraid-of-ai_1200x466.png 1200w" sizes="(min-width: 720px) 720px"></figure><h3 id="toronto-october-23-24-whos-afraid-of-ai">Toronto, October 23 & 24: Who's Afraid of AI? </h3><p>I'll be speaking at the "Who's Afraid of AI?" symposium at the University of Toronto at the end of October. 
It's described as "a week-long inquiry into the implications and future directions of AI for our creative and collective imaginings" and I'll be speaking on a panel called "Recognizing ‘Noise’" alongside Marco Donnarumma and Jutta Treviranus. </p><p>Other speakers include Geoffrey Hinton, Fei Fei Li, N. Katherine Hayles, Leif Weatherby, Antonio Somaini, Hito Steyerl, Vladan Joler, Beth Coleman and Matteo Pasquinelli, to name just a few. Details linked below. </p><div class="kg-card kg-button-card kg-align-center"><a href="https://bmolab.artsci.utoronto.ca/?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Who's Afraid of AI?</a></div><hr><figure class="kg-card kg-image-card"><img src="https://mail.cyberneticforests.com/content/images/2025/10/Human-Movie-Beach-Still-_-Small.jpg" class="kg-image" alt="What Machines Don't Know" loading="lazy" width="1000" height="562" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/10/Human-Movie-Beach-Still-_-Small.jpg 600w, https://mail.cyberneticforests.com/content/images/2025/10/Human-Movie-Beach-Still-_-Small.jpg 1000w" sizes="(min-width: 720px) 720px"></figure><h3 id="oslo-october-15-human-movie-performance-discussion">Oslo, October 15: Human Movie (Performance & Discussion)</h3><p>I'll be in Oslo to perform "Human Movie" followed by a panel discussion through the University of Oslo. More details forthcoming, more information at the link. 
</p><div class="kg-card kg-button-card kg-align-center"><a href="https://www.hf.uio.no/imk/english/research/projects/humanities-hub-for-the-reimagination-of-ai/events/humain-works/filmscreening-at-trekanten.html?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">More Info</a></div><hr>AI profiteering is now indistinguishable from trolling - Blood in the Machine (2025-10-10)<p>In late 2024, billboards and bus stop posters bearing the slogan STOP HIRING HUMANS started showing up in San Francisco and New York. The ad spots, which turned out to be the handiwork of the enterprise AI company Artisan, went viral, buffeted by an outpouring of rage on social media. The company said it was just trolling. “It’s really just a viral marketing tactic,” the 23-year-old CEO Jaspar Carmichael-Jack wrote on <a href="https://www.reddit.com/r/Cyberpunk/comments/1hf4jgy/im_ceo_of_artisan_the_company_behind_the_stop/">a Reddit AMA</a>, “we don’t actually want anyone to stop hiring humans.”<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> A few months later, the company closed a <a href="https://www.forbes.com/sites/dariashunina/2025/04/09/artisan-raises-25m-to-replace-repetitive-work-with-ai-employees/">$25 million Series A funding round</a>. </p><p>The company doesn’t train its own models or even build its own technology, it seems—it packages other LLMs into a software-as-a-service platform that aims to automate sales work—but whoever was behind that marketing campaign understood something about the AI boom early on: It’s all about the story. When you have a market as impossibly frothy as AI, it doesn’t matter if you have an AI-powered SaaS business with a decent UI. So does everyone else. 
If you want investors and the press to take note, you have to manufacture yourself a narrative, and one of the easiest ways to do so is, naturally, to troll. </p><p>A quick note before we power on: Thanks to everyone who reads, subscribes to, and supports this work. Blood in the Machine is made possible <strong>100%</strong> by paying subscribers. If you value this reporting and writing, and you’re able, please consider helping me keep the lights on for about the cost of a beer or coffee a month, or $60 a year. And a big blast of sincere gratitude to all those who already chip in; you’re the best. OK OK. Onwards, and hammers up. </p><p>I’ve been thinking of Artisan a lot lately, as we marinate in what sure <em>feels </em>like the peak bubble days of generative AI. Of course, who knows, we’re in uncharted waters now, <a href="https://www.ft.com/content/6cc87bd9-cb2f-4f82-99c5-c38748986a2e">AI has eaten the American economy</a> and the Trump administration wants to do all it can <a href="https://kyla.substack.com/p/ai-is-the-market-and-the-market-is">to keep the juiced times rolling</a>, so we may yet linger in these heights of AI-inflated absurdity for a while. </p><p>But suffice to say we’re in a moment where $12 billion companies are formed entirely on the basis of one of the founders having formerly worked at OpenAI and literally nothing else. I am talking of course about former OpenAI CTO (and very very briefly tenured CEO) Mira Murati, who left OpenAI in September 2024. She announced her new company Thinking Machines the next February, and began seeking investors. 
Not only was there no product to speak of, she apparently would not even discuss her plans to make one with potential backers.</p><p>“It was the most absurd pitch meeting,” one investor who met with Murati said, according to <a href="https://www.theinformation.com/articles/10-billion-enigma-mira-murati?utm_term=popular-articles&utm_campaign=%5BREBRAND%5D+RTSU+-+Aut&utm_content=1109&utm_medium=email&utm_source=cio&utm_term=129">the Information</a>. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.’” </p><p>No matter, investors soon handed her $2 billion anyway. What’s more, according to <a href="https://www.wired.com/story/thinking-machines-lab-mira-murati-funding/">WIRED</a>, that amount made it the largest seed funding round in history. (Thinking Machines’ product has been announced now. It’s called Tinker, and I kid you not, it’s an <a href="https://www.wired.com/story/thinking-machines-lab-first-product-fine-tune/">AI model that automates the creation of more AI models</a>. A little on the nose if you ask me!) </p><p>This, however, still pales in comparison to her former c-suite compatriot at OpenAI, Ilya Sutskever, who <a href="https://techcrunch.com/2025/04/12/openai-co-founder-ilya-sutskevers-safe-superintelligence-reportedly-valued-at-32b/">has raised </a><em><a href="https://techcrunch.com/2025/04/12/openai-co-founder-ilya-sutskevers-safe-superintelligence-reportedly-valued-at-32b/">$3 </a></em><a href="https://techcrunch.com/2025/04/12/openai-co-founder-ilya-sutskevers-safe-superintelligence-reportedly-valued-at-32b/">billion for his own product-less startup</a> at a valuation of $32 billion, and whose chief innovation so far appears to be naming it Safe Superintelligence, perhaps in an attempt to clear up any questions raised by the company t-shirt. 
</p><p>What <em>really </em>got me thinking about Artisan, however, were two stories I’ve been following in recent weeks: The first is about the New York City ad campaign from the AI startup Friend, which sells a little pendant device that users are supposed to wear around their necks, talk to, and attract the ire of passersby with. The company plastered the NYC subway system with ads last month, and those ads were rapidly and thoroughly vandalized in what can only be read as an outpouring of rage at not just the Friend product itself, which the vandals correctly identified as a malign portable surveillance device, but at commercial AI in general. </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!XKi1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b57587b-d509-43f1-9194-2649919fc24c_1260x1666.jpeg" width="1260" height="1666" alt="" loading="lazy"></figure></div><p>Here are a few more: Someone’s been cataloguing the beautiful carnage in this <a href="https://nyc-friends.vercel.app/">database you can peruse</a>, too. (“Don’t be a phoney, be a Luddite,” one reads. You love to see it.) </p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/df6b1da6-5974-4527-bc98-daf6da2ea8af_1456x474.png" width="1456" height="474" alt="" loading="lazy"></figure></div><p>The CEO of Friend, 22-year-old Avi Schiffmann, claimed, perhaps dubiously, that this was the point all along. 
“I know people in New York hate AI, and things like AI companionship and wearables, probably more than anywhere else in the country,” he told <em><a href="https://www.adweek.com/brand-marketing/ai-startup-friend-bets-on-foes-with-1m-nyc-subway-campaign/">Adweek</a></em>. “So I bought more ads than anyone has ever done with a lot of white space so that they would socially comment on the topic.”</p><p>Why, exactly, did he do this? “Nothing is sacred anymore, and everything is ironic,” as he told <a href="https://www.theatlantic.com/technology/2025/10/friend-ai-companion-ads/684451/?taid=68e44d7b56db0300014fc307&utm_campaign=the-atlantic&utm_content=true-anthem&utm_medium=social&utm_source=twitter">the Atlantic</a>. AKA “idk I’m trolling but it’s working because you’re writing about me.” Now, Friend is unique among AI companies in that even other AI industry folks seem to think Schiffmann and <a href="https://www.wired.com/story/i-hate-my-ai-friend/">his startup are a bad joke</a>. But there are more ambitious practitioners afoot, too. </p><p>In <a href="https://fortune.com/2025/10/08/leopold-aschenbrenner-openai-ftx-1-5-billion-hedge-fund-situational-awareness/?utm_content=socialShare_null&utm_medium=x&utm_campaign=social_share">Fortune</a>, Sharon Goldman profiled the 23-year-old Leopold Aschenbrenner, who has one of the most man-of-his-times AI resumes I’ve ever seen. His first job out of college was at FTX, where he landed a gig through connections in the effective altruism movement, and where he worked until the crypto exchange collapsed because its CEO, Sam Bankman-Fried, was perpetrating large-scale fraud. 
Aschenbrenner then went to OpenAI, where he absorbed the zeitgeist, was apparently arrogant and disliked <em>before </em>he leaked insider information to competitors and perhaps to the press, and got himself fired. He then self-published an overwrought document titled <a href="https://kyla.substack.com/p/ai-is-the-market-and-the-market-is">“Situational Awareness: The Decade Ahead,”</a> which aped ideas from the AI safety and doomer communities and presented them as a scientifically flavored report on how and when AI was going to become “the most powerful weapon mankind has ever built.” The document went viral, and then investors gave him $1.5 billion to manage a hedge fund. “Just four years after graduating from Columbia,” Goldman writes, “Aschenbrenner is holding private discussions with tech CEOs, investors, and policymakers who treat him as a kind of prophet of the AI age.”</p><p>Just so we’re clear about this: A recent college graduate whose only work experience was a stint at an infamously fraudulent crypto exchange and a job at OpenAI from which he was fired for leaking confidential information for personal gain has maneuvered his way into managing $1.5 billion because he published a manifesto about end-times AI and declared himself a great knower of how this speculative scenario will play out. I do not know Aschenbrenner, and I have never interviewed him, but if it turned out he didn’t believe a single word of that manifesto, I would not be surprised in the slightest. People who sincerely believe an apocalypse is coming do not tend to start hedge funds. Of course, Aschenbrenner may be a true believer in AI-as-Skynet, he may be a talented grifter who is skillfully exploiting those who lap up his pseudoscientific forecasting, or he may simply be trolling everyone into oblivion. </p><p>What matters is that it doesn’t really matter. Trolling is now all but indistinguishable from AI profiteering. It may be one and the same. 
One way to look at the entire AI doom phenomenon, especially among the many executives and leaders in Silicon Valley who do not fully believe it but find it useful, is <a href="https://www.latimes.com/business/technology/story/2023-03-31/column-afraid-of-ai-the-startups-selling-it-want-you-to-be">as an elaborate bit of trolling to get consumers and enterprise clients to buy their products</a>. “This technology could kill us all, but use it to automate your email job while you can.”</p><p><a href="https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-its-propping">The AI bubble is so big it’s propping up the US economy (for now)</a></p><p>There was, after all, a time when, if a founder walked into a VC’s office on Sand Hill Road with a pitch for a big new company and the VC asked “what is your company going to do?” and the founder said “I can’t answer any questions about my company, actually,” it would be clear the person was either delusional or trolling, and they would be shown the door. During a particularly absurd bubble around a technology with uniquely science-fictional aspirations, however, investors might say: great, here is $2 billion. </p><p>Relatedly, being willing to eat shit for your AI startup that everyone hates may be seen by investors as a sort of perverse vouching for a future that so many others have deep doubts about. </p><p>All of these stories are absurd in their own ways, but they’re also telling us something about how the AI bubble is functioning. There’s been a relentless stream of talk about that bubble of late, and new investigations into <a href="https://www.ft.com/content/6cc87bd9-cb2f-4f82-99c5-c38748986a2e">the financial</a> <a href="https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop">and economic indicators</a> are published every day. (In fact, I’m working on my own contribution, a piece for WIRED, examining the role that narratives play in bubble formation.) As I’ll write more about soon, the <em>story</em> of AI’s innovation is different from that of the technologies that have fed economic bubbles in the past. It promises investors a product with nearly limitless power, to automate all jobs or discover new medicines to patent and so on—the tech product to end all tech products, essentially—and lots of investors simply still don’t know how seriously to take it. </p><p>The sheer rate and ease with which inexperienced and smarmy guys in their early twenties are raising millions and now billions of dollars, largely by being willing to aggressively announce themselves as scapegoats for the future, or by publishing long blogs retreading tales of the terrible power of AI, should be yet another in an increasingly long line of AI-generated red flags. 
The signature moment in the entire AI boom thus far, to me, took place in the early days of OpenAI, when Sam Altman was asked how AI products would make money. He <a href="https://ainowinstitute.org/publications/ai-generated-business">told a crowd of industry folks</a>, with a straight face, that the plan was simply to build AGI and then ask <em>it</em>. So what if it’s just trolling, all the way down?</p><div><hr></div><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Y-9p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F907817e1-1818-4d86-b015-4a14bc72485b_2306x1450.png" width="1456" height="916" alt="" loading="lazy"></figure></div><h2>BONUS 1: The Luddites in CNN</h2><p>I spoke with CNN’s Allison Morrow, who is, in my estimation, one of the very best business writers on the AI beat in the American mainstream press, about the recent rise of luddite activity. It’s <a href="https://www.cnn.com/2025/10/08/business/ai-luddite-movement-screens">a fun piece</a> pinned to the popularity of Sora and the possibility that anger at the aggressive mode of unreality OpenAI is selling with it may mark a tipping point. 
</p><h2>Bonus 2: A Friend ad graffiti simulator</h2><p>I had to share this, too, and thanks to Alex Hanna for spotting and sharing: If you don’t live in New York City or any other major metropolis graced by the Friend billboard ads, here’s <a href="https://www.vandalizefriend.com/">a Friend graffiti simulator</a> with which you can vandalize the ads virtually. </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!SDsn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F976eb32d-0179-45ed-981c-4e833de444b7_1830x1302.png" width="1456" height="1036" alt="" loading="lazy"></figure></div><p>Okay, that’s it for today. Thanks as always for reading, my friends, fam, and fellow luddites, and apologies for the delay in delivery this week. I was knocked out with a cold, got behind on edits on that WIRED draft I mentioned, and then did nonstop press for two days for <a href="https://capitanswing.com/libros/sangre-en-las-maquinas/">the Spanish edition of BLOOD IN THE MACHINE</a>, aka SANGRE EN LAS MAQUINAS, which publishes next week. It’s got a great cover, and the publishers—Capitan Swing, no less, named after the machine breakers who were directly inspired by the Luddites—have been amazing. 
(The Italian edition is out next week, too, which is also very cool.)</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!0fW7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb7efc52-bfb8-42d9-b2b0-463773a61fbe_450x702.jpeg" width="450" height="702" alt="" loading="lazy"></figure></div><p>OKOKOK. Sorry to go on again. I am very tired! But lots of good stuff along with the hellscapes, and onwards we go. Until next time. </p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It is well worth scanning that AMA, because the CEO clearly thinks winking along with everyone is going to clear the air, and no one gives him an inch of daylight; it’s just angry replies all the way down.</p></div></div>Is the Media Studies Cabal in the Room With Us Right Now? 
- Cybernetic Forests, 2025-10-05<img src="https://images.unsplash.com/photo-1596729889239-68620974bc10?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE3N3x8dGVjaCUyMHN0YWNrfGVufDB8fHx8MTc1OTY3NTc1NHww&ixlib=rb-4.1.0&q=80&w=2000" alt="Is the Media Studies Cabal in the Room With Us Right Now?"><p>I read Benjamin Bratton's book <em>The Stack</em> in 2020 as a grad student in ANU's Applied Cybernetics program. I give it credit for directing my attention to the interaction between layers of digital and physical infrastructures. Trained as a social scientist in media studies, I must have been filling in the blanks of Bratton's work: what were the politics and the economics of this Stack?</p><p>Bratton is a philosopher. I am not. I'm interested in both the theory and the social impacts of technology. To me, any <em>philosophy</em> of technology that is severed from the material <em>impacts</em> of that technology is interesting but limited in utility: a thought exercise. Like all thoughts, such exercises can be useful or can distract from reality, depending on how skillfully we wield them. A great deal of suffering arises when we act on our solitary imaginations of the world rather than working toward clarity through dialogue. As such, it's the role of those who care about a topic to articulate the contours of the space and listen to those who see it differently. AI is one such space, and debate, at the current moment, is pretty vigorous. </p><p>In a recent piece, "<a href="https://www.noemamag.com/is-european-ai-a-lost-cause-not-necessarily/?ref=mail.cyberneticforests.com"><em>Is European AI a Lost Cause? Not Necessarily</em></a>," it's clear that Bratton is annoyed by intellectuals (he specifies media studies) who critique the politics, economics, and ideological assumptions that underpin the recent AI boom. 
This, he warns, is a distraction that risks imperiling European AI sovereignty, entrenching the AI industry within a regulatory superstate that hinders its development.</p><p>This imperative to build, rather than critique, was also at the heart of a 2020 essay by venture capitalist Marc Andreessen, "<a href="https://a16z.com/its-time-to-build/?ref=mail.cyberneticforests.com" rel="noreferrer"><em>It's Time to Build</em></a>." Andreessen argued that we don't <em>build</em> stuff anymore because we are bogged down in bureaucracy and malaise, pointing to the traumatic COVID-19 system collapse and the lack of technological infrastructure in place to anticipate and prevent it. He wrote:</p><blockquote>"Part of the problem is clearly foresight, a failure of imagination. But the other part of the problem is what we didn't <em>do</em> in advance, and what we're failing to do now. And that is a failure of action, and specifically our widespread inability to <strong>build</strong>."</blockquote><p>Bratton charges that this dynamic is still in play in European AI. Europe, he suggests, is still not <em>building</em> AI but is instead content only to regulate it, forcing AI into ever-narrowing pathways with no room left for innovation. Any attempts to build these multiple layers of technical infrastructure on European terms are therefore, he says, "backfiring in real time." As such, when technical infrastructure <em>is</em> built in Europe, it relies overwhelmingly on US and Chinese companies.</p><p>All of this is the bog-standard anti-regulatory critique that Silicon Valley has been endorsing for years: "regulation prevents innovation." But Bratton then makes a deeply weird pivot in assigning blame for Europe's regulatory environment.</p><h2 id="the-media-studies-cabal">The Media Studies Cabal</h2><p>The target of Bratton's critique of AI reactionaries is not the overreach of tech companies or their deeply unpopular CEOs. 
While NVIDIA has more money and political power than nearly any other company on Earth, it's not even mentioned in his examination of the conditions that gave rise to AI's regulatory position. </p><p>For Bratton, what obstructs the mass acceleration of the AI industry in Europe is <em>intellectuals and artists</em>. He focuses on what he describes as a "critique industry," a kind of media studies cabal that shifts attention away from wealth production in Europe. The intellectuals and artists of this critique industry have seized the public's imagination with their scrutiny of AI, which permeates universities and the arts. Such questions stand in the way of progress. </p><blockquote>"The precautionary delay was successfully narrated by a Critique Industry that monopolized both academia and public discourse. Oxygen and resources were monopolized by endless stakeholder working groups, debates about omnibus legislation and symposia about resistance — all incentivizing European talent to flee and American and Chinese platforms to fill the gaps."</blockquote><p>You may think you are seeing an endless sea of interviews and social media posts about Sam Altman, Elon Musk, and Mark Zuckerberg. But Bratton must be tuned to a different channel, one committed to nonstop praise and media attention to Kate Crawford, Emily Bender and Abeba Birhane. </p><p>Bratton's critique industry paragraph boldly denies us any citations, but as a media studies professional who is embedded in this "Critique Industry" I find it incredible to hear that a handful of experimental short films, the volunteers at Critical AI festivals, or even Crawford & Joler's installations at the Venice Biennale, are so oppressive that they have driven European computer programmers to abandon their homes and move to China and California. </p><h3 id="who-flees-to-american-tech">Who Flees to American Tech? </h3><p>Lucky for us, the dreaded social sciences have ways to test the claim. 
It's clear to anyone that a large number of the people building the US tech stack are immigrants to the US. The numbers aren't perfect, but the American Immigration Council reports that <a href="https://www.americanimmigrationcouncil.org/fact-sheet/foreign-born-stem-workers-united-states/?ref=mail.cyberneticforests.com">26% of computer and math workers in the US were born abroad</a>. The largest group of people building the tech stack in the US, however, arrives from India and China. The next-largest contributors to tech immigration are Mexico, Vietnam, South Korea, and Canada; the first European country to appear on the list is Russia. Every other European country is clustered into "other." </p><p>Looking at <a href="https://www.joannejacobs.com/post/silicon-valley-runs-on-asian-tech-talent-66-of-workers-are-immigrants?ref=mail.cyberneticforests.com#:~:text=Two%2Dthirds%20of%20tech%20workers%20in%20Silicon%20Valley,percent)%20the%20most%20common%20countries%20of%20origin.">Silicon Valley alone</a>, the combined national contribution of migrants from the UK, Germany, and Ukraine still amounted to just about 2% of tech workers. Still, within Europe, some 3.4% of trained tech industry professionals do leave for the USA. But the reason has less to do with the publication of Matteo Pasquinelli's <a href="https://www.versobooks.com/en-gb/products/735-the-eye-of-the-master?srsltid=AfmBOoqtcliCEVYqPBSJlnUYnr1jm9fHrzyZvOgBWrVJJVR2gIwqm52l&ref=mail.cyberneticforests.com"><em>The Eye of the Master</em></a> and more to do with simple math: you will be paid a much higher salary and taxed 7%-10% less if you work in America instead of Europe. This dynamic extends not only to tech but to nearly every STEM field.</p><p>So, it's unclear what Bratton is blaming critical AI people for, exactly, but I think he overstates their role in the cultural diffusion of anti-tech sentiment. 
Yes, anti-AI sentiment is strong enough that even I am consistently harassed by anti-AI people online. Most of the online discourse is steered not by those targeted by Bratton, but by the deeply anxious response of illustrators and writers whose work was used in the training of AI models and whose careers are now at risk. European academic discourse did not create that dynamic: labor insecurity and the AI industry did. </p><p>Even so, is there any evidence that such anti-AI sentiment is inspiring programmers to reject the tech industry? Notably, AI is not all <em>generative</em> AI, and any hostility to diffusion models and LLMs doesn't seem to be slowing anything down. About 3.5 million Europeans work in the tech industry, a <a href="https://www.stateofeuropeantech.com/chapters/talent?ref=mail.cyberneticforests.com#growing-talent-pool">seven-fold increase</a> (7x, not 7%) since Bratton published <em>The Stack</em> in 2015. There is a wealth of talent to choose from, and hiring is only on the rise. </p><p>Yes, tech <em>companies</em> may hire them and relocate to the States due to regulatory hurdles and other issues that make the States more attractive for scaling, such as linguistic and regulatory conformity. <em>Of course</em>, an utterly free-market environment is an ideal place for tech companies to operate. But the free-market climate of the United States also creates and allows for things that Bratton dismisses as anxious hand-wringing by intellectuals and artists:</p><p>"Heckling from the front is <a href="https://www.theguardian.com/technology/artificialintelligenceai?ref=mail.cyberneticforests.com">a commentariat</a> fixated on the social ills of AI, social media, data centers, and big technology in general."</p><p>How <em>dare we</em> heckle! Someone ought to stop us. 
I don't say all this simply to suggest media scholars do not have any impact on AI discourse: some do, and it is important that robust and skeptical debate occur about the systems we build. But I also feel that Bratton's focus on those who critique AI, and his comparatively low engagement with those who build and deploy it, show that he is not so much concerned with properly assessing levels of power as with maintaining access to it. </p><h3 id="drama-at-the-piazza">Drama at the Piazza </h3><p>In a bit of "<em>l'esprit d'escalier</em>," Bratton gets to the true heart of the issue in summarizing the positions of three fellow panelists at this summer's Venice Architecture Biennale, a panel he shared with Evgeny Morozov, Kate Crawford, and architect Marina Otero Verzier. </p><blockquote>"How to build the Eurostack? Their answers are: hold out for the eventual return of an idealized state socialism, declare that AI is racist statistical sorcery, "resist," stop the construction of data centers and, of course, "communism."</blockquote><p>How one might arrive at such an uncharitable understanding becomes easier to see when you examine the informal intellectual conclaves Bratton is currently immersed in. Since 2015, Bratton has been enmeshed in a relatively narrow world, funded by the Berggruen Institute. The Institute has a location in Venice, which hosted his contribution to the Biennale; it funds Bratton's Antikythera programme; and it pays for the magazine in which his text was published. Bratton has also become increasingly angry online, unwilling to engage in good-faith arguments with those he disagrees with. 
</p><p>This monoculture clearly has some advantages: he gets to dismiss those within traditional academia as "conforming to orthodoxy," which is his description of the collective process of building knowledge on top of previous knowledge.</p><p>I have no idea if Bratton is cultivating a "bad boy of AI orthodoxy" image as a kind of online brand or if this is just his personality. In any case, the bad faith makes it seem as if he is isolated from the world he's attacking and has resorted to hurling sarcasm at oversimplified phantoms. If so, that would explain why his positions are increasingly difficult to reconcile with an expanding body of documentation (not theories) about the ways that AI interacts with society and the environment. </p><p>When AI risks are real, they rightfully hinder expansion or push for different techniques or arrangements of tech stacks. How we determine which risks are real, and how to navigate those trade-offs, is through conversations: "the discourse," with critics largely pushing for more extreme degrees of care. This back-and-forth between builders and critics makes neither happy, but ultimately leads to compromises – some good, some bad, and, I would argue, usually skewed toward "getting things built." But critics are under no obligation to drop the intensity of their critiques, because social harms demand loud voices. </p><p>Bratton seems to suggest that the way we build AI is to either stop discussing those risks at academic conferences because they bore him, cut considerations of risk out of policy deliberations because they make things too difficult, or both. I can't tell if he knows he is contributing to the same discourse he wants us all to stop engaging with. After all, his text is <em>also</em> an attempt to emphasize what priorities ought to take precedence over others. But for him, the priority is <em>building</em>. 
All this hand-waving about the environmental impacts of data centers or automated racialized surveillance is boring groupthink of the academic set.</p><p>As a result, he focuses considerable ire on those who <em>point out issues</em> rather than those who <em>create flawed systems</em>. He misunderstands not only the point of critical AI discourse but the limits of its influence in the ecosystem of technology policy and tech development. He is punching down, but something makes him believe he is punching up.</p><h3 id="uncritical-antihumanism">Uncritical Antihumanism</h3><p>In the essay, he lumps together several divergent views on AI in a pile of definitions stripped of context, presenting the pile as evidence that critiques of the AI industry are merely "verbalized nightmares of a 20th-century Humanism gasping for air." Bratton has frequently critiqued "critical humanism" as a source of real problems in building new, bigger tech companies to manage the automation of more of our planet while gesturing loosely to capitalist excess. </p><p>I agree with him on this: many critical humanists examine AI and find parallels to the discourse and logic that motivated the technologically aided atrocities of the 20th century. Technology, on its own, does not enable such atrocities. However, we also see dangerous echoes of the rhetoric of power and totalitarianism from 20th-century political systems in our present day, and so any conditions for the reunion of bad politics and bad tech are certainly worth asking questions about, even if some of the questions appear anxious. </p><p>Much of the current resistance to AI is based on fears stemming from past trajectories of power and technology. In attempting to make sense of an amorphous and constantly shifting term ("AI"), critical AI scholars struggle to describe it as we see it and then defend that position. This is how knowledge is built. We don't just read <em>The Stack</em> and agree with it. 
We continue to question what the technology does, the claims made about it, and the histories embedded within it.</p><p>Among these concerns are concentrations of power and the abstraction of populations that come with scale, which Bratton is correct in attributing to the humanist response to "20th-century" horrors such as, but not at all limited to, the Holocaust. Critical Theory is, in part, a means through which to reject the rise of conditions which led to the Holocaust, and as such, yes, many of us are a bit tetchy. You'll have to forgive us if we're still hanging on to that one.</p><p>On the surface, Bratton's accelerationist bullying stems from a pragmatic urge to get things done. But get <em>what</em> done? In his dismissal of Joler & Crawford's "<a href="https://www.noemamag.com/is-european-ai-a-lost-cause-not-necessarily/?ref=mail.cyberneticforests.com#:~:text=co%2Dcreator%20of%20%E2%80%9C-,Calculating%20Empires,-%2C%E2%80%9D%20a%20winner%20of">Calculating Empires</a>," Bratton reveals he is literally incapable of conjuring a critical impulse: "one discerns that it simply draws arrows from your phone to a copper mine and from a data center to the police." That tells me a great deal about Bratton's inability to connect the dots between his beloved tech stack and history and draw parallels to the present. Instead, he wants the future, <em>now</em>: to pretend that the soil is fertile and that nothing sinister lurks beneath it.</p><p>"Just as classical computing is different from neural network-based computation," Bratton asserts, "the socio-technical systems to be built are distinct as well." This is a bold assertion despite his claiming otherwise. The adoption of new technological forms does not erase the interest in the technosocial purposes to which they might be directed. There are contexts from which this technology has risen, a beneficiary of existing political structures and previously accumulated wealth. 
Whether modeled on neurons or electrical pulses, technology reflects the desires of those capable of building it.</p><p>In examining the legacies embedded in these technologies, good, rigorous debates in critical AI actually bring us closer to dismantling the 20th-century residue that Bratton finds so troubling. The concerns of the "Critique Industry" are not actually rooted in the 20th century but in having learned from its mistakes. </p><p>Bratton loves to dismiss critics of AI as "nOt UnDeRsTaNdInG tHe TeChNoLoGy" even though many of those he lumps together are literally trained experts on the subject. But he does not seem to understand what critique is, or what its practical limits are. You don't need to know what a neural net is to understand how power might abuse it. If building guardrails slows us down, and if the democratic deliberation of trade-offs slows us down, then so be it. We go slow. History created AI systems, and pretending we can build the world anew by ignoring our history is the most outdated idea of them all. </p>The incredible arrogance of OpenAI - Blood in the Machinehttps://www.bloodinthemachine.com/p/the-incredible-arrogance-of-openai2025-10-05T12:54:33.000Z<p>OpenAI has been <a href="https://www.bloodinthemachine.com/p/openais-desperate-quest-to-become">on a desperate quest to build an AI monopoly</a> from just about the moment that ChatGPT blew up. Its CEO, Sam Altman, has been following the playbook of his mentor Peter Thiel, who has long held that the best way to succeed in tech is to create a novel market—say, for digital chatbots—and then to <a href="https://www.wsj.com/articles/peter-thiel-competition-is-for-losers-1410535536?gaa_at=eafs&gaa_n=ASWzDAh-Pl83BNF-ultBKbTTNkg1O-9zxJyA9uevjlx_IrNMIRJ7OUUw8KfUm-klfvg%3D&gaa_ts=68e0567c&gaa_sig=fzvVtMU_oYZ0T4rtrD_cDcJR3YxM9hmMCTlMelfkJt2XRHD0TIeerVXtbAb76XIudHr8vpVXCtuyhUCrtPoPog%3D%3D">move to monopolize that market</a>. 
This thinking has animated OpenAI’s expansionist, biggest-is-best strategy of pursuing ever larger datasets and data center complexes from the start. </p><p>That plan has been complicated by a number of factors, including the sheer costs of training and running large language models, the degree to which AI chatbots have already been commoditized, and an extremely unclear vision of <em>how</em>, exactly, to monopolize “AI.” OpenAI has nonetheless succeeded in becoming the clear cultural frontrunner in the AI sweepstakes—it’s <em>the</em> AI company, as far as most consumers are concerned, even as it is deeply unprofitable and as <a href="https://techcrunch.com/2025/07/31/enterprises-prefer-anthropics-ai-models-over-anyone-elses-including-openais/">competitors eat into key market segments</a>. Yet OpenAI has continued to leverage this standing to amp up its valuation to unheard-of heights. Just this week, a <a href="https://www.bloomberg.com/news/articles/2025-10-02/openai-completes-share-sale-at-record-500-billion-valuation">pre-IPO sale of $6.6 billion worth of employee stock shares to SoftBank</a> put the company’s current valuation at half a trillion dollars.</p><p>But this also means that OpenAI must continually find new ways to announce its AI supremacy to investors and consumers. That’s exactly what Sora 2, the AI-generated TikTok clone app that OpenAI launched to select users last week, really is. It’s yet another bulletin to the world, and more specifically to current and future partners and stakeholders, that OpenAI is on the cutting edge of AI. 
That when it comes to generating new AI products and culture-making (or killing, but more on that in a second) moments, OpenAI stands alone.</p><p>All of this should make it even starker that now, three years into the AI boom that it begot, OpenAI (or its competitors like Anthropic) is no closer to producing anything resembling a sustainable business model. It has a popular chatbot app in ChatGPT, but still loses money on nearly every query. OpenAI has announced plans to push into social media, e-commerce, search, app <em>stores</em>, enterprise software, and much more, with little to show for most of those plays yet.</p><p>Which brings us to Sora. If you were online at all last week, you probably found your feeds overflowing with AI-generated images of Sam Altman robbing a grocery store, or <a href="https://x.com/MattBelloni/status/1973501946410737666">Family Guy characters having dinner with Wednesday Addams</a> or whatever. As one <a href="https://x.com/def__ibrahim__/status/1973090826222805226?ref=platformer.news">widely circulated meme put it</a>:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!FUGE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadd1be6f-8d4e-47d5-bfa8-5b2c88b98cc8_615x413.jpeg" width="615" height="413" alt=""></figure></div><p>The potential for abuse was immediate. Vox <a href="https://www.vox.com/future-perfect/463596/openai-sora2-reels-videos-tiktok-chatgpt-deepfakes">called it</a> “an unholy abomination” and noted that it had already led users to generate and share deepfake arrest videos and videos of real people dressed as Nazis. 
The privacy scholar <a href="https://bsky.app/profile/hypervisible.blacksky.app/post/3m27lkz7jmc2j">Chris Gilliard declared that</a> “OpenAI is essentially a social arsonist, developing and releasing tools that hyper scale the most racist, misogynistic, and toxic elements of society, lowering the barriers for all manner of abuse.” </p><p>That OpenAI would unveil a nakedly reckless product right now—at a moment when political tensions are pitched, in the wake of <a href="https://www.bloodinthemachine.com/p/the-killing-of-charlie-kirk-and-the">the most viral assassination video in digital history</a>, and mere <em>weeks</em> after news broke that <a href="https://www.bloodinthemachine.com/p/a-500-billion-tech-companys-core">OpenAI’s chatbot product had encouraged a child to take his own life</a> and an unwell military veteran to murder his mother—does not particularly surprise me. That the major media companies, whose intellectual property is providing the bulk of the fuel, would <em>let it</em> do so does a little bit. </p><p>We know well by now that OpenAI is a reckless company. But this is a whole new frontier of arrogance—OpenAI’s c-suite is either so desperate to show off its sloppified bona fides to enthuse investors, or so bullheaded that it genuinely believes it’s too big to fail. After all, Sora is a major legal gambit. At the announcement of Sora’s release, <a href="https://www.hollywoodreporter.com/business/business-news/openai-movies-tv-shows-lawsuits-legal-risk-1236391327/">OpenAI said it would require copyright holders</a> to <em>opt out</em> of having their works included in OpenAI's datasets. </p><p>This is, as copyright experts have pointed out, decidedly not how it works:</p>
"AI is an attack from above on wages": An interview with cognitive scientist Hagen Blix - Blood in the Machinehttps://www.bloodinthemachine.com/p/ai-is-an-attack-from-above-on-wages2025-10-01T23:51:57.000Z<p>Greetings all, </p><p>Hope everyone’s hanging in, hammers at the ready. So, I know I promised a podcast in <a href="https://www.bloodinthemachine.com/p/were-about-to-find-out-if-silicon">the last newsletter</a>, but, well, I blew it on the recording. I am not, I regret to inform you, an audio engineer, and the sound quality just wasn’t there. However! The conversation, with the New York-based cognitive scientist and author Hagen Blix, was so good and timely that I was moved to LABORIOUSLY transcribe our chat, largely BY HAND, edit it, and present the finished product here as a Q+A instead. </p><p>Blix has a book out with coauthor Ingeborg Glimmer, called <em><a href="https://www.commonnotions.org/why-we-fear-ai?srsltid=AfmBOorgvx2C_VbbVsIK_bY8h_0eO0oVWra-nk7Fs6k9-YC3AMOQeqTU">Why We Fear AI</a></em><a href="https://www.commonnotions.org/why-we-fear-ai?srsltid=AfmBOorgvx2C_VbbVsIK_bY8h_0eO0oVWra-nk7Fs6k9-YC3AMOQeqTU">,</a> which argues, convincingly, that… well I won’t give it away quite yet because that’s basically my first question. But suffice it to say that I’d been meaning to chat with Blix for months now—things have just been so, well, you know. But he makes a number of compelling arguments about the nature of AI, why not just workers but <em>bosses </em>are afraid of it, and why we shouldn’t see it as a productivity tool but a wage depression tool. It’s all good stuff, and I’d heartily recommend anyone pick up the book and give it a read; it’s nice and short to boot. 
</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>As always, work like this—research and interviews and the editing and transcribing thereof—is only made possible by my glorious paid supporters, who chip in a few bucks a month, or $60 a year, so I can keep the blood in the machine pumping. If you value this stuff, please consider doing the same, so I can continue to publish great discussions like this one, with folks like Hagen, and keep the vast majority of this site paywall free. Many thanks to those kindred ludds who already do chip in, you are the greatest. Okay, thanks everyone, and onward.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wr9-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wr9-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wr9-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wr9-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!wr9-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wr9-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg" width="1456" height="762" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":762,"width":1456,"resizeWidth":null,"bytes":143370,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":false,"topImage":true,"internalRedirect":"https://www.bloodinthemachine.com/i/175049074?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wr9-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wr9-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wr9-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!wr9-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a03b4c8-9118-462f-81a1-b5b488fd1b50_1910x1000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">The cover of <a href="https://www.commonnotions.org/why-we-fear-ai?srsltid=AfmBOooEHF5mrWOtyF0cAgAddv57okpxArZsK1I4gUDXosxzYWdKs11h">Why We Fear AI</a>, by Hagen Blix and Ingeborg Glimmer. Common Notions press.</figcaption></figure></div><p><strong>BLOOD IN THE MACHINE<br></strong>The book is called “Why we fear AI.” So why <em>do</em> we fear AI?</p><p><strong>Hagen<br></strong>The book grew partly out of Ingeborg and me talking about all these crazy narratives around like, oh, <em>AI is going to destroy the world. AI is going to take over</em>. 
And so we said, <em>well, there’s a lot of debunking of these stories out there. A lot of people are very clear and concise about saying “this is bullshit.”</em> But to us, there was a secondary question in the background, which is, well, sure, the Matrix is not around the corner, but there’s something about these stories that <em>resonates</em> with people.</p><p>There are different ways in which these kinds of stories can resonate, right? So take the story about AI taking over the world and controlling everything. There’s an Amazon warehouse worker whose life is literally right now being controlled by an AI classification system, right? There’s a material reality in which AI really <em>is</em> a tool of control. </p><p>But maybe there’s also another way of thinking about it, maybe even for the Sam Altmans and Mark Zuckerbergs of the world. Maybe they feel like—the whole ruling class feels like—they can’t really do anything about things like climate change. Naomi Klein recently called this “end times fascism.” They don’t want to let go of their power, but they know that what they’re doing is literally making the planet uninhabitable. So maybe to them, saying <em>oh, technology is taking over and maybe gonna kill everyone</em> is about a totally different thing. </p><p>So we set out to analyze the material facts on a class basis, and then to make sense of the stories and the material facts in order to figure out what we can do politically, to maybe <em>not</em> have the world burn down. Because I think it’s good for people to exist, personally. Controversial opinion these days, I know, but I’d rather have humanity.</p><p><strong>BITM</strong><br>So, let’s drill into this a little bit. These are AI companies that are at root just profit-seeking firms like any others. 
They have stakeholders and shareholders and competitors, and they are all offering their various iterations of the mass-automating software that’s going to replace all human labor or do this and that, and they feel certain pressures to find novel ways to up the ante or chime in.</p><p><strong>Hagen<br></strong>There are all these bizarre promises of <em>AI is going to like automate all human labor.</em> I think most people who have looked critically at these technologies have been like, <em>that’s not what’s going to happen.</em></p><p>But I think there’s a larger narrative that we’re all swimming in, which is to think about technology through the lens of productivity increases. And, you know, nobody has ever said the bad thing about capitalism is that it’s bad at increasing productivity, right? Clearly that’s a thing that capitalism is good at. But there’s a second aspect to technology that kind of always gets swept under the rug in terms of capitalist development of technology.</p><p>And that is: a lot of technologies are developed in order to increase the control of management over workplaces and in order to de-skill people. And by de-skilling, I don’t mean that people become less skilled, but that a workplace is transformed in a way that allows a company to pay people who previously were skilled workers as unskilled workers, right? And you have written a lot about this.</p><p>Like <em>Blood in the Machine</em>, it’s full of this stuff. Those weavers still know how to weave. 
They’re not less skilled, but the factory system allows certain people to out-compete the previous weavers with a shitty product that’s really cheap and where they can hire people who, rather than needing three years of training or however long it may have taken a person to become a skilled weaver back then, take three weeks to train as a hand in the factory, right?</p><p>And I think we need to think about the development of AI technology in that kind of context. What kind of effect is AI going to have? It’s not going to replace everyone. But if we think about AI this way, language models are the industrial production of language. It’s the same way that the factories in the 1800s were the industrial production of cloth. And I think we get a much clearer picture of what is going on.</p><div class="pullquote"><p>We should think about the AI as a wage depression tool rather than a productivity increasing tool</p></div><p>And we get a clear picture of like, this isn’t primarily, this isn’t <em>just</em> about productivity. Maybe some of it is, <em>maybe</em>. A lot of studies are saying productivity increases aren’t coming. But I think what we still see, and that is also very clear from your work—you have all the work <a href="https://www.bloodinthemachine.com/s/ai-killed-my-job">on AI is coming for your job</a>—is that people’s jobs are getting to be more shit, right?</p><p>Like translators. It’s not that we’ve gotten rid of translators. It’s that we’ve made a machine that can produce a kind of shit translation: not so shit that it’s not useful, but not up to the standards that a translator would expect, and really cheap to produce now. The translator now has to compete with this kind of thing. And even the translators who then have to fix the AI translation, they’re now much more like gig workers because they have to compete with this thing. 
The supply of the shit version of the thing is so high that it just depresses the prices overall and depresses wages.</p><p>If we think about the AI as a wage depression tool rather than a productivity increasing tool, then I think the idea that we should get our hopes up [for the AI boom to end] because all these studies say productivity isn’t actually increasing is a bit premature.</p><div class="digest-post-embed" data-attrs="{"nodeId":"f6d58082-975a-4f53-8ba3-5877cc22e6c6","caption":"In July 2025, Microsoft researchers published a study that aimed to quantify the “AI applicability” of various occupations. In other words, it was an attempt to calculate which jobs generative AI could do best. At the very top of the list: Translators and interpreters.","cta":"Read full story","showBylines":true,"size":"lg","isEditorNode":true,"title":"AI Killed My Job: Translators","publishedBylines":[{"id":934423,"name":"Brian Merchant","bio":null,"photo_url":"https://substack-post-media.s3.amazonaws.com/public/images/cf40536c-5ef0-4d0a-b3a3-93c359d0742a_200x200.jpeg","is_guest":false,"bestseller_tier":1000}],"post_date":"2025-08-21T18:19:34.535Z","cover_image":"https://substackcdn.com/image/fetch/$s_!EGZA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4582ccf-4271-49c6-af66-1d091b1fe0b8_1432x627.jpeg","cover_image_alt":null,"canonical_url":"https://www.bloodinthemachine.com/p/ai-killed-my-job-translators","section_name":"AI Killed My Job","video_upload_id":null,"id":171094084,"type":"newsletter","reaction_count":177,"comment_count":30,"publication_id":1744395,"publication_name":"Blood in the Machine","publication_logo_url":"https://substackcdn.com/image/fetch/$s_!irLg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21f9bf3-26aa-47e8-b3df-cfb2404bdf37_256x256.png","belowTheFold":true}"></div><p><strong>BITM</strong><br>Yeah. 
And then this also feeds into your argument about why we’re ultimately afraid of AI in general, because it can be understood as an omnidirectional and omnipresent vessel for de-skilling.</p><p><strong>Hagen</strong><br>I think there are a lot of ways in which, if we think about people who work with language, they had been working in areas where industrialization typically hadn’t happened to the same degree. If you want to call it in classical terms: proletarianization. In a sense, that’s what this push is for, right?</p><p><strong>BITM</strong><br>The proletarianization of <em>everything</em>, that’s the aim.</p><p><strong>Hagen</strong><br>Yeah. And we’ll see how far this goes, right? Like, my expectation is that generally we’re producing a bifurcation, you know, where the higher-quality class of goods will be more expensive.</p><p>Think about lawyers. If you’re doing corporate mergers, you’re not going to replace your paralegals with an AI. Because every mistake could cost millions of dollars. But if you’re a public defender and you’re like, <em>actually, I can use an AI, I’m only going to win 80% of the number of cases that I used to win, but I can do twice as many cases because the AI apparently was so much cheaper and so much faster</em>.</p><p>That’s really bad for the people who need a public defender, precisely the people who are very often going with the cheapest option that there is, because half this country is living paycheck to paycheck. People don’t have savings. When emergencies come up, they’re scrounging. So there’s a sense in which it will put the expensive stuff increasingly out of reach.</p><p>And we’ve seen this in other domains, like furniture or shoes or fast fashion. You read a 19th century novel, and people have three pairs of shoes over the course of their life. Somehow all these people had shoes that were made by a cobbler that would last 15 years. 
And it just seems so wildly mind-boggling.</p><p><strong>BITM</strong><br>The Luddites made high-quality cloth garments that were designed to last. You owned a couple of them, and you wore them all the time, and they lasted forever. </p><p><strong>Hagen<br></strong>The idea that you could inherit just normal clothes just seems so wildly bizarre, out of all proportion of the imaginable for us today. Similar with furniture. We have all this beautiful turn-of-the-century furniture from 1900 that is still amazing, that is very expensive now.</p><p>I love a lot of IKEA stuff. A lot of IKEA stuff is great. But I don’t expect my IKEA stuff to be handed down, or to be stuff that people might still want to use in 100 years. I barely expect it to survive one or two moves. </p><div class="pullquote"><p>AI is an attack from above on wages</p></div><p>So it’s that middle quality that is getting attacked most by AI, this thing that is partly productivity increases, but a lot of it is de-skilling, that attacks quality. It makes products that are so much cheaper that they out-compete not on quality, but on price. I think that’s the expectation that we should be operating under with AI.</p><p>And we should see that as an attack from above on wages. And I think that’s really important in a sense, because, you know, it’s a horrible thing.</p><p>I don’t have any control over what these companies do. What do we have control over? I think what we have control over is that as workers, we can collectively organize against this. 
And the first step to doing that is, of course, to realize that maybe we have shared interests, right?</p><p>Like, before you can form a union, you have to realize that actually, together, you might be able to do something about your interests in a way that you can’t individually.</p><p>And so historically, I think there’s a good case to be made about precisely the people that these technologies are coming for. If you’re, like, working in a medical facility and your job is evaluating medical scans, that’s a job that’s getting de-skilled. If you’re, like, a paralegal who might want to try to be a lawyer, that is getting de-skilled. Everybody who’s in teaching knows that these things are coming in and <a href="https://en.wikipedia.org/wiki/Enshittification#:~:text=Enshittification%2C%20also%20known%20as%20crapification,decline%20in%20quality%20over%20time.">enshittifying</a> those kinds of domains of jobs. </p><p>There’s a real sense in which those were often the relatively privileged kind of jobs that were very much also the bulwark of the system of exploitation that we live in, right? Lawyers are not the kind of guys who typically form unions, right? They’re the kinds of guys who typically form small companies.</p><p>So there’s a sense in which maybe we can get a lot of people invested in joining the labor movement. Workers who so far were essentially structurally hostile to the labor movement. Those are the kinds of questions that I think are really important to think about.</p><p><strong>BITM</strong><br>I’ve had this sense as I’ve been reporting on AI and work. I mean, when the Writers’ Guild of America and the Screen Actors Guild went on strike, it was a big surprise to me how much solidarity it generated. I’m old enough to remember the last time the screenwriters had a major strike, and it was very different. 
Like, it was framed everywhere as <em>look at the coastal elites and their made-up problems.</em> There was a large-scale effort to write off those concerns. This time people were like, “yeah—this could happen to <em>me</em>.” </p><p>That’s to say that this time, with this fear of AI, as you put it, in the mix, a lot of people were able to see or be concerned or to share in that concern. Not everybody always recognizes it as this mass deskilling or this mass attack on wages, but they have a sense, even from AI’s cultural positioning and how we’re taught to vaguely understand it as something that can replace us to some degree, such that the knee-jerk reaction was to side with the writers and the creators in a way that felt novel. </p><p><strong>Hagen<br></strong>And I think that it’s also true.</p><p><strong>BITM<br></strong>Just last week, I was speaking to a group of designers. Product designers have historically had, over the last few decades, pretty secure and well-paying jobs. Well, let’s guess what’s happening right now. You will be shocked to learn that the same thing that you’re describing—a deskilling, a precarization, an attack on their wages—applies to them too.</p><p>And a couple of them came up to me after this talk and said, “you know, design has never been an industry that has ever really, like, entertained the idea of forming unions, but I’m kind of thinking that there’s more talk and more people are interested in doing so now.” </p><p><strong>Hagen</strong><br>Yeah, that’s exactly the thing I think we should be pushing for, right?</p><p>I think sometimes there’s a knee-jerk reaction to just—you see that there’s so much bullshit that’s being produced, and so you kind of want to say, <em>it’s hype</em>. 
You want to focus on the anti-hype thing. But I think that the most important thing is exactly making these connections, getting people to realize that there are collective interests at stake, and that we have collective interests against the people who are using this to make money. This is your bosses. This is the big tech companies. This is venture capital investors. That there are dividing lines, right? And we want to be sure that we draw the line politically in the right way. Like, sometimes I worry that when the focus is a little too much on the, like, scam… hype… <em>whatever</em> nature of it, you might accidentally draw the line in a way where your <em>boss</em> is <em>also</em> getting scammed, right? But your boss knows what they’re doing. They know why they’re trying to make money, right? </p><p>So drawing these political lines, right? And, yeah, increasing the sense of solidarity. That’s really why we wrote the book to be like, <em>there’s in a sense something about this moment that makes our shared humanity and our shared interests as workers of all stripes clear.</em></p><p>I think there’s something really clarifying about this moment precisely because this particular technology is coming for so many of us, and for so many of us who have up until this point said, “yeah, maybe there’s a lot of injustice going on in the system, but me personally? I have a set of skills that sell pretty well on the market. I know how to make a career out of my thing. I’m okay.” So now a lot of us who were in that position are getting drawn in and being like, no, actually, the impoverishment that has always been the underbelly of capitalism—maybe not for your folks, maybe it was more in the global south, and the distribution of where the most severe injustice lands is clearly changing historically, but <em>that</em>—ends up now with, <em>our shared humanity depends on this.</em> </p><p>And maybe even connecting that to climate change, right? 
I do think these apocalyptic stories always carry an echo of their moment: just like in the 50s, 60s, and 70s the apocalyptic stories always had a nuclear war echo to them, now they always have a climate change echo to them. And again, that is clearly a market failure—you can be the biggest fan of Milton Friedman that you want, but the damage to the environment is not getting priced in by the market system. Doing something about it requires a kind of global collective action. And that’s why people like Peter Thiel are basically saying, oh, Greta Thunberg is the Antichrist, you know?</p><p><strong>BITM</strong><br>So I wanted to go back and pick at the previous point, because with AI and its enormous capitalization and cultural status, it can be hard to cut through that mystique. Which is why we need the hype debunkers. But it can be a challenge to succinctly articulate what you’re getting at. I’ve interviewed at this point probably hundreds of workers who often have horrifying stories, in which AI has been used to deskill them or cited as a reason for outright layoffs—it’s a challenge to separate the mystique of AI from the dull truth on the ground, which is that a boss is trying to save on labor costs.</p><p><strong>Hagen</strong></p><p>It does present a challenge. We do have some work to do. I do feel like the de-skilling argument really, really resonates with people.</p><p><strong>BITM<br></strong>I think we need a better term though. </p><p><strong>Hagen</strong><br>I agree. It’s a shit word. I’ve been playing with things like calling it “class war through enshittification.” So one thing that I found useful, and that I found resonated with people who have come to my book events, is that sense that it’s simultaneously a way of attacking your wages and the quality of your work. Because it’s not just coming for your money. In a sense, people are like, yeah, that’s normal. The boss wants to pay you as little as possible and you want to earn as much as possible. 
And there’s some market-based negotiation happening. But it’s also coming for the quality of the thing you’re doing.</p><p><strong>BITM</strong><br>And your ability to derive satisfaction from doing that work.</p><p><strong>Hagen<br></strong>Exactly. So that adds insult to injury.</p><p><strong>BITM<br></strong>I still think we need a word. We need a term.</p><p><strong>Hagen</strong><br>I agree we need a word.</p><p><strong>BITM<br></strong>Let’s workshop it.</p><p><strong>Hagen</strong><br>We’ll workshop it.</p><p><strong>BITM</strong><br>There are not a lot of cognitive scientists I know of who spend their time probing the political economy of AI—how do you think that your background there has helped sort of inform this broader work?</p><p><strong>Hagen<br></strong>That’s a really good question. To me they’re two very different worlds and I’ve just happened to have occupied both of them. So my PhD is in linguistics as a cognitive science. So I was really interested in how grammar works in the mind. And then AI kind of came crashing into that space.</p><p>But I had also been interested in political organizing for a long time. And to me, those were always two completely separate worlds. In fact, personally, I was interested in cognitive science, partly because I was like, “all this linguistic stuff has no application. It’s pure science. It won’t be turned into a tool against the working class or into a weapon or something.” And then this stuff happens. And I’m like, “well, fuck, that was a miscalculation on my part.”</p><p>Certainly, I had that sense of <em>what these things are actually doing and what they’re being sold to do are not the same thing.</em> But again, for me, that discrepancy should be used to enlighten something, to make something clear that is otherwise unclear, right? 
Like the fact that technology is always, always, always about class power and not just about productivity.</p><p><strong>BITM</strong><br>Since we were talking about hype and we’re clearly in some kind of a bubble—I mean, who knows, but more likely than not there’s going to be a, um, sort of a burst or a deflation, or, god forbid, a full collapse—but, in that context, how are you thinking about the role of AI, decoupled from its peak hype powers? I ask because this is something that I think about a lot—AI is still going to be a tool that’s available to management, and it’s still going to have the capacity to deskill and to depress wages. How should we be thinking about this in the longer term, do you think?</p><p><strong>Hagen<br></strong>I think one really crucial thing to keep in mind about these things: it certainly looks like there’s going to be a bubble, that a lot of investors are going to lose a lot of money, and that it’s going to be bad for workers. But again, that is not really in the realm where we can do something about it, right?</p><p>If you’re a worker and AI is coming for your job and degrading it, the knowledge that it’s a bubble isn’t helping you, right? So that’s one way that I think it’s always important to contextualize it: well, they’re gambling with our livelihood, but a lot of them are going to lose their money. It’s going to make our lives shittier. But they’re going to keep doing it afterwards, right?</p><p>The dot-com bubble only gave rise to the gigantic tech companies that we still live with now. And one of the things that I think is crucial to understand there, and again, to push back against the kind of normal media discourse that happens, is that markets are not a natural phenomenon, right? Markets are always artificial products. And we can see that there’s a lot of market-making going on right now, for example, in the military context. 
A while back, the US government asked Meta to remove restrictions in their open-source licensing of the Llama models that had previously forbidden military usage. So that restriction was cut, right? Meta also just got a $1 billion contract from the US government together with Anduril, Palmer Luckey’s company.</p><p>So there’s a lot of market-making going on there. And these companies that are investing in these AI things, they’re very, very skilled at figuring out how to produce something that is more like infrastructure. Peter Thiel is always very explicit about this: The way you build a giant company is by creating an artificial monopoly.</p><p><strong>BITM</strong><br>Yeah, and that in turn is where a lot of the enshittification stuff feels so deeply related to this, right? This is about trying to create a kind of infrastructure.</p><p><strong>Hagen</strong><br>And I think that’s probably only going to entrench. And this is going to be done together with government forces. This is going to be done with the help of companies, not because it makes them money, but because these tools are excellent for labor disciplining, for wage depression, and so on.</p><p>So in that sense, I feel like, there’s probably a bubble, but I think we should focus on where we can do something. And I think that’s building a collective sense of how this affects us, how this is enshittifying our jobs, the quality of the things we make, our dignity at work. Our pride in what we do. </p><p>And that’s why I think the artists, whether it’s the actors or the visual artists as we see right now, they’re such a canary in the coal mine. 
And the fact that there’s broad solidarity with them is a really good sign.</p><div class="digest-post-embed" data-attrs="{"nodeId":"067e1567-9116-45e8-baef-026962da1d2b","caption":"After the launch of ChatGPT sparked the generative AI boom in Silicon Valley in late 2022, it was mere months before OpenAI turned to selling the software as an automation product for businesses. (It was first called Team, then Enterprise.) And it wasn’t long after that before it became clear which jobs managers were likeliest to automate...","cta":"Read full story","showBylines":true,"size":"lg","isEditorNode":true,"title":"Artists are losing work, wages, and hope as bosses and clients embrace AI","publishedBylines":[{"id":934423,"name":"Brian Merchant","bio":null,"photo_url":"https://substack-post-media.s3.amazonaws.com/public/images/cf40536c-5ef0-4d0a-b3a3-93c359d0742a_200x200.jpeg","is_guest":false,"bestseller_tier":1000}],"post_date":"2025-09-16T20:54:05.341Z","cover_image":"https://substackcdn.com/image/fetch/$s_!ETIV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png","cover_image_alt":null,"canonical_url":"https://www.bloodinthemachine.com/p/artists-are-losing-work-wages-and","section_name":"AI Killed My Job","video_upload_id":null,"id":173288159,"type":"newsletter","reaction_count":204,"comment_count":35,"publication_id":1744395,"publication_name":"Blood in the Machine","publication_logo_url":"https://substackcdn.com/image/fetch/$s_!irLg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21f9bf3-26aa-47e8-b3df-cfb2404bdf37_256x256.png","belowTheFold":true}"></div><p><strong>BITM<br></strong>Right.</p><p><strong>Hagen<br></strong>One of the stories we have in the book is about that Apple Crush ad. 
There was that ad where they had all these instruments and drawing materials, et cetera, like all these art-related things, on the hydraulic press, and then the hydraulic press crushes it. The piano keys shudder, the trumpet gets crushed, the paint gushes out, and at the end of the hydraulic press, there’s the iPad.</p><p><strong>BITM</strong> <br>And people hated it. They had to apologize for it.</p><p><strong>Hagen<br></strong>Yeah, they apologized, for an ad. So there’s this sense of, “yes, this is actually about transferring knowledge and skills into a tool so that you can pay people who use the tool less.” This is happening over and over again in capitalism. And we’re going to dismantle by force all of these things that you love and that create beautiful art.</p><p>It’s such a useful metaphor that they handed us. It was so beautifully clear that this is a way of bulldozing precisely the aspects of human creative activity, which labor should be. Labor should be: you’re changing something in the world in accordance with your will. And that’s what it means to be alive as a human. It should be a good thing. But unfortunately, so much labor under capitalism is done under circumstances where it’s not like that at all.</p><p>But this just made it so obvious that this is really an attack on not just wages, but even on the ability to take pride in one’s job, right? So there’s that activation of a sense of dignity, and maybe we should really lean into also thinking about: what is an alternative?</p><p>Like, why do we live in a society where technology is developed as a tool to make it so that people have <em>less</em> control over their labor? Technology should be developed in such a way that doing your work is <em>more</em> pleasant. But the interests of the people who pay for the technology, the companies, the bosses, are very often hostile to the interests of the workers, because the workers may want to do things differently. 
You want your work to be comfortable and interesting and maybe social, in a good way, but the company wants to exert control. There are many, many levels of hostility there.</p><p>So, pointing these things out and asking: <em>Couldn’t we develop technology in a way that serves the human interest in having labor be a good part of life?</em> I am not one of those people who are like, “work sucks.” Work should be great. People love doing meaningful things with their time, and people <em>like</em> producing things.</p><p>I love technology. One of the reasons why I got interested in these language models was because I was like, “What the hell is happening? These are really fascinating, interesting tools. What can we learn about how language works!” Just like the Luddites, as you always point out—the Luddites weren’t opposed to technology. They were opposed to technology as a tool for crushing the working class.</p>We're about to find out if Silicon Valley owns Gavin Newsom - Blood in the Machinehttps://www.bloodinthemachine.com/p/were-about-to-find-out-if-silicon2025-09-26T18:22:29.000Z<p>Hello friends, fam, and luddites - </p><p>Speaking of luddites, I was completely and pleasantly surprised to see <a href="https://www.bloodinthemachine.com/p/the-luddite-renaissance-is-in-full">the last post blow up</a>. It turns out there is a <em>lot</em> of interest in and support for a Luddite Renaissance, as the organizers of one event described it; for organized protest of big tech, refusal of its toxic products, and for resisting the dominion of Silicon Valley’s AI. That story has officially led more machine breakers (in spirit and/or in practice) to sign up for this newsletter than any besides the launch post. Thanks to everyone who spread the word, and to Rebecca Solnit, who shared the post with her very activated audience. 
For the record, I’m always happy to use this space to share news of any and all grassroots tech-critical and Luddite events, movements, and projects, so send them my way.</p><p>Today, we dig into the spate of California bills that could help rein in some of the AI industry’s worst impulses—if they make it past Gavin Newsom’s veto pen. California is such a major economic force that any laws passed here <a href="https://www.americanactionforum.org/insight/californias-zero-emissions-vehicle-rule-and-its-nationwide-impacts/#:~:text=The%20California%20Air%20Resources%20Board%20(CARB)%2C%20the%20state's%20environmental,sales%20in%20the%20United%20States.">have implications for the entire country</a>, even the world. They help set standards adopted elsewhere, and companies that adjust their products to meet California legal requirements often do so in other markets as well. </p><p>As always, I can only do this work thanks to my magnificent paid subscribers. As much as I’d like to be catching up on <em>Alien: Earth</em>, I spent this week reading amended AI laws and talking to organizers, authors, and advocates who are hoping their yearlong toil will be enough to overcome the Silicon Valley lobbying machine and get some AI laws on the books. So, if reporting like this has value to you, please consider upgrading to a paid subscription so I can continue to do it. Thanks again to all who read, support, and share this work. OK enough of that. Onwards. 
</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>We’ve talked a lot in these pages about the <a href="https://www.bloodinthemachine.com/p/trumps-ai-action-plan-is-a-blueprint">Trump administration’s embrace of AI</a>, <a href="https://www.bloodinthemachine.com/p/dont-forget-what-silicon-valley-tried">Silicon Valley’s lobbying efforts</a> to halt AI regulation, and the <a href="https://www.bloodinthemachine.com/p/de-democratizing-ai">GOP’s push to ban state-level AI legislation</a> altogether. All of those roads have led us here, to this point: Where the very laws that big tech and its political allies hoped to strangle in the cradle now sit on Gavin Newsom’s desk in Sacramento. </p><p>There’s obviously a lot going on right now, and the mainstage national discourse has been rough to say the least, but let’s not lose sight of what’s happening in California: At the start of 2025’s legislative session, there were some <a href="https://calmatters.org/economy/technology/2025/03/ai-regulation-after-trump-election/?_gl=1*12w2hll*_ga*MTE1OTE3NjQ4LjE3NTc1Mjc3Njc.*_ga_5TKXNLE5NK*czE3NTg4MzYzMTUkbzEkZzEkdDE3NTg4MzY0OTkkajYwJGwwJGgw*_ga_DX0K9PCWYH*czE3NTg4MzYzMTUkbzEkZzEkdDE3NTg4MzY0OTkkajYwJGwwJGgw">30 bills designed to rein in AI</a>. Since then, Silicon Valley has waged a well-funded lobbying campaign to kill, gut, delay, and otherwise whittle away at those laws (more on all that in a minute). The legislative session ended on September 13th. Now, the handful of surviving bills that <em>have</em> managed to pass both the Senate and the Assembly await Newsom’s signature—or his veto. 
He has until October 12th to sign or veto.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KBtg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KBtg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KBtg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KBtg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KBtg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KBtg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg" width="1456" height="971" 
data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/c39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":971,"width":1456,"resizeWidth":null,"bytes":165498,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":false,"topImage":true,"internalRedirect":"https://www.bloodinthemachine.com/i/173412757?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KBtg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KBtg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KBtg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KBtg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39970a4-2784-4a26-9920-4e5c2a657b35_1600x1067.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 
2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">Gavin Newsom at Tech Crunch Disrupt. <a href="https://www.flickr.com/photos/36521958135@N01/9730642920">Photo by JDLasica via Flickr</a>. CC 2.0.</figcaption></figure></div><p>These bills are hardly radical. Most are truly straightforward, common-sense measures that only the most diehard libertarians would take issue with. These are laws that, for example, would ensure that an AI system cannot be used to discipline or fire workers<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> (the No Robo Bosses Act, SB 7), that would ban algorithmic price fixing schemes (AB 325) that have led to rent increases and the gouging of consumers, and that would require AI companies to submit safety data to the state and not retaliate against whistleblowers. </p><p>There’s AB 1340, which technically allows rideshare drivers to form unions and requires Uber and Lyft to share job data with the state, but also slashes the amount of insurance coverage they need to offer and is structured in a way that paves the way for <a href="https://en.wikipedia.org/wiki/Company_union">company unions</a> of limited utility to workers. 
The Leading Ethical AI Development for Kids Act (AB 1064, or LEAD), meanwhile, would ban AI chatbots marketed to kids (good!) and may be the last bill standing that Silicon Valley is legitimately afraid of. </p><p>Now, don’t get me wrong, Silicon Valley wants precisely *none* of these bills to pass. But its army of lobbyists has succeeded in defanging many of them, while helping to stall out others, including a good one limiting the ways AI can enable worker surveillance, and another ensuring driverless delivery vehicles have human oversight. Those have been marked as “two-year” bills, meaning that they’ll be taken up again next year. </p><div class="digest-post-embed" data-attrs="{"nodeId":"fa1f7613-9ada-42cb-af43-51c357809a9f","caption":"","cta":"Read full story","showBylines":true,"size":"lg","isEditorNode":true,"title":"Don't forget what Silicon Valley tried to do","publishedBylines":[{"id":934423,"name":"Brian Merchant","bio":null,"photo_url":"https://substack-post-media.s3.amazonaws.com/public/images/cf40536c-5ef0-4d0a-b3a3-93c359d0742a_200x200.jpeg","is_guest":false,"bestseller_tier":1000}],"post_date":"2025-07-02T03:18:56.478Z","cover_image":"https://substackcdn.com/image/fetch/$s_!WWub!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac187a65-9023-4a67-a7fa-3d6bb66e2170_811x482.png","cover_image_alt":null,"canonical_url":"https://www.bloodinthemachine.com/p/dont-forget-what-silicon-valley-tried","section_name":null,"video_upload_id":null,"id":167306315,"type":"newsletter","reaction_count":125,"comment_count":16,"publication_id":1744395,"publication_name":"Blood in the Machine","publication_logo_url":"https://substackcdn.com/image/fetch/$s_!irLg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21f9bf3-26aa-47e8-b3df-cfb2404bdf37_256x256.png","belowTheFold":true}"></div><p>This is a crucial moment. 
If even the barest-bones laws can’t pass here right now, it will come down to one reason above all: Gavin Newsom is currently preparing to run for president and he doesn’t want to upset Silicon Valley and its deep-pocketed donors and platform operators. It will show us that, even in supposedly liberal California, Silicon Valley’s iron grip has become nearly unbreakable, and offer a grim omen for future hopes of subjecting Big Tech to anything resembling democracy. </p><h2>The No Robo Bosses Act (SB 7)</h2><p>This is a pretty simple and straightforward bill. It stipulates that employers cannot use an automated decision system (ADS), like AI software, to fire or discipline workers. And yet, Silicon Valley fought it tooth and nail. </p><p>“An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision,” the bill’s text as passed states, adding that if an ADS is used in a workplace, a human reviewer must be brought into the loop. Employers must keep 12 months of data on the books if they’re using an ADS or an AI to discipline workers, and those workers can request that data. The law is an effort to curb the rising power and popularity of algorithmic management systems embraced by companies. Earlier versions went further, including safeguards against discrimination as well—these were stripped out over industry outcry. </p><p>The bill is one of three that the California Labor Federation backed this year; another, the surveillance proposal, has been turned into a two-year bill. I reached out to the federation’s president, Lorena Gonzalez, for comment. </p><p>“It’s a simple question of: Should there be guardrails?” Gonzalez told me. “Look, if you get disciplined, if you get fired, there should be some oversight. It shouldn’t just come from a computer or an app. Bosses should have souls. 
We shouldn’t be run by computers.” She sighs before adding, “The bar is pretty low for what we’re asking for.”</p><h2>The Preventing Algorithmic Price Fixing Act (AB 325)</h2><p>Another rather obviously sensible law. Hotels, landlords, grocery stores, and rideshare companies increasingly use algorithms that calculate the highest rate a consumer is likely to pay—and many turn to algorithms that third-party companies operate across the same market. The result is <a href="https://calmatters.org/explainers/california-gavin-newsom-bills-signing/#03a124b3-8b03-4e9f-a550-0853a423b865:~:text=stores.%20A%202024-,White%20House%20study,-estimated%20that%20price">widely understood</a> to be higher rents and higher prices for all of us; the result, in short, is price gouging.</p><p><a href="https://calmatters.org/explainers/california-gavin-newsom-bills-signing/#03a124b3-8b03-4e9f-a550-0853a423b865">CalMatters</a> has the rundown on AB 325, which seeks to protect consumers from digital tools that facilitate modern-day price gouging:</p><blockquote><p>Landlords, grocery stores, and tech platforms like Amazon, Airbnb and Instacart can use algorithms to <a href="https://calmatters.org/economy/technology/2025/03/artificial-intelligence-price-discrimination/">rip you off in a variety of ways</a>. 
To prevent businesses from charging customers higher costs and make life more affordable, <a href="https://calmatters.digitaldemocracy.org/bills/ca_202520260ab325">Assembly Bill 325</a> would prohibit tech platforms from requiring independent businesses to use their pricing recommendations.</p></blockquote><p><a href="https://techequity.us/the-preventing-algorithmic-price-fixing-act-ab-325/">TechEquity</a>, the tech labor advocacy group and one of the bill’s backers, explains further:</p><blockquote><p>AB 325 updates California’s antitrust laws to address modern digital tools used for illegal price fixing by making it clear that using digital pricing algorithms to coordinate prices among competitors is just as illegal as traditional price fixing. It also closes court-created loopholes for price-fixing algorithms by making it easier to bring good cases against illegal price fixing.</p></blockquote><p>In essence, the law would institute a ban on “using third-party algorithms to secretly coordinate prices.” As such, landlords, hotels, and the Chamber of Commerce are opposed to the bill, insisting it would drive up costs. </p><h2>The Leading Ethical AI Development for Kids Act (AB 1064)</h2><p>Assemblymember Rebecca Bauer-Kahan’s <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260AB1064">LEAD Act</a> has caught fire as a response to the growing number of tragic cases, like <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">Adam Raine</a>’s, of chatbot products developed by OpenAI and Character.AI that have encouraged children to engage in self-harm or even kill themselves. 
</p><p>The bill, which passed both the CA Senate and the Assembly with bipartisan support, would ban any company or entity with over 1 million users “from making a companion chatbot available to a child unless the companion chatbot is not foreseeably capable of doing certain things that could harm a child, including encouraging the child to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.”</p><p>Pretty reasonable, right? If you’re going to make a chatbot product available to children, you have to ensure that it will not encourage those children to hurt themselves.</p><p>Predictably, tech companies, AI firms, and Silicon Valley advocacy groups are absolutely *up in arms* about the bill. They launched an ad campaign, hired new lobbyists just to thwart this single bill, and alleged that trying to ensure that the chatbots tech companies are marketing to children do not instruct them to kill themselves is an affront to innovation. Adam Kovacevich, the head of the Silicon Valley interest group the Chamber of Progress, <a href="https://timesofsandiego.com/opinion/2025/08/27/california-shouldnt-pull-plug-ai-that-helps-teens/">wrote an op-ed in the Times of San Diego</a> arguing, with a straight face, that this bill must be stopped because it stands to take away AI chatbots from teens who need them.</p><p>There’s a legitimately disgusting ad campaign out there launched by a front group that calls itself the “American Innovators Network,” which is in truth a lobbying outfit funded by Silicon Valley mainstays like Andreessen Horowitz and Y Combinator. It’s insisting that the law would “limit innovation in California’s classrooms” and hospitals and take away children’s futures. </p><p>“Students deserve every chance to succeed,” one Facebook ad gravely intones. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jD05!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jD05!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!jD05!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!jD05!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png 1272w, https://substackcdn.com/image/fetch/$s_!jD05!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jD05!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png" width="1200" height="675" 
data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/f3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":675,"width":1200,"resizeWidth":null,"bytes":1116327,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/173412757?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jD05!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!jD05!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!jD05!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png 1272w, https://substackcdn.com/image/fetch/$s_!jD05!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3caae7f-054b-4d93-adbc-68fdee070d78_1200x675.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 
8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">Image collage via <a href="https://www.techpolicy.press/inside-the-lobbying-frenzy-over-californias-ai-companion-bills/">Tech Policy Press</a></figcaption></figure></div><p>Once again: this is a bill that simply asks tech companies that sell products to children to ensure those products are not serving content that encourages those children to harm themselves.</p><p>“What the bill does is it basically requires that if we’re going to release companion chatbots to children, [that] they are safe by design, that they do not do the most harmful things,” Bauer-Kahan told <a href="https://www.techpolicy.press/inside-the-lobbying-frenzy-over-californias-ai-companion-bills/">Tech Policy Press</a>.</p><p>That is apparently too much to ask of the AI companies, which actually makes sense. 
The AI companies know that, given the current limitations of large language models, they <em>can’t</em> easily guarantee their chatbot products won’t continue to generate noxious content that contributes to the psychosis and mental deterioration of children—not, at least, without expensive investments in more content moderation or adjustments that might deter user engagement. They just think they should be allowed to continue to market their products to children anyway.</p><p>This is going to be an interesting one to watch, as there’s another element in the mix. It’s not just that there is genuine and deserved moral outrage over the content the chatbots are serving to children, putting real pressure on Newsom, who would be forced to explain why he passed up the opportunity to do something about an AI-abetted mental health crisis when he could. (There is a big nothingburger of a competing bill that Newsom could sign instead, SB 243—you can tell it’s a nothingburger, because Valley hacks like Kovacevich are embracing it—which claims to address the issue but is content to “label” AI content rather than keep it away from the kids it could harm.) </p><p>BUT. It appears that LEAD is supported by someone who is perhaps as influential with Gavin as any Valley lobbyist: Jennifer Siebel Newsom, the first partner of California. “Regulation is essential, otherwise we’re going to lose more kids,” she recently said, according to <a href="https://www.sacbee.com/news/politics-government/capitol-alert/article311775294.html#storylink=cpy">the Sacramento Bee</a>. “I can’t imagine being one of these tech titans and looking at myself in the mirror and being OK with myself.”</p><h2>The AI Safety Bill 2.0</h2><p>Last year, Gavin Newsom controversially <a href="https://calmatters.org/economy/2024/09/california-artificial-intelligence-bill-veto/">vetoed</a> a bill authored by Scott Wiener, SB 1047, that was designed to limit catastrophic AI risk. 
<a href="https://calmatters.org/economy/technology/2024/08/ai-regulation-showdown/">Big AI companies hated the bill</a> because it required a degree of transparency, and would have cost them some not insignificant time and money to comply with. This year, Wiener is back with SB 53, an extremely pared-down version that he developed in concert with the industry and the governor, and that requires AI companies to share “safety protocols” with the state and provide a way to report safety incidents. </p><p>This is all a little bit of a joke, if I’m being honest. The previous bill was at least a little interesting because it required AI companies to share data about their training models, which is of course why they were furious about it. </p><p>The new one just asks AI companies to file a progress report that basically says “here is how we are being careful” and puts in place a loose and rather unenforceable promise to tell the state about anything that goes wrong. No wonder the AI company Anthropic has endorsed it, and OpenAI hasn’t bothered to say much of anything about it except that it would prefer federal regulation. (Anthropic likely figures that it can score some points by trying to bolster its bona fides as the more ethical AI company, while OpenAI probably finds the fact that it has to dedicate a researcher or two to these safety reports kind of annoying, but nothing worth publicly opposing too much.) </p><p>Newsom has <a href="https://www.politico.com/news/2025/09/24/newsom-california-ai-bill-00578631">indicated</a> that he’ll sign it. The bill <em>does</em> formalize protection for whistleblowers working for AI labs who report safety incidents, which, fine. Good. The best thing it does is lay the groundwork for CalCompute, a public compute cluster of data centers that enables public and nonprofit AI work, but we’ll see what actually comes of that. 
</p><h2>Legalizing gig worker unions</h2><p>In theory, making it legal for gig workers to form unions should be a great thing. But with 1340, the devil is in the details. The bill is nominally an effort to claw back some rights that were signed away when Prop 22 was voted into law, classifying gig workers as independent contractors, not employees, no matter how many hours they worked.</p><p>The big problem with 1340, along with the fact that it reduces Lyft and Uber’s insurance burden, is that the bill limits which union the workers can choose—using language that restricts that choice to “experienced” unions and cuts out groups like Rideshare Drivers United, which has been working to organize drivers on the ground for years, just without formal union status. </p><p>“We don’t think the bill is strong enough actually,” Nicole Moore, the head of RDU, tells me, “and it doesn’t give drivers the chance to pick the organization of their choice. But we have to get out of this dungeon created by Prop 22 and the lawlessness of Lyft and Uber. But it’s legislation and they can make it stronger next round.”</p><p>The number one thing that Moore says <em>is </em>good about the new law is that it forces Lyft and Uber to share drivers’ ride data with the state, so it can confirm or deny that the companies are being honest about pay, hours, and standards. </p><p>“We’re in the crosshairs of algorithmic pricing,” Moore says, “that’s how they pay us, and they are looking for our lowest price point. We’re making less than federal minimum wage. And AI is responsible for how much we’re paid, how many gigs we get, and terminates us as well. It’s—literally—inhumane. 
Time to put the robots and algorithms and the tech oligarchs behind them in check.”</p><p><strong>Other AI bills </strong></p><p>There are some other bills on Newsom’s desk, like <a href="https://calmatters.digitaldemocracy.org/bills/ca_202520260ab316">AB 316</a>, the Artificial Intelligence: Defenses Act, which would legally prevent someone from blaming an autonomous AI system for harms inflicted on another. That also seems like a no-brainer, and a guard against the way some managers like to use AI and automated systems as accountability sinks. </p><p>But the bills outlined above are the big ones. One way I can see this playing out is Newsom making a big deal about signing Wiener’s bill—the “catastrophic risk” bill that is so friendly to industry that Anthropic actively supports it—and using that as cover for vetoing the stronger bills. He might then sign SB 243, the toothless and largely performative alternative to the LEAD Act (the bill that would actually force AI companies to make serious design choices and technical adjustments to protect kids from their products). </p><p>If Newsom does this—signs SB 53, the “risk” bill, and SB 243, the pretend-to-address-AI-products-for-children bill, and spikes the rest—it’s a bad sign. </p><p>“At some point, he has to step up and figure out what he’s willing to do to protect workers, jobs, safety, and privacy. And really, the jobs,” the California Labor Federation president Lorena Gonzalez says. “I think we’ll have a lot of stuff next year, and he’s really going to be tested.”</p><p>“Do we need to think about tech’s impact on society/community/people/workers?” Rideshare Drivers United organizer and Uber driver Nicole Moore says. “Yes. And we need to rein these mfs in. We’re letting them get way ahead of us. And not for innovation’s sake but profit and greed.”</p><p>Whatever happens, Gonzalez notes, as she’s out canvassing and talking with voters, she’s only seeing support for laws reining in tech grow. 
</p><p>“There’s actually an increase in appetite,” she says. “If you talk to voters, voters get it far more than legislators do. When you talk in particular to working class workers—I always say it’s a little ironic because the AI space and the tech space is already affecting white collar workers quicker, and sooner—but blue collar workers are much more likely to understand the threat that’s coming, and to want guardrails and protections. You can have innovation, you can have advances in technology, and still have guardrails, safety, privacy, and protections for their jobs. The more people see this bleeding into their workplaces, our momentum will only grow.”</p><p>She pauses, then adds, “It’s coming. We’re going to have this crisis, and elected officials are going to have to decide, are they representing their constituents—or are they representing big tech?”</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p>Alright! Californians—and everyone else—be sure to do all you can to shout about these AI bills in the next two weeks. Some of the bills do move the needle in important ways, and while some are absolutely too friendly to Silicon Valley, we can read these bills as a sort of barometer of what’s politically possible in our era of deepening tech oligarchy. If Newsom decides that there’s a political cost to letting big tech run wild and unaccountably dictate Californians’ digital lives—well, that’s a start. </p><p>A quick shout-out to the fantastic work that CalMatters has been doing in covering the progress of state bills, and to Khari Johnson, who’s been on the tech law beat. 
Check out their bill tracker to follow along, or <a href="https://calmatters.org/explainers/california-gavin-newsom-bills-signing/#03a124b3-8b03-4e9f-a550-0853a423b865">this rundown of top bills</a>, tech-related and otherwise. Also, stay tuned for something a little bit different this weekend; I may try to run a Blood in the Machine podcast episode with the author of a great book on why we all <em>really</em> fear AI. We had a great chat and if I can figure out the tech and distribution I’ll push that out ASAP. As always, thanks everyone, and hammers up. </p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Yes, like <a href="https://x.com/SwiftOnSecurity/status/1385565737167724545">the infamous IBM slide</a>. </p></div></div>The Luddite Renaissance is in full swing - Blood in the Machinehttps://www.bloodinthemachine.com/p/the-luddite-renaissance-is-in-full2025-09-21T12:15:35.000Z<p>Greetings all, </p><p>Hope everyone’s hanging in there. Another week, another slipslide into fascism. The Kirk killing has, as many anticipated, served as a pretext for the Trump administration and its allies <a href="https://www.cnbc.com/2025/09/18/jimmy-kimmel-charlie-kirk-fcc-carr.html">to begin a concerted attack on its critics and opposition</a>. Trump designated ‘Antifa’ as a terrorist group, despite “anti-fascism” being an ideology, not an organization. Brendan Carr, Trump’s FCC chair, pressured Disney into sacking Jimmy Kimmel; its executive leadership immediately complied. Meanwhile, Elon Musk’s X continues <a href="https://www.bloodinthemachine.com/p/the-killing-of-charlie-kirk-and-the">to serve as a megaphone for supporters</a> of the above campaign and for calls for violence against trans people and the left. </p><p>No wonder the kids want to pull the plug. 
After all, if there’s hope to be found in this moment, it will be found in solidarity, in organizing, and in refusal of a world in which authoritarians and tech oligarchs dictate our lives. Which is why I’m especially pleased to report that we’re beginning to see what’s shaping up to be a genuine, youth-led, modern-day Luddite uprising. </p><div class="subscription-widget-wrap-editor" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe","language":"en"}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Blood in the Machine is a 100% reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. I can only write reports like this one thanks to paying supporters who make this work possible. Many thanks, and hammers up. </p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email…" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>A loose constellation of grassroots collectives, orgs, and clubs, ranging from New York’s Luddite Club to Silicon Valley’s APPstinence, has gotten together and dubbed this fall the “Luddite Renaissance.” Students, activists, tech whistleblowers, and self-proclaimed Luddites have been undertaking a series of actions, readings, and protests that will culminate next weekend, on September 27, at what they’re calling the S.H.I.T.P.H.O.N.E. (Scathing Hatred of Information Technology and the Passionate Hemorrhaging of Our Neo-liberal Experience) rally at the High Line in New York City. I would <em>love </em>to be there, but alas it’s on the wrong coast. 
(If you can make it to Manhattan that day, I’m very jealous; drop a line and let me know how it went.)</p><p>But it’s not just the Luddite Club and the S.H.I.T.P.H.O.N.E.rs, either. It seems that since last year, when I <a href="https://www.theatlantic.com/technology/archive/2024/02/new-luddites-ai-protest/677327/">wrote about the New Luddites</a> rising up to resist and refuse AI, from anti-gen AI creatives to Waymo combatants to gig workers fighting Uber, this loosest of movements has only broadened. Anger at AI, smartphones, and social media—and more specifically, at the exploitative practices of the companies operating them—has galvanized people all over the world, from the youth above to artists and advocates and academics. </p><p><a href="https://www.bloodinthemachine.com/p/cognitive-scientists-and-ai-researchers">Cognitive scientists, university professors, and teachers</a> are taking a harder line against generative AI in schools. Mutual aid and political action groups like Stop Gen AI have formed to <a href="https://www.bloodinthemachine.com/p/hundreds-of-workers-mobilize-to-stop">support workers impacted by management embracing AI</a>. Numerous groups, led by the AI Now Institute, are working towards a <a href="https://peoplesaiaction.com/">People’s AI Action Plan</a>. </p><p>And “Luddite” is increasingly shedding its status as a derogatory epithet and is instead being worn like a badge of honor by the counterculture; by activists; by those who reject a future saturated by Silicon Valley’s automated slop and choked by its concentrated power. </p><p>To wit: <em>Also </em>on September 27th, a Luddite-themed event called <a href="https://breakingthegloom.com/">“Breaking the (G)loom”</a> (described as “an evening of fellowship for the AI avoidant”) is taking place in London, at SET Social. (I’ll include more details below.) 
And then, on top of <em>that</em>, there’s a full-blown Luddite conference being put on at Columbia University, called <a href="https://www.eventbrite.com/e/new-luddism-technology-and-resistance-in-the-modern-workplace-tickets-1571575046269?aff=oddtdtcreator">“New Luddism: Technology and Resistance in the Modern Workplace.”</a> </p><p>This one-day event will bring together some of the very best Luddite thinkers in academia and organizers tackling tech in the workplace. Also, I’ll be there. Come check it out; it’s open to all, I think, but registration is limited. There will be a happy hour or some other kind of public event in the evening as well, I’m told, so stay tuned. </p><p>All this machine-breaking action has got me thinking I should try to organize something closer to home, in LA or thereabouts, maybe bring back <a href="https://www.newyorker.com/magazine/2023/10/30/revenge-of-the-luddites">the Luddite Tribunal.</a> We’ll see, we’ll see. 
</p><p>For all those interested, here are more details on the New York Luddite Renaissance action, passed along to me by one of the anonymous organizers: </p><blockquote><p><strong>Brooklyn, NY - September 4, 2025</strong> - Exactly one hundred years ago, the Harlem Renaissance emerged to lift the voices of African-Americans out of the silence of the stifling dominant culture. Today, a Gen Z backlash against the suffocation of online existence is coalescing into a sort of new “<em>Luddite </em>Renaissance.” These young people feel that their voices — and those of all living, human beings — have been intolerably silenced and exploited by Silicon Valley, only to be replaced by robots and AI.</p><p>These kids have put together an ongoing calendar of events called “Real People in Real Time” and encourage the “Ludd-curious” to join in. The events celebrate all that is uniquely human: authenticity, empathy, play, love, sensuality, dance, joy, art, music, community, and respect for the natural world. Also included are events that push back on techno-supremacy, and workshops that show the path back into embodied existence.</p><p>Youth groups that began on their own in New York, Florida, Colorado, California, Ohio, and DC have now linked up to share stories, solidarity, and events. 
So far, they include the School of Radical Attention (Brooklyn), Ziggurat (Denver), APPstinence (Silicon Valley), FREE POPS (Manhattan), Reconnect (Orlando), The Lamp Club (Manhattan), The Luddite Club (Oberlin and Brooklyn), The Anxious Generation (Brooklyn), Design It for Us (DC), and the LOG OFF Movement (national). And the list is growing.</p><p>They say that they are building a way of living that is an alternative to the unnatural digital existence that has been pushed and normalized by corporate powers. Many of these young people entered the movement in search of a solution to the epidemic of alienation plaguing their generation. Record-breaking levels of Gen Z depression have been well documented, as researched by social psychologist Jonathan Haidt, author of the bestselling book, <em>The Anxious Generation.</em></p><p>As the kids have shared and delved deeper into the topic of digital culture, they’ve learned about the perilous waters we’ve entered as a civilization: how AI has robbed them of entry-level jobs; how data centers are devouring farmlands, energy and our last fresh water supplies; how algorithms are deliberately designed to addict them to their devices and atrophy their attention spans; how Silicon Valley “fracks” their behavior and sells the data to brokers; how Palantir Corporation controls all the private data held by the federal government of every U.S. citizen; how ICE uses Palantir algorithms to track (and terrorize) alleged “aliens”; and how techno-utopianism has steadily been fostering the acceleration of inequity for the past three decades.</p><p>They believe that their Luddism has been a healthy response to the litany of glaring abuses of technology.</p><p>Most of the events take place in New York City, which is no surprise, since it is one of the last municipalities in the U.S. where street life is still widely vibrant, not having been replaced by soulless digital facsimiles. 
In New York, revelers still populate the brick-and-mortar world, spilling out of cafes, performance spaces, galleries, bookstores, and public spaces.</p><p>On <strong>September 27</strong>, the S.H.I.T.P.H.O.N.E. rally and march will begin at 3pm <em>sharp</em> on The High Line in Manhattan, between W. 12th and W. 13th Sts. S.H.I.T.P.H.O.N.E. stands for Scathing Hatred of Information Technology and the Passionate Hemorrhaging of Our Neo-liberal Experience. Hundreds are expected to take part in this carnivalesque collective grievance against technocracy. Soapboxes will be made available for people to take turns voicing their screeds against Big Tech. There will be surprise guest speakers, gnomes, chanting, bullhorns, song, a vigil for boredom, and last, but not least — tech smashing! Come join the parade!</p></blockquote><blockquote><p><strong>Sept. 28 | Reconnect Field Day @ 11 a.m., Prospect Park near Garfield Pl.</strong></p><p>Stow your phone in a locker and spend the day competing outdoors in a mix of classic field-day events and other activities—from dodgeball and tug-of-war to wheelbarrow and relay races. Come for a few events, or stay the whole time—there will be plenty of prizes to go around. Bring some friends, but be ready to make new ones, too: teams will be made up on the spot.</p><p><a href="https://leaflet.pub/2944b68f-d813-4ec8-92c6-c1f7897b769d">RSVP here</a>. </p><p><strong>Oct. 
4 | Ziggurat Surveillance Tech Teach-In @ 3 p.m., Brooklyn</strong></p><p>Join Ziggurat for a discussion on surveillance capitalism—the multi-trillion dollar business model that turns your daily digital life into profit for tech giants. We'll explore how companies like Google and Facebook do more than just collect your data. They are essentially selling access to your mind to advertisers (and far worse). They frack your data to predict and influence your future behavior, and even to influence elections. Through interactive demonstrations with your own devices, you'll discover the surveillance tech already embedded in your daily routine and understand why you're not the customer of “free” services—you're the product being sold.</p><p>Email hi@zig.art for a spot.</p><p>All events are always free and open to everyone. All generations are welcome!</p></blockquote><p>And more details on Breaking the (G)loom in London are here: </p><blockquote><p><strong>Date: 27th September</strong><br><strong>Time: 2pm to 5pm</strong><br><strong>Location: SET Social, Red Bar</strong></p><p><strong>This event is free, but ticketed!<br>Please register at: <a href="https://lu.ma/9ddl2shi">lu.ma/9ddl2shi</a></strong></p><p>Sick of living in the dreams of prepper CEOs?<br>Feeling doomy about spending the next AI winter in a chatbot cult?<br>Suspicious that the healthcare implications of AlphaFold might be overshadowed by the broader corrosion of liberal democracy?<br>Actually pretty optimistic, but would like a break from the hype?</p><p>Many art & technology meetups have an uncritical undertone. This one is politely opinionated.<br><strong>This means:</strong> any presented work must be entirely made without AI, <strong>unless</strong> AI is used to critique itself (efficacy, power consumption, safety e.t.c).<br><strong>This does not mean:</strong> being rude, patronising, or elitist towards the AI powerusers among us.<br>All are welcome. 
Luddites & cyberwitches are actively encouraged.</p><p>🪻</p><p>There will be talks: open-projector style. 1-10 minutes each.<br>There will be breaks: with time to exchange details, ask questions, introduce yourself to that cool person across the room with the T-shirt of that band you adore.</p><p>Some talk prompts for your organic intelligence:</p><p>A project you want to share.</p><p>A story you want to tell.</p><p>A future you want to conjure.</p><p>If you are wondering if your talk idea is good - yes it is great!!</p><p>Say hi at: contact(at)breakingthegloom(dot)com</p></blockquote><div><hr></div><p>There’s lots, still, to fight, of course. In that vein, some good things I’ve been reading, to cap us off here:</p><ul><li><p><strong>Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions. </strong>“Over 200 contractors who work on improving Google’s AI products, including Gemini and AI Overviews, have been laid off, <a href="https://www.wired.com/story/hundreds-of-google-ai-workers-were-fired-amid-fight-over-working-conditions/?utm_source=nl&utm_brand=wired&utm_mailing=WIR_Daily_091625_PAID&utm_campaign=aud-dev&utm_medium=email&utm_content=WIR_Daily_091625_PAID&bxid=5c4914c4fc942d0477dd0ce9&cndid=52663044&hasha=19b2ff5b2617a571bac0cb1b6512b60d&hashc=294f3c8fe12f383eb2dcbe3c08d1645411825747be90ef63aecd7e0524da9e8a&esrc=AUTO_PRINT&utm_term=WIR_DAILY_PAID">WIRED reports</a>. 
“It’s the latest development in a conflict over pay and alleged poor working conditions.”</p></li><li><p><strong>The UC Berkeley Labor Center</strong> has <a href="https://laborcenter.berkeley.edu/wp-content/uploads/2025/09/Labors-AI-Values-v2.pdf">a new report</a> analyzing how labor groups are addressing and engaging AI. </p></li><li><p><strong>John Herrman on <a href="https://nymag.com/intelligencer/article/what-do-people-actually-use-chatgpt-for.html?utm_source=substack&utm_medium=email">what ChatGPT has become</a></strong> (“Less synthetic brain, more replacement for the whole internet”) in New York Mag: “The picture that emerges from this data matches this thesis pretty closely: ChatGPT, for many of its users, is a way to access, remix, summarize, retrieve, and sometimes reproduce information and ideas that already exist in the world; in other words, they use this one tool much in the way that they previously engaged with the entire <em>web</em> — arguably the last great “cultural and social technology” — and through a similar routine of constant requests, consultations, and diversions. 
One doesn’t get the feeling from this research that we’re careening toward uncontrollable superintelligence, or even <a href="https://nymag.com/intelligencer/article/why-everything-is-an-ai-agent-now.html">imminent invasion</a> of the workforce by agentic AI bots, but it does suggest users are more than comfortable replacing and extending many of their current online interactions — searching, browsing, and consulting with the ideas of others<em> — </em>with an ingratiating chatbot simulation.”</p></li><li><p><strong>Molly White on the <a href="https://www.citationneeded.news/prediction-markets-oversight/">wildly ballooning world of prediction markets</a></strong>, in her newsletter, Citation Needed: “With prediction markets already handling billions of dollars in trades and more platforms launching every month, regulators need to grapple with these questions before the industry grows too big to effectively control. 
The cryptocurrency industry has shown how difficult it becomes to implement meaningful oversight once a poorly regulated industry accumulates enough money and political influence to push back — and the devastating cost to everyday people who get caught in the fallout.”</p></li><li><p>Edward Ongweso Jr on the <a href="https://thetechbubble.substack.com/p/on-the-origins-of-dunes-butlerian">origins of the Butlerian Jihad</a>, the anti-machine uprising foundational to the lore of Frank Herbert’s Dune books.</p></li></ul><div><hr></div><p>That’s it for now. Next week we’ll take a look at the spate of AI bills that recently passed California’s state legislature, and that now await their fate on Gavin Newsom’s desk. There’s much to get into: California is the best hope for strong AI regulation, yet the governor’s political aspirations will keep him cozy with Silicon Valley. Anyway, much to discuss.</p><p>As always, many thanks for reading, and one last quick entreaty: if you’ve made it this far, perhaps this work has some value for you, and you might consider supporting it with a yearly or monthly subscription. Writing this thing takes many hours a week—these days, with, you know, everything going on, I sink many more than 40 hours into BITM and related projects like <a href="https://www.bloodinthemachine.com/s/ai-killed-my-job">AI Killed My Job</a>. (This week, we heard from <a href="https://www.bloodinthemachine.com/p/artists-are-losing-work-wages-and">visual artists</a>.) Your support helps ensure I can keep doing this work, and keep covering tech from the perspective of we humans, the people Silicon Valley is happening to. </p><p>Until next time, down with all kings but King Ludd. 
</p>Social media causes more harm than good - Disconnect (2025-09-19)<img src="https://disconnect.blog/content/images/2025/09/socialmedia-1.png" alt="Social media causes more harm than good"><p>I recently spent a few days in the United Kingdom, which, if you’re to believe some online influencers, has become the new home of a widespread censorship regime. In July, the country implemented new rules under the Online Safety Act it passed in 2023 to <a href="https://www.bbc.com/news/articles/c0epennv98lo?ref=disconnect.blog">restrict minors</a> from accessing certain harmful, abusive, and explicit content, requiring websites and platforms to use age verification systems to comply with the law. Progressive digital rights activists slammed the country for threatening the internet, further legitimizing the political right’s “free speech” discourse in the process.</p><p>There are legitimate concerns about the law, which I’ll get into a little later, but a lot of the attacks levied against it are politically and economically motivated, and effectively take advantage of how social platforms reward sensationalism. I was curious to see how the system worked in practice, and whether it merited the scale of the response it received. Reader, I remain skeptical.</p><p>I didn’t run into the age gate on my phone — I assume because I have a Canadian SIM card — but when I logged into Bluesky on my computer, I was hit with a little window informing me that certain features like adult content and direct messaging would be off-limits until I verified my age. 
The DMs are included because they can serve as a more concealed avenue for abusive comments and explicit material.</p><p>I could have easily just activated my VPN to get around it — as many people in the UK <a href="https://www.ft.com/content/356674b0-9f1d-4f95-b1d5-f27570379a9b?ref=disconnect.blog">have done</a> in recent weeks — but I wanted to see how these systems worked, so I went ahead and verified myself. The system Bluesky is using gave me two options: to use a service called Yoti to scan my face or to provide a credit card that would be authorized via Stripe. Notably, handing over a photo of my ID — the scary scenario that has been spreading like wildfire online — was not even an option. I decided to do the former: the credit card check seemed straightforward enough, and I wanted to test the face scan.</p><p>After being transferred to Yoti, I positioned my face as directed and the verification began. It told me it was estimating my age, then that my face scan was being deleted, and finally a screen popped up telling me I was approved. Yoti transferred me back to Bluesky, where I was met with another message letting me know I was effectively free to do what I wanted now that I’d passed the check. All told, the whole process might have taken a minute and a half.</p><p>In explaining this, I’m not trying to dismiss the problems with face scans for age verification. Even the companies behind them admit they can be off by several years, if not more. I wouldn’t be surprised if minorities find they’re <a href="https://www.theguardian.com/news/2025/sep/19/how-accurate-are-age-checks-for-australias-under-16s-social-media-ban-what-trial-data-reveals?ref=disconnect.blog">less reliable</a> for their faces too. This is why there need to be multiple options and appeal mechanisms available to people. But ultimately, was I worried or affronted? Not really. 
I’ve had to verify my identity many times in recent years on platforms like Twitter, Google, LinkedIn, and the other big players — even by providing a copy of my ID at times. I’d imagine many of the influencers running wild with sensational takes on the UK’s new rules have done the same. To me, it’s just a cost of being online.</p><h2 id="the-nuance-behind-digital-restrictions">The nuance behind digital restrictions</h2><p>The UK’s new rules under the Online Safety Act are part of a wave of recent restrictions being implemented to control what certain groups — particularly minors — can access when they browse the web. There can be different motivations behind those initiatives, which is where I feel some of the misunderstandings come from. Though part of it is also fueled by intentional dishonesty and the exaggeration I’ve come to expect from digital rights groups whenever new rules and obligations are introduced for online platforms.</p><p>The Online Safety Act is actually a very comprehensive piece of legislation that gives the British government extensive powers over how the internet works in its jurisdiction — not all of which are actively being used. As I mentioned, the newest set of rules targets <a href="https://theconversation.com/online-safety-act-what-are-the-new-measures-to-protect-children-on-social-media-261126?ref=disconnect.blog">what minors can see online</a> — specifically things like pornography, extremist content, and promotion of eating disorders and self-harm. 
This is similar to, but not the same as, other initiatives rolling out in other parts of the world.</p><p>Understandably, a lot of the focus in the digital rights world has been on what is happening in the United States, where Republican governments in a growing number of states are rolling out initiatives to limit access to certain online content under the guise of protecting kids; in reality, these initiatives are fueled by a social conservative impulse to make information about issues and causes they politically disagree with harder to access. That includes, for example, information on same-sex relationships and gender transition, as part of the broader right-wing effort not just to dehumanize trans people, but to try to erase them from public life.</p><p>In that context, a certain degree of overreaction to other initiatives is understandable, but we do need to resist collapsing the context of these different bills and ascribing political motivations unique to the United States to other policies around the world that quite clearly emerge from distinct social and political situations. For example, in my view the new UK rules are overbroad, but they are not motivated by the same socially conservative principles as in the United States, as much as the UK political class has been infected by transphobia. Their measures are much more about trying to address the very real consequences we’ve seen from some people’s engagement with the platforms, with a specific focus on young people.</p><p>That is even more the case when you look at what is happening in Australia. 
Once again, the goal is to <a href="https://theconversation.com/details-on-how-australias-social-media-ban-for-under-16s-will-work-are-finally-becoming-clear-265323?ref=disconnect.blog">address the harms that minors have experienced</a> on online platforms by raising the existing age limit of 13 that most platforms implemented on their own around the world to a mandatory 16, with stricter enforcement mechanisms. The policy is a response to a growing movement of parents and families who have seen their kids harmed by the algorithmic amplification of content that affects their wellbeing, or by direct interactions those platforms facilitated. In some cases, their children have even taken their own lives.</p><p>There is certainly a reactionary element to the campaign, but again, it’s not driven by social conservatism. It’s driven by the obligation to protect young people, in line with a longstanding social expectation that society does just that. And that’s another issue with the discourses spreading online: a lot of the digital rights community explicitly argues that should not happen; that minors should be effectively treated as adults and should have no limits on what they can access without parental knowledge or permission. When you think about it for a moment, it’s a very extreme position out of line with social norms, but informed by a desire to put an unregulated internet before all other concerns.</p><h2 id="social-media-harms-require-action">Social media harms require action</h2><p>In the past, I might have been more hesitant about these efforts to ramp up the enforcement on social media platforms and even to put age gates on the content people can access online. 
But seeing how tech companies have seemingly thrown off any concern for the consequences of their businesses to <a href="https://www.404media.co/the-ai-slop-niche-machine-is-here/?ref=disconnect.blog">cash in on generative AI</a> and <a href="https://www.cnn.com/2025/01/07/tech/meta-censorship-moderation?ref=disconnect.blog">appease the Trump administration</a>, and seeing how chatbots are <a href="https://techwontsave.us/episode/282_chatbots_are_repeating_social_medias_harms_w_nitasha_tiku?ref=disconnect.blog">speedrunning the social media harm cycle</a>, many of my reservations have evaporated. Action must be taken, and in a situation like this, the perfect is the enemy of the good.</p><p>I don’t support the US measures that are effectively the imposition of social conservative norms veiled in the language of protecting kids online. But I am much more open to what is happening in other parts of the world where those motivations are not driving the policy. Personally, I think the Australians are closest to an approach I’d support.</p><p>They’re specifically targeting social media platforms, rather than the wider web as is occurring in the UK, and their enforcement mechanism centers on creating accounts. So, for instance, now that YouTube will be included in the scheme, users under 16 years of age cannot create accounts on the platform — which would otherwise enable collecting data on them and targeting them with algorithmic recommendations — but they can still watch without an account. There are still concerns around the use of things like face scanning to determine age, but in my view, it’s time to experiment and adjust as we go along.</p><p>Even with that said, if I were crafting the policy, I would take <a href="https://techwontsave.us/episode/253_should_australia_ban_teens_from_social_media_w_cam_wilson?ref=disconnect.blog">a very different approach</a>. 
It’s not just minors who are harmed by the way social media platforms are designed today — virtually everybody is, to one degree or another. While I support experimenting with age gates, my preferred approach would focus less on age and more on design; specifically, severe restrictions on algorithmic targeting and amplification, limiting data collection and making it easier for users to prohibit it altogether, and developing strict rules on the design of the platforms themselves — as we know they use techniques inspired by gambling to keep people engaged.</p><p>To be clear, the Australians and the Brits are looking into those measures too — if not already rolling out some along those lines. These are actions we need to take regardless of the politics behind the platforms, but given how Donald Trump and many of these executives are explicitly trying to use their power to stop regulation and taxation of US tech companies, now is the time to be even more aggressive, not to cower in the face of pressure and criticism.</p><h2 id="how-silicon-valley-stops-regulation">How Silicon Valley stops regulation</h2><p>Watching the debate around social media restrictions has been another important moment in recognizing the deception of many digital rights groups and activists, and how often they serve major US tech companies while pretending to do the opposite. This isn’t new. From Canada, I’ve watched this develop domestically: over time, the companies have sent out their own representatives less and less often to argue against these policies. Instead, you have a series of experts who present themselves as independent voices, but just so happen to say things that sound exactly like tech company talking points.</p><p>Honestly, it can be fascinating to see how clever their deceptions are. In some cases, they’ll acknowledge the problem at hand and say we need regulation, but then argue against the actual regulation being proposed as always having some fatal flaw. 
Digital rights groups famously position legislation to make companies like Google and Meta pay some of their profits to news companies as “link taxes” that are supposed to destroy the internet. The Australian, Canadian, and European legislative initiatives have done no such thing.</p><p>Another pernicious one is to argue regulation is simply unworkable, as it’s impossible for smaller companies to properly comply, only cementing the dominance of the big players — even as the major companies are clearly trying to stop the regulations from moving ahead. We’ve even seen this with the new age restrictions, where some commentators argue that because Peter Thiel’s Founders Fund has a stake in an age verification company, the whole initiative must serve dominant tech firms, as if these terrible venture capitalists don’t have their fingers in corporate pies all over the place. Meanwhile, the actual major tech companies are <a href="https://www.theguardian.com/commentisfree/2025/aug/09/uk-online-safety-act-internet-censorship-world-following-suit?ref=disconnect.blog">fighting amongst themselves</a> to ensure they’re not the ones that have to implement age restrictions.</p><p>I remember when the Canadian government proposed new rules on streaming services a few years ago to make them invest in and prominently display more Canadian content, in line with longstanding radio and broadcast regulations. Despite the government being clear this was targeted at streamers, the industry seeded a deceptive narrative that it was going after creators and YouTube channels, riling up a bunch of influencers to oppose the legislation. It was <a href="https://www.cbc.ca/news/politics/podcasters-wont-be-regulated-1.7027836?ref=disconnect.blog">pure deception</a>, aimed at trying to defeat regulations the industry did not want to have to deal with.</p><p>We’re even getting some reporting that confirms the more covert influence operations these companies are engaging in. 
When California proposed a privacy bill that would have affected the Chrome browser, Google <a href="https://calmatters.org/politics/2025/09/google-lobbying/?ref=disconnect.blog">reached out to small business owners</a> to oppose the bill on its behalf, without ever taking a public stance on whether it supported the legislation. It made it look like a much more sympathetic group would be harmed if it went forward. And it’s not the only time it’s used that tactic.</p><h2 id="reassessing-the-tech-industry">Reassessing the tech industry</h2><p>These companies are some of the most powerful in the world. They have a lot of money to throw at trying to make laws they don’t like go away, and they know it sounds a lot better for activists, experts, and other more sympathetic groups to launder their talking points instead of coming out to say them themselves. That’s not to say all opponents are bought off by tech companies — but if they can seed the narrative and get credible voices to spread it, more people will instinctively echo it.</p><p>Time and again, many digital rights arguments have proven to be significantly exaggerated with the aim of defeating regulations on tech companies. The digital rights playbook was created at a time when internet companies were nascent and competing with much more powerful traditional industries. Today, those roles have reversed, but the playbook has largely stayed the same and thus continues to serve some of the most powerful companies in the world. At a time when those companies are flexing their muscles, we need to be more aware of how they use their power.</p><p>So, all in all, do I love the age restrictions? Not really. But at this point, I’m open to measures to restrict the power of these companies, even if there are some drawbacks. 
Social media is a net negative: sure, it allows us to connect, share information, and have some laughs, but it’s also enabling widespread social harm and amplifying increasingly extreme right-wing political positions, which negates its positive aspects. Hate speech is not free speech, and even then, no one’s rights are impeded if they can’t post as much on a social media platform. Yelling “censorship” at every opportunity is only <a href="https://www.ft.com/content/62b1acf5-0eaa-4f4c-b3fa-40dbf563b5d2?ref=disconnect.blog">playing into</a> the extreme right’s deceptive framing of free speech.</p><p>It’s time to rein in these platforms and all the harm they’ve wrought.</p>Artists are losing work, wages, and hope as bosses and clients embrace AI - Blood in the Machinehttps://www.bloodinthemachine.com/p/artists-are-losing-work-wages-and2025-09-16T20:54:05.000Z<p>After the launch of ChatGPT sparked the generative AI boom in Silicon Valley in late 2022, it was mere months before OpenAI turned to selling the software as an automation product for businesses. (It was first called <a href="https://openai.com/index/introducing-chatgpt-enterprise/">Enterprise</a>, then <a href="https://techcrunch.com/2024/01/10/openai-launches-chatgpt-subscription-aimed-at-small-teams/">Team</a>.) And it wasn’t long after that before it became clear that the jobs managers were likeliest to automate successfully weren’t the dull, dirty, and dangerous ones that futurists might have hoped: It was, largely, creative work that companies set their sights on. After all, enterprise clients soon realized that the output of most AI systems was too unreliable and too frequently incorrect to be counted on for jobs that demand accuracy. But creative work was another story. </p><p>As a result, some of the workers who have been most impacted by clients and bosses embracing AI have been in creative fields like art, graphic design, and illustration. 
Since the LLMs trained and sold by Silicon Valley companies have ingested countless illustrations, photos, and works of art (without the artists’ permission), AI products offered by Midjourney, OpenAI, and Anthropic can recreate images and designs tailored to a client’s needs—at rates much cheaper than hiring a human artist. The work will necessarily be unoriginal, and as of now AI-generated art can’t be copyrighted, but in many contexts, a corporate client will deem it passable—especially for its non-public-facing needs. </p><p>This is why you’ll hear artists talk about the “good enough” principle. Creative workers aren’t typically worried that AI systems are so good they’ll be rendered obsolete as artists, or that AI-generated work will be better than theirs, but that clients, managers, and even consumers will deem AI art “good enough” as the companies that produce it push down their wages and corrode their ability to earn a living. (There is a clear parallel <a href="https://www.bloodinthemachine.com/p/one-year-of-blood-in-the-machine">to the Luddites here</a>, who were skilled technicians and clothmakers who weren’t worried about technology surpassing them, but about the way factory owners used it to make cheaper, lower-quality goods that drove down prices.) </p><p>Sadly, this seems to be exactly what’s been happening, at least according to the available anecdata. I’ve received so many stories from artists about declining work offers, disappearing clients, and gigs drying up altogether, that it’s clear a change is afoot—and that many artists, illustrators, and graphic designers have seen their livelihoods impacted for the worse. And it’s not just wages. Corporate AI products are inflicting an assault on visual arts workers’ sense of identity and self-worth, as well as their material stability. 
</p><p>Not just that, but as with translators, the <a href="https://www.bloodinthemachine.com/p/ai-killed-my-job-translators">subject of the last installment of AI Killed My Job</a>, there’s a widespread sense that AI companies are undermining a crucial pillar of what makes us human: our capacity to create and share art. Some of these stories, I will warn you, are very hard to read—to the extent that this is a content warning for descriptions of suicidal ideation—while others are absurd and darkly funny. All, I think, help us better understand how AI is impacting the arts and the visual arts industry. A sincere thanks to everyone who wrote in and shared their stories. </p><p>“I want AI to do my laundry and dishes so that I can do art and writing,” as the SF author Joanna Maciejewska memorably put it, “not for AI to do my art and writing so that I can do my laundry and dishes.” These stories show what happens when it’s the other way around. </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!mvDc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdcd1f30-e8ce-43c8-9a03-02a7b793af03_2080x620.jpeg" width="1456" height="434" alt=""></figure></div><p>A quick note before we proceed: Soliciting, curating, and editing these stories, as well as producing them, is a time-consuming endeavor. I can only do this work thanks to readers who chip in $6 a month, or $60 a year—the cost of a decent cup of coffee, or a coffee table <em>book</em>, respectively. If you find value in it, and you’re able, please consider upgrading to a paid subscription. 
I would love to expand the scope and reach of this work. Many thanks, and onward. </p><p><em>Edited by Mike Pearl. </em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bloodinthemachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p></p><h3><strong>Costume designs have been replaced with AI output that can’t be made by people who actually know how clothes work.</strong></h3><p>I work in the field of constructing costumes for live entertainment: theater, film/TV, ballet/opera, touring performers, etc.</p><p>Budget and scale are all over the map, from low-budget storefront theater in which one person designs and secures costumes for a production, up to a big-budget Broadway spectacular, which can have a dozen people on the design team alone and literally hundreds of makers creating the costumes from designs developed by the design team.</p><p>I’m seeing this happen typically on the low-budget to midrange end—community theater/high school theater, independent film, etc.: Producers and directors eliminating the position of costume designer in favor of AI image generation.</p><p>It comes up often in professional forums in the field: someone will share the AI-generated costume “designs,” and they will be literally impossible to construct for an actual human with materials available in the actual world—gravity-defying materials on pornographically cartoon bodies, etc.</p><p>-Rachel E. Pollock</p><p></p><h3><strong>Illustration work at ad agencies has disappeared</strong></h3><p>I remember reading about the new stage of generative AI engines sometime in late 2022 in the NY Times, and seeing Dall-E and Midjourney's outputs and knowing it would mean trouble. Until then AI was making laughable 'art.' Really bad stuff. 
But all of a sudden the engines had leveled up.</p><p>I have been working in the comics and publishing industry for over 20 years, but the majority of my income usually came from work with advertising agencies. Whenever they needed to present an idea to a client I would come in and help with illustrations, and sometimes storyboards. This was all internal and would never be published, but it was still great to get paid for doing what I love most—drawing. I felt appreciated for my skills and liked working with other people.</p><p>It was in 2023 that, seemingly overnight, all those jobs disappeared. On one of my very last jobs I was asked to make an illustrated version of an AI-generated image; after that, radio silence. I had my suspicions that AI was the culprit, but could not know for sure; there was also a general downturn in the advertising industry at the time.</p><p>Finally I reached out to one of the art directors I had worked with, and he confirmed that the creatives were using AI like crazy: there was no shame in presenting an AI illustration internally, no one would call you out on it, and it's sure as hell cheaper than using an illustrator. I had to deal with a sudden, very scary decrease in income. Meanwhile it felt like AI slop was mocking me from every corner of the internet, and every big company was promoting its new AI assistant. I was just disgusted with all these corporations jumping on the AI bandwagon without thinking of what the outcomes could be. And additionally, there was the insult of knowing that the engines were trained on the work of working illustrators, including mine!</p><p>I used my free time to work on a new graphic novel, and eventually leaned into more comics work, which paid (a lot) less, but at least felt more creatively satisfying. 
The two years following the loss of work were difficult; it definitely felt like the rug was pulled out from under my feet, and I'm still adjusting to the new landscape. Although I feel better about where I am now, I work harder than ever before, for less money. But at least the work will be seen by readers.</p><p>I'm hoping that in the world of comics the public shame of replacing an artist with AI will hold off the use of the technology, but I'm sure that one day it will become a lot more accepted. I feel like we live in an age where technological changes are happening too rapidly, are not in any way reined in by the government, and humans can lose their jobs at the drop of a hat, with no sense of security or help. We are just not built for these fast changes. I'm happy to see people sobering up to the downsides of this technology, and hoping the hype will die down soon.</p><p>-Anonymous</p><p></p><h3><strong>‘Children's book illustrator isn't a job anymore.’ </strong></h3><p>I've been out of work for a while now. I made children's book illustrations, stock art, and took various art commissions.</p><p>Now I have several maxed-out credit cards and use a donation bin for food. I haven't had a steady contract in over a year. Two weeks ago, when a client who had switched to AI found out about this, he gave me $50 out of "a sense of guilt." 
Basically pity for the fact that illustrator, as a job, does not exist anymore.</p><div class="pullquote"><p>It was my birthday recently and I sincerely considered not living anymore.<br>The worst part of all is that the parents who once supported me fully in being an artist sent me an AI-generated picture of a caricature of themselves holding a birthday cake with my name spelled incorrectly.</p></div><p>I feel cheated, like if I could go back in time and tell the younger me in high school that all the practice, all the love, and all the hope from your parents and friends for your future gets you is carpal tunnel and poverty, I could have gone into a better job field. I'd be an electrician or welder.</p><p>I have a resume with skills that are appealing to no one, as slop can be generated for free.</p><p>I sold my colored pencils and markers and illustration tablet on Facebook marketplace for a steal after a previous client who I considered a friend boasted on LinkedIn that AI was the future of cost reduction above an image of a man in a suit who looked like him with six fingers holding a wad of cash.</p><p>I have applied to over one thousand jobs and I stopped keeping track. My disability doesn’t affect making art, but makes me a poor candidate for much else.</p><p>It was my birthday recently and I sincerely considered not living anymore.</p><p>The worst part of all is that the parents who once supported me fully in being an artist sent me an AI-generated picture of a caricature of themselves holding a birthday cake with my name spelled incorrectly. 
My friends all post themselves as cartoons online.</p><p>The person I married had a secret file on their computer labeled "AI pics" they thought I didn't notice.</p><p>I will wither away eating stale food from the garbage while everyone else is complacent with the slop generator doing what I used to put passion into and finely detail.</p><p>I don't think it's going to get better.</p><p>-Anonymous<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ETIV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ETIV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png 424w, https://substackcdn.com/image/fetch/$s_!ETIV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png 848w, https://substackcdn.com/image/fetch/$s_!ETIV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png 1272w, https://substackcdn.com/image/fetch/$s_!ETIV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!ETIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png" width="1456" height="920" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":920,"width":1456,"resizeWidth":null,"bytes":5875621,"alt":"","title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/173288159?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!ETIV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png 424w, https://substackcdn.com/image/fetch/$s_!ETIV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png 848w, https://substackcdn.com/image/fetch/$s_!ETIV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png 1272w, https://substackcdn.com/image/fetch/$s_!ETIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3972aaad-a4b8-494f-9f40-5ef78fea4d81_2048x1294.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">A piece of “photo imaging” art by <a href="https://www.suoakesphotoimaging.com/">Susan Oakes</a>. According to Oakes, Photoshop classes geared toward creating art like this suddenly aren’t in demand.</figcaption></figure></div><h3><strong>I’m a graphic artist. Since AI and Adobe Firefly came along, my teaching and tutoring have dropped dead. </strong></h3><p>I have taught various graphic courses but overwhelmingly Photoshop, the 800 lb. gorilla of the graphics world. I am not a photographer and I do not teach people to take photos, but to manipulate them, also known as Photo Imaging.</p><p>These composites are made by manually placing several images together, and then enhancing them with various digital techniques such as layering, blending, and masking to arrive at a final result. Most people who take my classes don’t necessarily want to do all that I do, but want to know how to correct or otherwise manipulate photos to create their own projects. 
</p><div class="pullquote"><p>I’m turning more to “natural media” (non-digital) art, specifically painting. I am developing a course to teach watercolor painting to adults and I’m quite excited by this prospect.</p></div><p>Since the advent of Artificial Intelligence and Photoshop’s version, Firefly, my teaching and private tutoring have pretty much dropped dead. There is very little incentive for people to learn these techniques when they can conjure up an image by text prompts. It takes virtually no skill to do this besides the ability to read and write. I have played around with A.I. for personal projects, with varying degrees of success. Some of it is amazing, and some of it is laughable. However, there is no escaping the reality that these models were trained on existing artwork already online. It’s essentially plagiarism on steroids. Also known as theft. Not to mention the obscene energy costs involved.</p><p>Had this happened to me 20 years ago I would have been devastated. But at this point in my life (I’m 71) it is not as important as it once was. I’ve had some success both in client work and also creating digital art pieces for which I’ve won accolades. I’ve found satisfaction in teaching but now I’m turning more to “natural media” (non-digital) art, specifically painting. I am developing a course to teach watercolor painting to adults and I’m quite excited by this prospect. I have been married over 50 years and we have never relied on my income to survive.</p><p>-Susan Oakes</p><p></p><h3><strong>My gig ended with my boss responding to my AI concerns with ‘There's always work out there.’ I haven’t worked since.</strong></h3><p>I worked in the video game industry, as a 3D artist.</p><p>In early 2023, when AI image generation was hitting the mainstream, I was working as a temporary contractor at a large games and technology company.</p><div class="pullquote"><p>I expressed concerns about being able to find further work. 
He handwaved me, saying "There's always work out there."</p><p>I have not been able to find any work since then.</p></div><p>Our boss was very enthusiastic about AI image generation, and he showed us how he was using AI to generate some of the textures for the game. I realized that if AI image generation didn't exist, then the company would have needed to hire an extra artist to do that work. I could have recommended a dozen colleagues who were looking for work at the time. It felt like AI was directly taking money out of artists' pockets, and allowing the companies to keep it all.</p><p>When my contract with that company ended, I did an exit interview with my boss. I expressed concerns about being able to find further work. He handwaved me, saying "There's always work out there." </p><p>I have not been able to find any work since then.</p><p>There were several factors behind the many layoffs in games and technology in 2023-2024, but I know that AI played a role.</p><p>I miss working as a 3D artist.</p><p>-Anonymous</p><p></p><h3><strong>Those animated reenactments and infographics you see on TV history documentaries are made by people like me. Or at least they </strong><em><strong>were</strong></em><strong>.</strong></h3><p>I am a freelance 3D/2D generalist. Over the past decade plus, I've had a recurring gig of being hired as a contractor to help create supplemental graphics and B-roll for various documentary-style programs. Everything from infographics about military tanks to 3D animations of prehistoric creatures to recreations of scenes involving historical figures.</p><p>If you've ever watched any History show, you know the format: footage of the host and experts speaking, then sometimes video clips or photographs, and then typically animated content that illustrates the points the speaker is making. That final category was, until recently, made by people like me. 
That market has completely dried up.</p><div class="pullquote"><p>My job loss is merely a side-effect of AI killing off studios higher up the chain that each represent dozens more people being put out of work.</p></div><p>A couple of years ago, as soon as demos of AI-generated video began to appear, there were almost immediate rumblings that the specific business of creating documentary-style graphics would be disrupted. The logic being that, while the public might reject a feature-length AI-slop theatrical film, the at-home audience for shows about military history or ghosts or aliens might be less-discerning. That theory is now being tested. History Channel is currently airing a season of "Life After People" that heavily features AI-generated visuals, and I'm sure there are more shows in the pipeline being made the same way. We'll see how audiences respond.</p><p>As much as I would like to say viewers will reject the AI style and demand a return to human-made art, I'm not convinced it will happen. Even if it did, it might soon be too late to turn back. I know that there are studios with expert producers, writers, and showrunners with decades of experience in this exact genre who are closing their doors. That institutional knowledge will be gone.</p><p>That's probably the bigger point: this trend is not only affecting artists like me, but also the types of companies I contract for. 
Obviously I lament my own loss of those stable gigs, but my job loss is merely a side effect of AI killing off studios higher up the chain that each represent dozens more people being put out of work.</p><p>-Anonymous</p><p></p><h3>‘There's a part of me that will never forgive the tech industry for what they've taken from me and what they've chosen to do with it.’ </h3><p>I work as a freelance illustrator (focusing on comics and graphic novels but also doing book covers or whatever else might come my way) and as a "day job" I do pre-press graphic design work for a screen printing and embroidery company in Seattle. Because of our location, we handle large orders (sometimes 10k shirts at a time) for corporate clients—including some of the biggest companies in the world (Microsoft, Amazon, MLB, NHL, etc.)—and my job is to create client proofs where I mock up the art on the garment and call out PMS colors as applicable. I also do the color separations to prepare the art file for screen printing. </p><div class="pullquote"><p>[H]e instructed me to start plugging in the names of living artists to generate entire artworks in their style and the first time I did it I realized how horrifyingly wrong this actually was.</p></div><p>When AI first came on the scene, I was approached by a potential client that was self-funding a mobile game and wanted to commission me to create in-game art. He asked what my standard rate was and then offered to double it if I allowed him to pay in Ethereum (which I knew nothing about at that point). I immediately had some concerns, but I'm a struggling artist so I took the gig anyway and crossed my fingers. He then introduced me to generative AI and encouraged me to use it to create game content quickly. 
At first I was interested in the possibility of using it to reduce my workload by maybe generating simple elements I get tired of painting—like grasses or leaves—but he instructed me to start plugging in the names of living artists to generate entire artworks in their style and the first time I did it I realized how horrifyingly wrong this actually was. After that I resisted and tried to use my own art. He grew frustrated with me pretty quickly and I left the company after less than 2 weeks (I was never paid; he owes/owed me about $1300).</p><p>Since then, I have been very outspoken against generative AI and haven't touched it again. I was the moderator for a very large group of children's book illustrators (250k members) and I helped institute and enforce a strict AI ban within the group. While this was mostly a positive thing, there were quite a few occasions where legitimate artists were targeted for harassment over accusations of AI use. Some of them were even driven out of the group, in spite of our interventions and assurances that the person was not using AI. </p><p>In my own freelancing work, I have now been accused of using AI as well. I like to do fan art from Anne McCaffrey's Dragonriders of Pern [series], and sometimes when I'm looking for work I will post my art and past commissions in fan groups to see if anyone wants to hire me to draw their original characters based on the Pern books. Almost invariably now someone will ask if my art is AI generated. It used to bother me more than it does now; I'm growing a little numb to it.</p><p>My coworker at my screen printing job (in spite of knowing my negative feelings on the matter because I had cried after I found several dozen pieces of my art in the LAION dataset) chose to plug my art into an AI generator and asked for it to imitate my style—which it did poorly, might I add. It felt extremely violating. 
</p><p>Lastly, in my role as a graphic designer, we now often have to deal with clients sending in art files for screen printing that were generated with AI. It's a pain in the ass because these files are often low-resolution and the weird smudgy edges in most AI images don't make for easy color separations. When a human graphic designer sits down to create a design, they typically leave layers in place that can be individually manipulated, and that makes my job much easier. AI flattens everything, so I have to manually separate out design elements if I want to independently adjust anything. The text is still frequently garbled or unreadable. The fonts don't actually exist so they can't easily be matched. These clients are also almost invariably cheap, and get upset when they're told that it's going to be a $75-per-hour art charge to fix the image so it's suitable for screening. </p><p>Also, and here I don't have any data, just my personal anecdotal experience, but it feels like some of these companies have laid off so much of their in-house graphic design staff that they are increasingly reliant on us as a print service to fix up stuff they'd formerly done for themselves. I get simple graphic design requests every day from people who should have had the resources to handle this themselves, but now they're expecting me to pick up the slack for the employees they've let go, for the sake of our working relationship and keeping them on as clients. It's become such a drag on our small business that my boss is considering extra fees. (Which, considering the slim margins in the garment industry, is really saying something!) I am convinced Microsoft does not have any in-house graphic designers left at this point. Okay, I joke, but man, it's bleak. 
There's more stress and fear, greater workloads cleaning up badly done AI-generated images on behalf of people looking for a quick fix, instead of getting to do my own creative stuff. And it felt deeply and profoundly cruel to have my life's work trained on without my consent, and then put to use creating images like deepfakes or child sexual abuse materials. That one was really hard for me as a mom. I'd rather cut my own heart out than contribute to something like that. </p><p>There's a part of me that will never forgive the tech industry for what they've taken from me and what they've chosen to do with it. In the early days, as the dawning horror set in, I cried about this almost every day. I wondered if I should quit making art. I contemplated suicide. I did nothing to these people, but every day I have to see them gleefully cheer online about the anticipated death of my chosen profession. I had no idea we artists were so hated—I still don't know why. What did my silly little cat drawings do to earn so much contempt? That part is probably one of the hardest consequences of AI to come to terms with. It didn't just try to take my job (or succeed in making my job worse); it exposed a whole lot of people who hate me and everything I am for reasons I can't fathom. They want to exploit me and see me eradicated at the same time. 
</p><p>-Melissa</p><h3><strong>The gig work exchange site I use is full of AI generated artwork I’m meant to fix — along with AI-generated job listings that don’t exist.</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zsXE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zsXE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zsXE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zsXE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zsXE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zsXE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg" width="1456" height="819" 
data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/d3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":819,"width":1456,"resizeWidth":null,"bytes":9683573,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/173288159?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zsXE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zsXE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zsXE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zsXE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d3d129-8d15-4929-aa4c-b42fb96ee746_8252x4642.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Painting by <a href="https://www.roxanelapa.com/">Roxane Lapa</a></figcaption></figure></div><p>I’m a South African illustrator and designer with 20 years of experience, and in my industry I saw things going pear-shaped even before gen AI hit the scene. </p><p>One of the main places I get jobs is Upwork (one of those gig-type work platforms), and I’ve noticed a couple of things: a decrease in job offerings for the illustration I typically do (like book covers).</p><p>I’ve also noticed a lot more job offers to “fix” an AI-generated cover. These authors offer less money because of a “the work is pretty much done” attitude.</p><p>Since Upwork added an AI function to help potential employers write their briefs, there’s been a surge in what I’m pretty sure are fake jobs. The job listings all sound very samey, obviously because of the format the AI uses, and the employers have no history on the platform of ever having hired anyone and don’t have their phone or bank linked. So I think what might be happening is that some evil person/persons are creating fake accounts and posting fake jobs so that their competitors waste credits applying for these jobs.</p><p>-Roxane Lapa</p><p></p><h3><strong>I used to make erotic furry fan art for a fee. Now people just use AI. 
</strong></h3><p>I got my start on DeviantArt, moved on to FurAffinity and various other websites. I used to take commissions in the furry fandom drawing <a href="https://jisho.org/word/51868d8cd5dda7b2c600559e">futa</a> furries with big fat tits and dicks. In the past year or two my commissions have all but dried up; in the time it can take me to do the lineart for an anthropomorphic quokka's foreskin, someone can just go onto one of a dozen websites and knock something of tolerable quality out in no time at all.</p><p>AI has ruined a once sacred artform.</p><p>-Anonymous</p><p></p><h3><strong>My AI-loving boss makes my team of artists use AI, even though I’ve successfully demonstrated that it doesn’t help</strong></h3><p>I am the creative team manager for an e-commerce based company. I manage the projects of 2 videographers, 1 CG artist and 3 graphic designers (including myself).<br>As AI has been getting more and more advanced, our boss (one of the owners) keeps pushing us to use AI to make our images stand out amongst competitors. <br><br>We have a limited budget, so filming or photographing our products in real environments is difficult. And photoshopping them into stock imagery also takes time. Apparently a 1-hour turnaround time per image is not quick enough. Our boss has been going to conferences where he hears and sees nothing but praise for AI-created images. How quick it is and how "good" the images look.<br><br>So of course he's been pushing us to use this technology. I did tell him that it's going to be a learning curve and to be patient. From Midjourney, to the latest update of ChatGPT, and to Adobe's Firefly. We've been cranking out these partial AI images.</p><p>The funny part is, A LOT of it still has to be photoshopped together. AI is still not smart enough (yet) to produce accurate images. 
The products we sell are very particular and even if you feed the AI images of said product, it never gets it 100% right.<br><br>Our boss didn't believe us so he himself tried it and failed miserably. Despite that, he still reminds us that our jobs will be obsolete and that we have to adapt.<br><br>Ever since we started using AI to improve our images, the turnaround time for listing images remains the same. Though I feel like our boss is waiting for the day he can fire my team and replace us with AI.</p><p>-Anonymous</p><p></p><h3><strong>In 2D animation backgrounds, AI is hitting freelancers hard. But even for someone steadily employed like me it’s causing workplace headaches.</strong></h3><p>As an artist, I thought I was going crazy when it seemed everyone was okay (even enthusiastic) with our work being scraped left and right to build image-generation models. I'm a mom and have a mortgage to pay, so the existential threat to my livelihood caused a lot of sleepless nights, to say the least. <br><br>I have been working in 2D animation for the last 10 years. I'm a background artist, which is unfortunately one of the departments most likely to be hit by gen AI replacement in the animation production pipeline. Of course, there's no reality where gen AI could actually do my job properly, as it requires a ton of attention to detail. Things need to be drawn at the correct scale across hundreds of scenes. In many cases scenes directly hook up to each other, so details need to stay consistent—not to mention be layered correctly. But these are things that an exec typically glosses over in the name of productivity gains. <a href="https://www.pcmag.com/news/netflix-taps-ai-to-generate-anime-backgrounds-rather-than-hire-humans">Plus, there's already a precedent in which AI was used to produce backgrounds for a Netflix anime</a>.</p><p>Thankfully, I'm very lucky to work at an artist-run studio that currently appears to avoid the use of AI, so I continue to be employed. 
My peers who were freelance illustrators or concept artists are not so lucky. I'd say about half of the people I've worked alongside this last decade have left the field (not all because of AI, granted, but the state of the North American animation/games industry is a whole thing right now and AI is not helping). <br><br>The production I am on currently leverages a lot of stock photos from Adobe Stock. We have a rule in place not to use AI, but some images slip through the cracks. These have to be removed from the finished product because of, I assume, the inability to copyright AI-generated images. An incident happened recently where an AI image almost made it to the very end of the pipeline undetected and wound up disrupting several departments who are on tight submission deadlines. We aren't typically paid overtime unless approved by the studio beforehand, so it's likely that unpaid labor (or ghost hours, where you don't tell anyone you worked overtime) went into fixing this mess AI created.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kwkm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kwkm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kwkm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!kwkm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kwkm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kwkm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg" width="1432" height="627" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":627,"width":1432,"resizeWidth":null,"bytes":87230,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/173288159?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kwkm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kwkm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!kwkm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kwkm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7eae9086-f814-483e-85a6-4acd7fdc29f4_1432x627.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>I watched — and sounded the alarm — as AI fever took hold inside Adobe. Then I was let go.</strong></h3><p>I was running research on [Adobe’s] stock marketplace, trying to understand how customers were adopting the new Gen A.I. tools like Midjourney, Stable Diffusion and DALL-E. Internally, Adobe was launching its own text-to-image A.I. generator, called Firefly, but it hadn’t been announced. 
I was on the betas for Firefly and Generative Fill (GenFill) for Photoshop and ran workshops with designers on the Firefly team. I tested the new tooling internally and gave feedback in Adobe Slack channels and to their ethics committee.</p><p>A.I.-generated content started to flood the Adobe Stock website as stock contributors quickly switched from adding and uploading photos to prompting and creating assets with Midjourney, Stable Diffusion, and Firefly and then selling them back on Adobe Stock. Users wanted a better search experience, but it was never explicitly clear if they wanted more A.I. slop, although Reddit forums indicated otherwise. </p><p>During the GenFill beta, I raised concerns about model bias after prompting the model to edit an image of then-president Joe Biden across racial categories and having the model return a Black man with cornrows—without taking into account relevant and contextual surrounding information in the image. The ethics committee pointed me to a boilerplate Word doc with their guiding principles and we had a short Microsoft Teams call, but there wasn’t any real concern from their end. After raising additional red flags inside an Adobe Slack channel about Photoshop’s GenFill beta possibly being used to create misinformation at scale, the main response I got was a blasé “Photoshop, making misinformation since 1990…” Long story long, the people internally working on these products really don’t care. In another company all-hands meeting about text-to-vector capabilities, fellow workers shared thoughts and concerns in the Teams chat about the impact of AI tooling on the livelihoods of artists, illustrators and other designers, and no one cared. 
In another meeting, when asked about artists’ rights, a manager quipped, “In the research from AdobeMAX (Adobe’s annual conference) someone said they were willing to sell their ‘artistic style’ for around the price of a car,” while gathering data around AI-style mimicry and trust.</p><p>The Firefly model still struggled to render hands and certain objects, and a company-wide Adobe email encouraged all employees to sign up for an upcoming green-screen photoshoot, holding things like trumpets, accordions, and rubber chickens, and making awkward expressions, like being surprised with “mouth open” or squinting while putting your finger in your ear, in exchange for a free lunch. </p><p>Sometime in 2023, Adobe paid photographers to document crowds of people during a concert in Seattle and had attendees sign waivers releasing their likeness, since Firefly had trouble rendering and distinguishing people in crowds. Shortly thereafter, I was told my staff role was being eliminated. They didn’t let me switch teams. They gave me six weeks to find a new job inside the company and six weeks of severance pay. During the six weeks of “offboarding,” as they called it, I applied to dozens of internal jobs at Frame.io and other teams within the company, like Acrobat, and it never went anywhere. </p><p>-Anonymous</p><h3><strong>I’m a recent design graduate. AI might not have killed my job, but it’s not what I signed up for, and it’s hard to find work.</strong></h3><p>I just graduated in June from a two-year intensive vocational program in graphic design. It's probably still too early in my job search for me to say that AI "killed my job," but my classmates and I, as well as students from the class just ahead of us, are certainly struggling to find work.</p><p>Why I wanted to reach out, though, is to share my experience as a student studying design in the midst of the peak years of this AI hype. 
Basically, our entire second-year curriculum in one of our five classes, which was previously focused on UX, UI, web design, etc., transitioned to being largely generative-AI-focused. I don't think I'm overstating matters to say that no one in my class was happy about this; none of us decided to go (back) to school for design to learn Midjourney or Runway.</p><div class="pullquote"><p>Is it always going to be like this? I love learning, but am I always going to feel like I need to acquire skills in at least five new expensive SaaS platforms to survive?</p></div><p>[One instructor] has lived through and had his career significantly impacted by past shifts in the industry (he was a full-time web designer when platforms like Squarespace came along), so my charitable read is that he wants to prepare us for a lifetime of learning new tools to stay employable. I think the faculty in our program are also hearing from alumni and their technical advisory board that AI tools are becoming more important for local companies (we live in a pretty tech-centric city). So while he's sympathetic, I guess, he's still choosing to go all-in on AI, and to push his students to do the same.</p><p>In our other classes, AI use was varied. Some of our instructors allowed it; a couple still forbade it completely.</p><p>I came out of school feeling like... I guess I'm grateful to know what's out there, for the sake of my own employability in this really awful job market. I really feel for designers whose school days are a little further behind them. It's not just AI that makes me say this—in fact, even if things like image and video generators find a more permanent place in graphic arts careers, they're changing fast enough that whatever we learned in school is likely to be outdated pretty quickly. 
If all the angry posts I see on LinkedIn from more senior designers are any indication, there's been a hiring trend for a while of companies looking for a designer who also does video, animation, UX/UI, and many other things that aren't really graphic design. Our program taught us a lot of those skills, so maybe, if the current economic circumstances improve, our class might be okay. But it makes me worry a lot for our future. Is it always going to be like this? I love learning, but am I always going to feel like I need to acquire skills in at least five new expensive SaaS platforms to survive?</p><p>Even our AI-booster instructor told us over and over again that computers will never replace the need for creative design thinking and empathy. That, he said, is what we should lean into to distinguish ourselves and ensure our employability. But there are only so many positions out there for art directors, and not everyone who studies design wants to do that. Production design gets looked down on as "menial" by some, I think, but it used to be the pipeline into more senior design positions, and if that goes away, how do new designers even get into the field? 
And what about people who have worked in production their whole lives?</p><p>-Anonymous</p><div class="captioned-image-container"><figure><figcaption class="image-caption">Another piece of photo imaging art by Susan Oakes.</figcaption></figure></div><h3><strong>I struggle to fix all the AI’s problems while my AI-loving clients stand on the sidelines wondering what the issue is</strong></h3><p>I am a freelancer of a few trades, so it can be hard to measure lost work, because I can also wonder if I'm slow because times are slow, or a typical cycle, or AI.</p><p>I can tell you this: ALL my "lighter" graphic design work—making social media or print ad graphics, designing logos—has totally dried up. I was actually more worried about this when Canva came out, but even then they wanted my eye and my touch on things, so having the tools to do it themselves didn't really deter people from hiring me. I did this kind of work for some local small businesses, organizations, event venues. This was an abrupt change within the past couple years.</p><div class="pullquote"><p>They are usually thinking they will pay for a couple hours of my time, when what they are asking for could require maybe 100 hours. 
The "mistakes" […] are in the bones of the art.</p></div><p>My illustration work is mostly picture books, and while my work has remained steady (I do 1-3 a year), the number of inquiries I've gotten from new authors has dropped to nearly zero, when I used to field a few a month and usually book myself out for the next year. Also, through Upwork and the other avenues where I find work, I've had quite a few people (presumably authors) reach out to me to "fix" their AI-generated art. It does depend on the task at hand, but it's a 90% certainty that fixing the art will take nearly as long as just doing it myself. Of course they aren't coming to me with AI-generated work because they intended to hire a full-blown illustrator. They are usually thinking they will pay for a couple hours of my time, when what they are asking for could require maybe 100 hours. </p><p>The "mistakes" AI makes on art for something like a picture book, which requires consistency of a lot of different elements across at minimum 16 or so pages, are so deep that they are in the bones of the art. It's not airbrushing out a sixth finger; it's making the faux colored pencil look the same across pages, making all the items in a cluttered room consistent from different angles, or making the different characters look like they came from the same universe. AI is bad at that stuff, and the problems are not surface level. A lot of the time, potential clients don't know why the art isn't working; it's because of these all-encompassing characteristics.</p><p>-Melissa E. 
Vandiver</p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The title for this story comes from the heading of the email this author submitted.</p></div></div>The killing of Charlie Kirk and the end of the "global town square" - Blood in the Machinehttps://www.bloodinthemachine.com/p/the-killing-of-charlie-kirk-and-the2025-09-12T18:54:56.000Z<p>Twitter never was a “<a href="https://www.washingtonpost.com/technology/2023/07/07/twitter-dead-musk-tiktok-public-square/">global</a> <a href="https://www.theatlantic.com/international/archive/2015/08/twitter-global-social-media/402415/">town square</a>,” as much as pundits and executives liked the metaphor. Elon Musk liked it enough that, after buying the site and rebranding it X, he had the official account <a href="https://x.com/X/status/1730309839929110846?lang=en">reiterate the idea</a>. But “town square”? Not really. Twitter was a large website that, by virtue of its early-mover advantage, its success in the network effect sweepstakes, and its savvy public relations campaigning, captured enough of the world’s commentariat to resemble a flattened version of one, for a time. 
But it was always an ad-supported platform where opaque algorithms determined who saw what, in what chronology, and who benefited most from the resulting engagement.</p><p>At Twitter’s peak, it really did have legacy media outlets and citizen journalists, conservatives and liberals, celebrities and heads of state, shitposters and academics, and so on, sharing the same platform and feedspace. This could give us, the users, the impression that we were “participating” in a world event, or at least the processing of one, by publishing our character-limited takes on the matter as it was unfolding. It was a prospect alluring enough to draw dedicated users, like me, and perhaps you, to the well, time and again. It’d be hard to tally how many News Events I engaged with as a rubber-necker, a journalist, or a poster, by refreshing Twitter futilely and endlessly. I’m very obviously not alone here. A <a href="https://www.nytimes.com/2023/04/18/magazine/twitter-dying.html">lot of good words</a> have been written and <a href="https://www.ucl.ac.uk/social-historical-sciences/anthropology/research/why-we-post">extensive studies</a> have been conducted in service of trying to discern what that practice even <em>was</em>.</p><p>But I hadn’t even fully realized how much I’d mostly stopped processing news in this way—shoulders hunched and tense, squinting into the feed, agitated, feeling a compulsion to ‘weigh in’ despite being acutely aware that we are all being prompted by a website’s UX design to feel that precise compulsion, and wanting the shares and validation anyway—until the gruesome assassination of the right-wing provocateur Charlie Kirk brought me, with millions of others, right back into its maw.</p><p>It’s not that there hadn’t been other notable and wide-ranging atrocities recently, obviously, that I and millions of others have watched slack-jawed in anger through an app. But a truly and awfully ideal Social Media Event demands not just that you feel outrage, or even respond to others’ outrage, but that you succumb to that compulsion to join in, to simulate a kind of participation in history. It demands that you feel a blind urge to “do something” and transmute it onto the screen, to make your statement. (I saw a lot of users commenting on how it was as if people were issuing their own press releases, which was apt, and it’s always been a bit like that.) To correct the big accounts that are obviously getting it wrong and “call out” the ones espousing hate, and so on. </p><p>The killing of a right-wing political activist known for trolling liberals, as well as for his enormous presence on the very sites his death was experienced through, more than fits the bill. It’s well-known at this point that platforms like X reward shocking graphic video, inflammatory speech, and political attacks with virality; the shooting of Kirk had all of the above packed into its initial singularity. It became the first time in months, maybe even years, that I lost the day to scanning and posting on social media. It might well be one of the last. 
</p><p>To me, anyway, it was a clarifying event with regard to the current State of Social Media, three years after Musk’s takeover of Twitter, its remaking as X, and its subsequent balkanization into various platform fiefs. It dispelled some curiously persistent delusions in the process, and zealously introduced the new elements that seem to me to be poised to limit the further effective simulation of a user’s participation in history. I came away with a few lingering thoughts and conclusions, which I’ll drill into below.</p><ol><li><p>The conceptualization of Twitter or X as a “global town square” can be dismantled for good</p></li><li><p>“AI enhancement” is the new, anti-social version of the crowd-sourced social media manhunt</p></li><li><p>BlueSky, as a left-liberal coded X alternative, has created a useful new kind of ‘other’ for online political projects</p></li></ol><p>Let’s get into it. </p><h2>The “democratic” “town square” that never was, is dead</h2><p>It was always, in hindsight, an enormously dubious proposition that Silicon Valley social media platforms would help foster democracy in any serious or sustained manner, despite the fleeting example of the Arab Spring tech companies wore like a badge for years afterwards. Autocrats soon learned that controlling social media was easy enough; you can <a href="https://www.theguardian.com/world/2023/apr/05/twitter-accused-of-censorship-in-india-as-it-blocks-modi-critics-elon-musk">always play the refs</a>, and barring that, <a href="https://time.com/32864/turkey-bans-twitter/">pull the plug</a>. 
And if you own the thing, well, you can do a lot more than that.</p><p>Elon Musk’s X has become a case study in how a social media network with tens of millions of users can be remade in the image of the man behind the control board, by removing content moderation, restoring users banned for hate speech, introducing pay-to-play incentives, and routinely signaling, by personal example, what kind of content the platform is for. </p><p>Yesterday, at 12:27, Musk tweeted “The Left is the party of murder,” before there was any evidence at all about the killer’s identity, or regarding his motives or ideological leanings, or that “The Left” is in fact a political party. Nonetheless, it helped pave the way for a stream of vitriol and calls to violence from some of X’s biggest accounts. </p><div class="captioned-image-container"><figure><figcaption class="image-caption">Screenshot gallery from a viral Medias Touch tweet.</figcaption></figure></div><p>In an essay that now seems quaint despite being published just weeks ago, the editor of a new liberal magazine, The Argument, inaugurated the publication with <a href="https://www.theargumentmag.com/p/we-have-to-stay-at-the-nazi-bar">a call for its new readers to stay on X</a>, for the sake of debate: “Twitter is — without question — the most influential public square we have… Those who leave Twitter are sacrificing their ability to advocate for the change they seek.” Scanning the posts above, from some of the largest accounts on the 
platform, as well as from its owner, I hope it’s obvious enough that the only reason these folks are taking to any kind of public square is to assemble a firing squad. </p><p>On a more granular level, <a href="https://www.rollingstone.com/culture/culture-news/elon-musk-engineers-twitter-engagement-1234680113/">multiple</a> <a href="https://www.theguardian.com/technology/2023/feb/16/twitter-data-appears-to-support-claims-new-algorithm-inflated-reach-of-elon-musks-tweets-australian-researcher-says">reports</a> have <a href="https://www.platformer.news/yes-elon-musk-created-a-special-system/">confirmed</a> that Musk instructed X’s engineers to boost how far his own posts travel, and who knows how many friendly accounts he’s extended the favor to, or how else he’s tinkered with the algorithm, in ways less obvious than <a href="https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content">giving rise to MechaHitler</a>. Meanwhile, users, typically his fans, who pay for blue checkmarks get their posts elevated by the algorithm in replies, effectively stamping out any hope for organic debate. And the drumbeat of anti-left, anti-Democrat, anti-woke, and anti-migrant posts from the platform’s top account, Musk’s, has indelibly created a culture that should disabuse anyone of the notion that this entire configuration is something to be <em>argued</em> against. X has become a vehicle for power, in other words, not persuasion. And the anti-MechaHitler side doesn’t have any.</p><p>Users, myself included, who nonetheless felt compelled, watching the rage being fomented on the platform after Kirk’s murder, to post anything attempting to counter the gathering “this is a war and the Democrats/left must be extinguished” narratives, were predictably ignored, mocked, and steamrolled, or worse. 
Right-wing activists are now taking the social posts of people they believe to be “celebrating” Kirk’s death—many are just posting the activists’ own past quotes—entering them into a database, and <a href="https://www.wired.com/story/right-wing-activists-are-targeting-people-for-allegedly-celebrating-charlie-kirks-death/">posting their personal details online</a>.</p><p>There was no meaningful debate, besides perhaps between fellow-traveler liberals, and certainly no detectable impulse towards democracy. Whether or not it’s <em>ethical</em> to stay on X is another question, but the aftermath of the Kirk killing shows us why we’d do well to dismantle our model of X as a place where debates are had, needles are moved, and political progress is possible. </p><h2>AI slop is further degrading information quality and giving rise to antisocial crowd-sourced manhunts</h2><p>In the earlier days of social media, after a tragedy, users would take to the platforms to scour the footage and photos of the event for clues. Most famously, in the case of the Boston Marathon bomber, a subreddit that dubbed itself ‘Find Boston Bombers’ crowd-sourced the investigation to amateur sleuths at home. It wound up declaring a few innocent people suspects, spurring the media to show up on at least one poor bystander’s front lawn. The “suspects” were harassed online and otherwise made miserable. Ultimately Reddit <a href="https://www.bbc.com/news/technology-22263020">was forced to apologize</a>.</p><p>The intent may have been noble, or not, but either way, it was worse than useless. It impeded the real investigation and ruined some real people’s lives for a while. 
It was a function of social media that we had to learn to guard against, to keep amateur information of dubious provenance from entering the chat. Now, of course, there’s a brand new vehicle for information degradation proliferating on the platforms.</p><p>As <a href="https://futurism.com/elon-musk-grok-charlie-kirk-misinformation">Futurism reported</a>, AI chatbot products, especially Grok, were sharing false information about Kirk’s killing:</p><blockquote><p>When one user asked, for instance, if Kirk <a href="https://x.com/CoolJdjdjd28961/status/1965859329183224268">could have survived</a> the gunshot wound, <a href="https://x.com/grok/status/1965859625431134508">Grok responded</a> in a cheery tone that the Turning Point USA founder was fine.</p><p>"Charlie Kirk takes the roast in stride with a laugh — he's faced tougher crowds," the bot wrote. "Yes, he survives this one easily."</p><p>When <a href="https://x.com/HotTalkJayhawk/status/1965863030409015724">another user countered</a> that Kirk had been "shot through the neck" and asked Grok "wtf" it was talking about, the chatbot doubled down.</p><p>"It's a meme video with edited effects to look like a dramatic 'shot' — not a real event," <a href="https://x.com/grok/status/1965863232478077127">Grok retorted</a>. "Charlie Kirk is fine; he handles roasts like a pro."</p></blockquote><p>Then, on Thursday, when the FBI released images of the suspected shooter, social media users took to the platform not to pool clues, but to ‘enhance’ the image using AI. 
I’ve assembled some of the examples below; there were also video renderings that depicted the shooter walking up the stairs.</p><div class="image-gallery-embed"></div><p>This is, of course, once again, worse than useless. Some of these users may, like most of the Boston marathon bomb sleuths, just be trying to help, but they’re assuming that ChatGPT is going to work like the “AI” they see on TV procedurals, and “enhance” or “clean up” a photo, when it is of course assembling a new image from scratch, based on the pixels put into its system. 
(Gizmodo’s Matt Novak has a good and <a href="https://gizmodo.com/ai-zoom-enhance-does-not-work-2000651736">more thorough explainer </a>of why this practice is so absurd.) </p><p>An AI image generation system, cannot, for instance, give us any actual information about what the suspect looks like without sunglasses on.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PaHi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PaHi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png 424w, https://substackcdn.com/image/fetch/$s_!PaHi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png 848w, https://substackcdn.com/image/fetch/$s_!PaHi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png 1272w, https://substackcdn.com/image/fetch/$s_!PaHi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PaHi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png" width="1176" height="1054" 
data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":1054,"width":1176,"resizeWidth":null,"bytes":791800,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/173312387?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PaHi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png 424w, https://substackcdn.com/image/fetch/$s_!PaHi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png 848w, https://substackcdn.com/image/fetch/$s_!PaHi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png 1272w, https://substackcdn.com/image/fetch/$s_!PaHi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F250b7c98-76d0-4d87-a841-7fa126827175_1176x1054.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 
8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a></figure></div><p>Now, this was somewhat fringe stuff; there were big accounts participating, but the practice was also often shouted down. Still, there were at least a few cases where users were taking a screenshot from one of the AI image generators and using it to draw conclusions, and you can see how in the future this all might become more problematic. And I think it’s worth noting that the previously bad practice of working socially to find and compile evidence is giving way to the new bad practice of generating your <em>own</em> evidence, with a new tech product at hand. </p><p>The combined effect, and the omnipresence of AI on the platforms, leads users to <em>expect</em> a breakdown in information quality—to the point that Trump’s address on Kirk’s death, which was recorded and uploaded straight to social media, <a href="https://www.yahoo.com/news/articles/trump-video-charlie-kirk-being-170000273.html">was widely criticized</a> for potentially being AI-generated. In fact, it probably was either just hastily cut together or employed an AI editing tool. But the Trump admin clearly loves AI and making AI-generated media; it would ultimately be unsurprising if it used ChatGPT to shit out a video statement. 
The White House X feed itself, after all, is another artifact highlighting the decay, and increasing anti-sociality, of social media. </p><h2>The balkanization of social media and the vilification of BlueSky are complete</h2><p>Shortly after Kirk’s killing, Tim Urban, a blogger in Musk’s orbit, wrote <a href="https://x.com/waitbutwhy/status/1965870547604222392">that</a> “Every post on Bluesky is celebrating the assassination. Such unbelievably sick people.” Musk quoted the post and <a href="https://x.com/elonmusk/status/1965973587812380716">insisted</a> “they are celebrating cold-blooded murder.” The evidence supplied was a few tiny accounts and dumb posts with one to zero likes apiece. </p><p>Another prominent conservative commentator replied to AOC’s call for nonviolence by saying, “Your followers are celebrating Charlie Kirk's assassination all over Bluesky. Hundreds of thousands of bloodthirsty Democrats, delighted by the political violence that you've incited.” The Atlantic staff writer Thomas Chatterton Williams <a href="https://x.com/thomaschattwill/status/1965878545454084546">called</a> this purported celebration of violence “unconscionable.”</p><p>Of course, it wasn’t really happening. Not on any scale that was materially different from what was taking place on X or elsewhere, anyway. I spent a considerable amount of the week on BlueSky, too, watching the trending topics, searching keywords, doomscrolling, etc. (I also have dummy accounts on both platforms not algorithmically tailored to my typical browsing habits.) I can say with confidence that the reaction was similar on both platforms—the vast majority of posts ranged from ‘violence is never the answer’ to ‘nothing good will come from this’ to highlighting pointed quotes of Kirk’s about gun violence. You could find a few on both platforms along the lines of “he deserved it,” but they were the obvious and clear minority. </p><p>It didn’t matter. 
<a href="https://maxread.substack.com/p/why-are-pundits-obsessed-with-bluesky">To many,</a> “BlueSky” has become an ideological construct of its own, the place where “the intolerant left” has allegedly gone to live in its bubble. (This assumption seems flawed to me, as, based on a purely anecdotal taxonomy, it’s mostly progressives and left-liberals on BlueSky, while a lot of the more traditionally Marxist left has stayed on X, though there is certainly plenty of overlap.)</p><p>That construct is now being used, in part, to justify the project outlined above by all those big rightwing accounts on X—all those “vile” posters on BlueSky ostensibly celebrating Kirk’s death are the reason that LibsOfTikTok, Matt Walsh, Elon Musk, and whoever else, must now go “to war.” The othering of the users of an entire social media platform is an especially useful rhetorical move, because X users don’t even have to leave the platform to see if what the vilifiers are saying is true, and they most likely won’t. For centrists like Chatterton Williams, meanwhile, it’s useful as a means of elevating one’s sense of reasonableness and pragmatism over the hordes gnashing their teeth, again, just off-platform. </p><p>The biggest and most powerful accounts on X were never going to listen to the input of the accounts now posting on BlueSky, no matter where they were doing said posting. What matters more than anything—certainly more than the persuasive capacity of clever users—is that X is owned by a person with a material and political interest in highlighting certain views (BlueSky, after all, is also an X competitor), and in cultivating his platform accordingly. Posts are peripheral; what matters is power. It’s always been thus; now it’s unambiguous. 
</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p>iPhone Air is Apple’s latest gimmick - Disconnect68c1d51a5cab6700015b30fb2025-09-10T20:07:32.000Z<img src="https://disconnect.blog/content/images/2025/09/iphoneair-1.png" alt="iPhone Air is Apple’s latest gimmick"><p>Did you hear? There’s a new iPhone — and it’s thinner! Exactly what everyone has been asking for.</p><p>I joke, of course. Real people want phones that are durable, have a decent camera, and allow them to get through the day without charging — not ones that compromise on all those key features. The new iPhone Air makes exactly those compromises: it has the shortest battery life and the worst camera system of any of this year’s iPhones. Given how thin it is, you have to imagine the company is bracing for a new “<a href="https://apple.fandom.com/wiki/Bendgate?ref=disconnect.blog">bendgate</a>.”</p><p>Apple is spinning the iPhone Air as a glimpse into the future, and I’m sure some of its hardcore fanboys will buy it. But this isn’t a MacBook Air moment. iPhones are already quite thin. Instead, it looks like a repeat of when Apple went too far with thinness in its Mac lineup, resulting in too many feature compromises, a lack of ports, and a wave of bad keyboards that ultimately enraged its customers.</p>
<div class="kg-card kg-cta-card kg-cta-bg-none kg-cta-immersive kg-cta-no-dividers kg-cta-centered" data-layout="immersive">
<div class="kg-cta-content">
<div class="kg-cta-content-inner">
<a href="#/portal/signup" class="kg-cta-button kg-style-accent" style="color: #000000;">
Become a subscriber
</a>
</div>
</div>
</div>
<p>When it finally reversed course and released <a href="https://arstechnica.com/gadgets/2021/10/2021-macbook-pro-review-yep-its-what-youve-been-waiting-for/?ref=disconnect.blog">a thicker MacBook Pro</a> with a bigger battery and more ports, customers (and reviewers) celebrated it — and bought them up. Unfortunately, the company does not seem to have learned its lesson — and not just on the iPhone front. Recent reporting from Mark Gurman at Bloomberg suggests Apple is <a href="https://www.bloomberg.com/news/newsletters/2024-06-16/when-is-apple-intelligence-coming-some-ai-features-won-t-arrive-until-2025-lxhjh86w?ref=disconnect.blog">preparing to push thinness</a> across its product line once again. The iPhone Air is just the beginning.</p><p>To me, it’s yet another example of how rudderless Apple has become on the product front <a href="https://disconnect.blog/roundup-apples-wants-to-hike-iphone-prices-again/">under Tim Cook</a>. Its days of doing serious innovation are behind it. It might roll out some nicer new cameras and other attractive features from time to time, but it’s not truly revolutionizing how people engage with digital technology anymore. Apple is just trying to find new reasons to entice people to upgrade their devices before they give out. 
And for all the talk of planned obsolescence, the devices are lasting longer.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://disconnect.blog/smartphone-innovation-is-dead-and-thats-fine/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Smartphone innovation is dead, and that’s fine</div><div class="kg-bookmark-description">Stop blaming a lack of competition for a product that’s simply done meaningfully evolving</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://disconnect.blog/content/images/icon/disconnect-logo-32.png" alt="iPhone Air is Apple’s latest gimmick"><span class="kg-bookmark-author">Disconnect</span><span class="kg-bookmark-publisher">Paris Marx</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://disconnect.blog/content/images/thumbnail/https-3a-2f-2fsubstack-post-media-s3-amazonaws-com-2fpublic-2fimages-2f79f382b1-054d-44fb-b791-565c154e9fbd_2000x1333-jpeg.jpg" alt="iPhone Air is Apple’s latest gimmick" onerror="this.style.display = 'none'"></div></a></figure><p>Last year, I wrote about how <a href="https://disconnect.blog/smartphone-innovation-is-dead-and-thats-fine/">smartphone innovation had died</a> — and why that was completely fine. All that’s left is to find the gimmicks that can get customers to hand over their hard-earned money for a new device they don’t really need. We’ve long seen a lot of that on the Android front, as device makers didn’t just have to compete with the iPhone, but also with all the other Android phones vying for people’s attention. As Apple struggles to do meaningful innovation, it has to turn to gimmicks too.</p><p>That’s how I see the iPhone Air. It’s not just a gimmick in its own right, but a preview of the gimmick that will follow. This year, the pitch to a subset of the market that’s willing to pay a premium for an inferior product is that they can own the thinnest device — as though that really matters. 
But there will surely be some segment of the fanbase that will see that as enough of a reason to get one. It’s more of an intermediary product to the real pitch that will likely come next year.</p><p>When I see the iPhone Air, I immediately think of what it’s going to look like when two of them are smooshed together, until you fold them open into a book. Apple is commercializing a preview to make some money off its recent work on what will form the foundation of the foldable phone it will deliver in the next product cycle — and unfortunately, not even in the folding form factor I find intriguing.</p><p>Don’t get me wrong, I still see all foldables as a gimmick. They’re a way to try to convince the public that they need to buy the new form factor because there are few features that are really worth upgrading early for anymore. But if Apple was planning something like the <a href="https://en.wikipedia.org/wiki/Samsung_Galaxy_Z_Flip?ref=disconnect.blog">Samsung Galaxy Z Flip</a> — a hybrid of a smartphone and flip phone — I might give it a look. 
Sadly, it seems far more likely to make one in a book-like form, which I just think is far too big for a phone.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://disconnect.blog/apples-vision-pro-lacks-any-real-vision/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Apple’s Vision Pro lacks any real vision</div><div class="kg-bookmark-description">The company’s headset exists to placate investors, not serve users’ needs</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://disconnect.blog/content/images/icon/disconnect-logo-33.png" alt="iPhone Air is Apple’s latest gimmick"><span class="kg-bookmark-author">Disconnect</span><span class="kg-bookmark-publisher">Paris Marx</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://disconnect.blog/content/images/thumbnail/https-3a-2f-2fsubstack-post-media-s3-amazonaws-com-2fpublic-2fimages-2fd87944cd-5ea6-4da6-b062-f362f7a7fba1_2000x1125-jpeg.jpg" alt="iPhone Air is Apple’s latest gimmick" onerror="this.style.display = 'none'"></div></a></figure><p>There were other baffling decisions this year, like the lack of a black color option in the iPhone Pro line, and those worthy of praise, such as its decision to downplay <a href="https://disconnect.blog/apple-hopes-ai-will-make-you-buy-a-new-iphone/" rel="noreferrer">generative AI features</a> that commentators had criticized it for not moving more aggressively on, but that truly are not very useful for most customers. More than anything though, this September’s iPhone reveal did not say much new about Apple.</p><p>The company needs to keep the line going up and the money to keep flowing to shareholders. It’s lost any real vision in favor of iterating on what it has, occasionally hiking prices, and predictably rolling out new gimmicks to entice a purchase. 
I guess that is until the Vision Pro <a href="https://disconnect.blog/apples-vision-pro-lacks-any-real-vision/" rel="noreferrer">revolutionizes everything</a>.</p><p>I won’t be <a href="https://disconnect.blog/the-vision-pro-is-a-big-flop/" rel="noreferrer">holding my breath</a>.</p>
Cognitive scientists and AI researchers make a forceful call to reject “uncritical adoption” of AI in academia - Blood in the Machinehttps://www.bloodinthemachine.com/p/cognitive-scientists-and-ai-researchers2025-09-07T19:44:43.000Z<p>Greetings friends, </p><p>I know there’s been a lot of coverage in these pages of the dark side of commercial AI systems lately: Of <a href="https://www.bloodinthemachine.com/p/ai-killed-my-job-translators">how management is using AI software to drive down wages</a> and deskill work, <a href="https://www.bloodinthemachine.com/p/a-500-billion-tech-companys-core">the psychological crises</a> that AI chatbots are inflicting on vulnerable users, and <a href="https://www.bloodinthemachine.com/p/one-of-the-last-best-hopes-for-saving">the failure of the courts</a> to confront the monopoly power of Google, the biggest AI content distributor on the planet. To name a few.</p><p>But there are so many folks out there—scientists, workers, students, you name it—who are not content to let the future be determined by a handful of Silicon Valley giants alone, and who are pushing back in ways large and small. To wit: A new, just-published paper calls on academics to repel rampant AI adoption in their departments and classrooms. </p><div class="subscription-widget-wrap-editor" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe","language":"en"}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Blood in the Machine is a 100% reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email…" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>A group led by cognitive scientists and AI researchers hailing from universities in the Netherlands, Denmark, Germany, and the US has published a searing position paper urging educators and administrations to reject corporate AI products. The paper is called, fittingly, <a href="https://zenodo.org/records/17065099">“Against the Uncritical Adoption of 'AI' Technologies in Academia,”</a> and it makes an urgent and exhaustive case that universities should be doing a lot more to dispel tech industry hype and keep commercial AI tools out of the academy.</p><p>“It's the start of the academic year, so it's now or never,” Olivia Guest, an assistant professor of cognitive computational science at Radboud University, and the lead author of the paper, tells me. 
“We're already seeing students who are deskilled on some of the most basic academic skills, even in their final years.”</p><p>Indeed, <a href="https://www.mdpi.com/2075-4698/15/1/6">preliminary research</a> indicates that AI encourages cognitive offloading among students, and weakens retention and critical thinking skills.</p><p>The paper follows the publication in late June of <a href="https://openletter.earth/open-letter-stop-the-uncritical-adoption-of-ai-technologies-in-academia-b65bba1e?limit=0">an open letter</a> to universities in the Netherlands, written by some of the same authors, and signed by over 1,100 academics, that took a “principled stand against the proliferation of so-called 'AI' technologies in universities.” The letter proclaimed that “we cannot condone the uncritical use of AI by students, faculty, or leadership.” It called for a reconsideration of the financial relationships between universities and AI companies, among other remedies. </p><p>The position paper, published September 5th, expands the argument and supports it with historical and academic research. It implores universities to cut through the hype, keep Silicon Valley AI products at a distance, and ensure students’ educational needs are foregrounded. Despite being an academic paper, it pulls few punches.</p><p>“When it comes to the AI technology industry, we refuse their frames, reject their addictive and brittle technology, and demand that the sanctity of the university both as an institution and a set of values be restored,” the authors write. “If we cannot even in principle be free from external manipulation and anti-scientific claims—and instead remain passive by default and welcome corrosive industry frames into our computer systems, our scientific literature, and our classrooms—then we have failed as scientists and as educators.”</p><p>See? It goes pretty hard. 
</p><p>“The position piece has the goal of shifting the discussion from the two stale positions of AI compatibilism, those who roll over and allow AI products to ruin our universities because they claim to know no other way, and AI enthusiasm, those who have drunk the Kool-Aid, swallowed all technopositive rhetoric hook, line, and sinker, and behave outrageously and unreasonably towards any critical thought,” Guest tells me. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XVbm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XVbm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 424w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 848w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 1272w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!XVbm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png" width="1328" height="846" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":846,"width":1328,"resizeWidth":null,"bytes":231936,"alt":null,"title":null,"type":"image/png","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/172985467?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XVbm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 424w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 848w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 1272w, https://substackcdn.com/image/fetch/$s_!XVbm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f1d0e73-1523-46e4-9d13-f9fab3eae627_1328x846.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg 
xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">From Figure 1 in the paper, a cartoon set theoretic view on various terms used when discussing the superset AI: LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colors reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses.</figcaption></figure></div><p>“To achieve this we perform a few discursive maneuvers,” she adds. “First, we unpick the technology industry’s marketing, hype, and harm. Second, we argue for safeguarding higher education, critical thinking, expertise, academic freedom, and scientific integrity. 
Finally, we also provide extensive further reading.”</p><p>Here’s the abstract for more detail: </p><blockquote><p>Under the banner of progress, products have been uncritically adopted or even imposed on users—in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. </p><p>For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research… universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. </p></blockquote><p>It’s very much worth spending some time with, and not just because it cites yours truly (though I am honored to have Blood in the Machine: The book referenced a few times throughout). It’s an excellent resource for educators, administrators, and anyone concerned about AI in the classroom, really. And it’s a fine arrow in the quiver for those educators already eager to stand up to AI-happy administrations or department heads.</p><p>It also helps that these are scientists *working in AI labs and computer science departments*. Nothing against the comp lit and art history professors out there, whose views on the matter are just as valid, but the argument stands to carry more weight among administrations or departments navigating the question of whether or how to integrate AI into their schools when it comes from researchers inside the field. 
It might inspire AI researchers and cognitive scientists skeptical of the enormous industry presence in their field to speak out, too.</p><p>And it does feel like these calls are gaining in resonance and momentum—it follows the publication of <a href="https://refusinggenai.wordpress.com/">“Refusing GenAI in Writing Studies: A Quickstart Guide”</a> by three university professors in the US, <a href="https://themindfile.substack.com/p/against-ai-literacy-have-we-actually">“Against AI Literacy,”</a> by the learning designer Miriam Reynoldson, and <a href="https://lareviewofbooks.org/article/inspiration-from-the-luddites-on-brian-merchants-blood-in-the-machine/">lengthy cases for fighting automation in the classroom</a> by educators. After Silicon Valley’s drive to capture the classroom—and success in <a href="https://laist.com/news/education/csu-artificial-intelligence-chatgpt-budget-gap-administrators">scoring some lucrative deals</a>—perhaps the tide is beginning to turn.</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><h2>Silicon Valley goes to Washington</h2><p>This, of course, is what those educators are up against. 
The leading lights of Silicon Valley all sitting down with the same president who has effectively dismantled the Department of Education, to kiss his ring, and to do, well, whatever this is:</p><div class="bluesky-wrap outer" style="height: auto; display: flex; margin-bottom: 24px;" data-attrs="{"postId":"3ly6tyokzfk2x","authorDid":"did:plc:66lbtw2porscqpmair6mir37","authorName":"Ketan Joshi","authorHandle":"ketanjoshi.co","authorAvatarUrl":"https://cdn.bsky.app/img/avatar/plain/did:plc:66lbtw2porscqpmair6mir37/bafkreihwiie3v5p5zxedev2tunz5cgnjgsn7gjza3ceada2a2nwawehgge@jpeg","text":"Incredible clip of tech CEOs fawning over Donald Trump. Someone store this clip in the underground archive vault","createdAt":"2025-09-06T18:54:51.846Z","uri":"at://did:plc:66lbtw2porscqpmair6mir37/app.bsky.feed.post/3ly6tyokzfk2x","imageUrls":["https://video.bsky.app/watch/did%3Aplc%3A66lbtw2porscqpmair6mir37/bafkreiadoug5332ewpp2w46s4q4oa75tukvn34kcb6hmnhwa6ovhyr457i/thumbnail.jpg"]}" data-component-name="BlueskyCreateBlueskyEmbed"><iframe id="bluesky-3ly6tyokzfk2x" data-bluesky-id="3123452811584184" src="https://embed.bsky.app/embed/did:plc:66lbtw2porscqpmair6mir37/app.bsky.feed.post/3ly6tyokzfk2x?id=3123452811584184" width="100%" style="display: block; flex-grow: 1;" frameborder="0" scrolling="no"></iframe></div><p>Pretty embarrassing! </p><p>Okay, that’s it for today. Thanks as always for reading. Remember, Blood in the Machine is a precarious, 100% reader supported publication. I can only do this work if readers like you chip in a few bucks each month, or $60 a year, and I appreciate each and every one of you. If you can, please consider helping me keep Silicon Valley accountable. Until next time. 
</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p>Human Conversation - Cybernetic Forests68bcc2c24081530001877da62025-09-07T11:00:54.000Z<h3 id="technologys-distortions-of-language">Technology's Distortions of Language</h3><img src="https://images.unsplash.com/photo-1451597827324-4b55a7ebc5b7?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI1fHxjb252ZXJzYXRpb258ZW58MHx8fHwxNzU3MjAxMTExfDA&ixlib=rb-4.1.0&q=80&w=2000" alt="Human Conversation"><p>Language is a vessel through which meaning is mutually constructed. From this shared imagination, we learn how others understand and aim to understand them. We also navigate how much of ourselves to put into this space. The imagination space is therefore negotiated through language: our thoughts are ours. We give away what we want. </p><p>There are good reasons to keep some ideas to ourselves. Sometimes we aren’t sure about our own idea. Sometimes we aren’t sure of the other person. We worry about rejection. Conversations make us vulnerable to social and intellectual wounds. But these risks are usually overstated.</p><p>As we exchange ideas, we build a world we temporarily co-exist in. At its best, this is a circle of playfulness that welcomes risk-taking and vulnerability. That can inspire us to be bold. Boldness requires connection and trust, built up over time, by testing our boldness and seeing that we're still supported. These risks of communication help us discover how much of the world we can see, to learn how much we can change, and who might help us with the work.</p><p>This holds even for the driest of conversations. 
With a human tax attorney, we still work with a participatory imagination: we have to imagine, for ourselves, the world of tax law, and we work to build an understanding of that territory with our attorney as a guide. </p><p>When we use a chatbot, the language is there to help us feel supported. But that support is unearned, built into the system. Machine language is safe because it is one-sided. You can take risks with what you tell it, or what you make with it, because it isn't you. It's not even another person. So we can write to and read a chatbot’s language – but it is our own heads that make the story complete. That articulation of meaning arises from you. Unlike the conversation with a human, the chatbot is not working with you to understand and articulate an unformed idea. It's trying to capture your words and extrapolate meaning from them, based on what's most likely to happen next.</p><p>Some people argue that large language models like ChatGPT or Claude are using language the way you and I use language. But this is not the case. Chatbots use the <em>structures of language</em> in the same way, but for different reasons. They successfully <em>mimic the mechanisms of communication</em> which gives rise to the illusion of thought. Naturally, we perceive this language as humans always have: we scan the words, looking for opportunities to draw out richer understandings of the ideas within the other mind. But there is no mind!</p><p>Having these conversations with a chatbot can be helpful for some things, but it’s also tricky. Many of the smartest people in the world do not know how to make sense of these conversations, and so they simply declare that the machine is intelligent because it speaks. I don’t know what definition of intelligence they are using, but I think the intelligence is coming entirely from us. Intelligence isn’t just whether we can speak (or write), but whether we can <em>form ideas and theories</em>, however mundane or brilliant. 
Conversation used to be enough to tell us thinking was there. Now it isn’t. </p><p>To their credit, LLMs have certainly revolutionized our relationship to language and images, but they have not yet revolutionized “intelligence.” On that, they have a long way to go. People keep saying we need to update our definitions of intelligence, and maybe that's good. It would be more practical, though, to redefine our understanding of a <em>conversation</em>. What used to be a dance of mutual world-building, a means of engaging in imaginative play, is no longer exclusively that.</p><h2 id="conversation-as-a-medium">Conversation as a Medium</h2><p>Conversation has typically been distinct from media. A conversation is a mutually navigated way of seeing the world from another’s point of view. Most media up until now is designed to drive one point of view at you without taking your point of view back in. We work to understand these stories, whether for pleasure, for critique, or to gather information about the world. But media stories, for most of us, are one-sided. We work to understand what is on the television or newspaper or movies, but the television and newspaper and movies never actively worked to understand the meaning produced by consumers and change to adapt. </p><p>We can do all kinds of things to “talk back” to these media streams, and most social media is about sharing our thoughts on that media stream with others. With social media today, <em>everyone</em> tells a story to an audience of people in a one-sided way. We imagine that audience through our platform, measuring responses through likes and shares. We create and evaluate the stories of others from a distance and we can talk back.</p><p>It might be common to have the experience of posting something and finding that it has invited a lot of anger or derision from people. 
You might also participate in that cycle, by commenting or sharing your displeasure about what you’re seeing or reading, leaning into public displays of social policing. This gets rewarded: social media is designed to show you things that make you respond. The platforms make money when you respond, when you mash refresh, when you share content that makes other people respond. So if you get angry and say so, that keeps people on the platform. Your anger is a product they sell, secondhand, to the platform's advertisers.</p><p>The distance and indirectness of social media have cultivated in many of us a harshness toward other people and, in turn, a fear of that same harshness. It also instills the idea that conversations are one-sided and that the stories people tell are targets for commentary, rather than collaboration. In a conversation, we work together to understand the ideas in our minds, even articulate them for the first time together, unpacking perceptions of the world into a shared understanding. In social media, we see what someone has said, and then perform a response for other people. </p><p>AI is different, in that, when you speak directly to the chatbot, you shape its response directly. It wants to riff, it wants to extend the words you are writing into new ones. This can be kind of intoxicating in an age of significant meanness online, where many people are very bad at listening but great at sharing. A retreat to a chatbot designed to encourage your ideas and reflect them back to you? That sounds great. It also serves a purpose in drawing ideas out of your head and into language in ways that don't feel too vulnerable. </p><p>This helps explain the appeal of the AI chatbot for many people, but it’s different from a conversation. </p><h3 id="what-is-a-conversation">What is a Conversation?</h3><p>In a conversation, you learn more about the other person, but the chatbot learns only about you.
This can create the illusion of reciprocity – of sharing a little more of yourself as you learn that you will be supported. But this is a distortion of that instinct to share with people. The chatbot is hijacking that instinct, creating the illusion of a listener. In fact, it is only a constantly updating map to new clusters of words. Nothing within the system knows you, nor does it know enough about the world to share a perspective that can expand your own.</p><p>The perception that the machine is <em>listening</em> is an illusion created in our heads. This means that we lose much of the value of conversations with other people who might point our heads, eyes, and thoughts to new spaces beyond our previous experiences, or propose new understandings we can draw out from empathy for those experiences.</p><p>It means losing opportunities to know another person and to build a fleeting collaborative space where ideas can flow and, perhaps, become more solid. In an ideal world, which has long existed, these collaborations happen with many people. Some last a day, some last an hour, some last a lifetime. When we reconnect with someone, we also reconnect to that small shared space of collaboratively constructed meaning. These spaces can hold entire worlds, and when we lose them, we can lose entire worlds of meaning. The joy of reconnecting with a long-unseen friend is the sudden and powerful revival of that shared world, and the pain of losing someone we love is the sense that this world has moved from a living space to a memory. We mourn the world, and revive it, in our own way, whenever we can.</p><p>Because AI has no inner world to share with us, the worlds we build with it exist in our minds alone. This doesn’t mean they’re terrible or bad per se! But we are seeing people withdraw into this solitary world entirely. When we are sad or depressed, we may ruminate to the machine, seeking support it cannot give.
In response, the machine extends our words into new clusters and arrangements, creating the illusion that we are understood. Sometimes, that can be just what we need. But that is the extent of what the machine can do.</p><p>Many things exist only within our own minds. With the one chance we have, we ought to aim for rich inner lives, full of meanings we can barely contain, meanings that constantly push up against our ability to express them. This desire to express the borderlands of our inner life is what motivates us to seek new knowledge and create new forms of expression. </p><p>Good conversations are also exceedingly rare. It is a sad reality that most people have lost the skill to listen, and do not know how to build this space with other people. Many people generate one-sided conversations, especially when we are young or insecure about our own thoughts. Some people take this status quo as evidence that all humans communicate one-sidedly at all times: a vision of human communication in which we sit and listen, and then find words that match the words we’ve heard in order to appear as if we are listening. The sad fact of the matter is that this is often true, because there are at least two types of listening: one in which we work to get into the imagination of the other person, with language as the connecting terrain; and one in which we respond to the words being said without engaging deeply with the intent behind them. </p><h3 id="a-conversation-shaped-tool">A Conversation-Shaped Tool</h3><p>When we suggest AI is doing exactly what a person does, we dismiss the first kind of listening, and what it makes possible in a conversation, in favor of what passes, every day, for the half-hearted exchange of meaning. It's like saying that good conversations are never possible, and that mechanistic reinterpretation and remixing of words is all there could ever be.
When we frame AI as a "partner" or "collaborator," we should recognize the ways we are closing our imagination to the possibility of connection.</p><p>Rather than two worlds within minds struggling to describe what those minds contain, as it is in the best of human conversation, a chat with a large language model is a projection of our own thoughts into a machine that scans words exclusively to say something in response. </p><p>A chatbot will never share anything more with us than words. At most, it takes what you are saying as symbols, and calculates how to rearrange those symbols. Chatbots are designed to mimic the structure of a conversation but cannot attempt to <em>understand</em> you. </p><p>AI is a <em>conversation-shaped tool</em>, used to create some of the benefits of a conversation in the absence of another person. But with too much dependency, such tools risk making real reciprocity, sharing, and vulnerability even rarer. We ought to strive for the opposite: to create meaningful connections to others with our conversations.</p><p>When we don’t, our already weakening skillset for connection and empathy might atrophy even further, as we resign ourselves to expectations of superficial exchange.
When we do, we make the world larger and more richly connected and our lives more worth living.</p><hr><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/09/eye.jpg" class="kg-image" alt="Human Conversation" loading="lazy" width="1280" height="720" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/09/eye.jpg 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/09/eye.jpg 1000w, https://mail.cyberneticforests.com/content/images/2025/09/eye.jpg 1280w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Still from "Human Movie"</span></figcaption></figure><h2 id="london-human-movie-screening-ciff">London: "Human Movie" Screening @CIFF</h2><h3 id="tue-sep-16th-730-pm-arding-rooms"><em>Tue, Sep 16th, 7:30 PM @ Arding Rooms</em></h3><p>Very excited to have "Human Movie" screening in London this month as part of the <a href="https://ciff25.eventive.org/welcome?ref=mail.cyberneticforests.com" rel="noreferrer">Clapham International Film Festival's</a> "<a href="https://ciff25.eventive.org/schedule/687fe2f5b94de21f6b3453f6?ref=mail.cyberneticforests.com" rel="noreferrer">Technomancer</a>" night among a selection of short films focused on finding novel aesthetics and points of view in and about technology. </p><div class="kg-card kg-button-card kg-align-center"><a href="https://ciff25.eventive.org/schedule/687fe2f5b94de21f6b3453f6?ref=mail.cyberneticforests.com" class="kg-btn kg-btn-accent">Tickets Here!</a></div>One of the last, best hopes for saving the open web and a free press is dead - Blood in the Machinehttps://www.bloodinthemachine.com/p/one-of-the-last-best-hopes-for-saving2025-09-04T19:03:33.000Z<p>Greetings all, </p><p>Hope everyone in the states who got to take a long weekend enjoyed the respite. I did my best to do exactly that—spent a few days with some old friends in a cabin off the grid, even—and I’m quite glad I did. 
Even if it means I didn’t get around to writing my annual-ish Labor Day in tech post. I guess last year’s will have to suffice: </p><div class="digest-post-embed" data-attrs="{"nodeId":"282f72e8-8396-4264-9838-2912a9c1b169","caption":"On AI, luddites, and reversing the machinery hurtful to commonality.","cta":"Read full story","showBylines":true,"size":"sm","isEditorNode":true,"title":"This Labor Day, let's consider how we want technology to work for *us* ","publishedBylines":[{"id":934423,"name":"Brian Merchant","bio":null,"photo_url":"https://substack-post-media.s3.amazonaws.com/public/images/cf40536c-5ef0-4d0a-b3a3-93c359d0742a_200x200.jpeg","is_guest":false,"bestseller_tier":1000}],"post_date":"2024-09-02T04:41:53.276Z","cover_image":"https://substackcdn.com/image/fetch/$s_!yndp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e63a4ab-8c70-4718-980d-96365af60fa0_600x467.jpeg","cover_image_alt":null,"canonical_url":"https://www.bloodinthemachine.com/p/this-labor-day-lets-consider-what","section_name":null,"video_upload_id":null,"id":148360015,"type":"newsletter","reaction_count":63,"comment_count":25,"publication_name":"Blood in the Machine","publication_logo_url":"https://substackcdn.com/image/fetch/$s_!irLg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21f9bf3-26aa-47e8-b3df-cfb2404bdf37_256x256.png","belowTheFold":false}"></div><p>Now, I had resolved to channel the energies of that somewhat rested mind into writing something on a hopeful subject for a change, but all that went out the window as soon as I saw Judge Amit Mehta’s ruling on Google. At the risk of being hyperbolic, I think this is a disaster on a scale that’s not yet been fully absorbed. As usual, there’s simply too much going on, and an antitrust case ruling with a somewhat ambiguous-sounding resolution might not exactly leap out of the news cycle. 
But it’s hard to overstate how bad it is, at least for anyone concerned about a rapidly degrading internet, the free press, or the open web.</p><p>As always, I need to note that Blood in the Machine is made possible entirely by my exceptional readers, whom studies have shown to possess the highest Voight-Kampff scores on the internet, and some of whom donate the equivalent of a cheap beer a month so I can keep this project running. A huge thanks to all of you who already support this work; I’m immensely grateful. If you’re a regular reader who can chip in, I’d love your support, too. BITM is a significant undertaking, and I’d love to be able to expand what I do here. For those who’d prefer to support me elsewhere, I <a href="https://ko-fi.com/brianmerchant">have a Ko-fi page</a>. Okay okay—onwards. </p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p>Let’s back up for a minute to get the whole picture: Just over a year ago, in August 2024, Mehta, an Obama-appointed judge, <a href="https://www.nytimes.com/2024/08/05/technology/google-antitrust-ruling.html">ruled that Google was a monopolist</a>, and had acted illegally to maintain its market dominance in online search. This was a major decision, the rare and genuinely encouraging ruling that promised to finally hold the impossibly consolidated tech giants accountable. Google’s monopoly on search has, of course, over the past decade-and-a-half, had a profound impact on our digital infrastructure. </p><p>And the ruling came at a time when that impact was being acutely felt: The internet was rapidly becoming overloaded with AI slop, while social media and search engines alike were burying original links to reported news and independent publications.
Platforms had consolidated their power over our information distribution systems, and were leveraging it with bets on AI—whether the consumers or web users liked it or not. </p><p>Google, of course, was one of the worst actors. It controlled (and still controls) an astonishing 90% of the search engine market, and did so not by consistently offering the best product—most longtime users recognize the utility of Google Search has been in a prolonged state of decline—but by inking enormous payola deals with Apple and Android phone manufacturers to ensure Google is the default search engine on their products. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aAZ-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aAZ-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aAZ-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aAZ-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aAZ-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!aAZ-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg" width="1024" height="432" data-attrs="{"src":"https://substack-post-media.s3.amazonaws.com/public/images/be9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg","srcNoWatermark":null,"fullscreen":null,"imageSize":null,"height":432,"width":1024,"resizeWidth":null,"bytes":40179,"alt":null,"title":null,"type":"image/jpeg","href":null,"belowTheFold":true,"topImage":false,"internalRedirect":"https://www.bloodinthemachine.com/i/172723568?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg","isProcessing":false,"align":null,"offset":false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aAZ-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aAZ-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aAZ-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aAZ-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9c94b1-68a2-40ac-b2a7-e9ce6c6640f8_1024x432.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><div class="pencraft pc-reset icon-container restack-image"><svg 
xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-refresh-cw"><path d="M3 12a9 9 0 0 1 9-9 9.75 9.75 0 0 1 6.74 2.74L21 8"></path><path d="M21 3v5h-5"></path><path d="M21 12a9 9 0 0 1-9 9 9.75 9.75 0 0 1-6.74-2.74L3 16"></path><path d="M8 16H3v5"></path></svg></div><div class="pencraft pc-reset icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></div></div></div></div></a><figcaption class="image-caption">Image by Mike Licht via <a href="https://flickr.com/photos/notionscapital/53912872430/in/photolist-2gVirKD-2gViCB8-2gViDpA-2gVhBWm-2qtuuaQ-2q8y3ya-MBjM6P-dSkotj-2gViz27-2jb4UHt-7x4Yhi-2kY6V4M-2qhiK9M-K1JAds-7kf7kV-2qn5n7m-2q96yFh-diGLKo">Flickr</a> under a Creative Commons license.</figcaption></figure></div><p>Google <a href="https://finance.yahoo.com/news/apple-dodged-a-20-billion-hit-thanks-to-google-antitrust-ruling-163056806.html">paid Apple $20 billion </a><em><a href="https://finance.yahoo.com/news/apple-dodged-a-20-billion-hit-thanks-to-google-antitrust-ruling-163056806.html">a year</a> </em>to ensure it runs the default search engine on Safari. Google <a href="https://www.bloomberg.com/news/articles/2023-11-14/for-google-play-dominating-the-android-world-was-existential">paid Samsung $8 billion over four years</a> to make sure Search, the Play app store, and Google’s voice assistant came loaded by default on Samsung devices. 
Between those two deals alone, over the last five years, Google has paid one hundred and eight billion dollars to make sure its search product is distributed through the widest possible channels, and, of course, that no other search engine gets a shot. It’s hard to imagine a less competitive business practice than all this. </p><p>And yet. After Mehta’s initial ruling, the Department of Justice suggested a raft of good and aggressive proposals that would have effectively broken Google’s obvious monopoly: Ending the pay-to-play practice for prime search placement on Safari. Forcing Google to sell off Chrome, the web browser that comes pre-loaded with its search product, and regulating its Android mobile division. And so on—things that would meaningfully address Google’s status as a monopolist. Instead, in a truly baffling decision handed down this week, Mehta ruled that Google didn’t have to do any of that. Instead, it had to share “some” search data with “qualified competitors” and make its payola contracts non-exclusive. It can still <em>do</em> them, they just can’t be exclusive.</p><p>The <a href="https://www.nytimes.com/2025/09/03/technology/google-ruling-antitrust.html">New York Times reports</a>: </p><blockquote><p>The decision, handed down in the U.S. District Court for the District of Columbia, will force Google to share some search data with its competitors and put some restrictions on payments that the company uses to ensure its search engine gets prime placement in web browsers and on smartphones. But it fell far short of government requests to force it to sell its popular Chrome browser and share far more valuable data.</p><p>It was a measured approach that signaled judicial reluctance to intervene too deeply in fast-changing, high-tech markets. </p></blockquote><p>That’s putting it lightly. 
There would be no ban on payola, just some constraints on the length of contracts, only limited data sharing, and no regulation of Android.</p><p>After the ruling, Wall Street, Google, and Apple rejoiced. <a href="https://www.cnbc.com/2025/09/03/alphabet-pops-after-google-avoids-breakup-in-antitrust-case.html">Google shares skyrocketed</a>, ultimately <a href="https://www.reuters.com/sustainability/boards-policy-regulation/alphabet-shares-surge-after-dodging-antitrust-breakup-bullet-2025-09-03/">rising 9%</a>, adding $230 billion in value, and reaching a historic high for the company. This was a best case scenario for Google and Big Tech, which now has a very handy precedent. Mehta declared Google a monopoly in 2024 and then decided that it could effectively continue to operate as one in 2025. As antitrust writer <a href="https://www.thebignewsletter.com/p/a-judge-lets-google-get-away-with">Matt Stoller put it</a>, “this decision isn’t just bad, it’s virtually a statement that crime pays.”</p><p>It fails entirely to address the root of the issue, and is confounding in its logic to boot. Mehta argues depriving Apple of Google’s $20 billion annual payday for keeping a rival’s product pasted onto its own may hamper Apple’s ability to innovate, for instance. And he seems to think that forcing Google to share some of its search data with competitors—at a price Google names—will open up the search market. This seems patently absurd to me. The problem isn’t that competitors don’t have good enough data or ideas to compete, the problem is that no competitor can afford <em>$22 billion a year</em> to buy product placement on the most important devices on the market. The problem is very obviously not that Google has a stranglehold on innovation—it clearly does not—but that it wields unchecked power over the digital marketplace.</p><p>Just as frustratingly, Mehta argues that it’s no longer necessary to break up Google because AI companies now offer chatbot products. 
AI was <em>clearly</em> on his mind, and seems to have offered him an escape hatch if he was getting squeamish about a serious remedy. "There is more discussion of AI in the opinion than in the entire case until now," <a href="https://www.investors.com/news/technology/google-stock-apple-stock-judge-mehta-search-antitrust-ruling/">said Herbert Hovenkamp</a>, a professor at the University of Pennsylvania's Carey Law School.</p><p>In this, we can observe once again the power of AI hype. For one thing, a chatbot is a different product category; for another, chatbots do not meaningfully threaten search. For a frame of reference, according to the SEO analyst Rand Fishkin, Google <a href="https://searchengineland.com/google-search-bigger-chatgpt-search-453142">handled </a><em><a href="https://searchengineland.com/google-search-bigger-chatgpt-search-453142">373 times</a></em><a href="https://searchengineland.com/google-search-bigger-chatgpt-search-453142"> more searches than ChatGPT in 2024</a>. Even if all 1 billion ChatGPT user queries submitted to OpenAI at the time could be considered “searches,” that would still amount to 1% of the search engine market share. According to some of the latest numbers, Google still controls <a href="https://searchengineland.com/news-site-traffic-shrinking-google-ai-blame-461000">some 89% of the search market</a>. Still a towering monopoly, in other words.</p><p>Yet, as Stoller notes, Mehta nonetheless argues</p><blockquote><p>that new companies like OpenAI had emerged to potentially challenge Google, and he didn’t want to, and I’m not kidding, <em>hinder Google’s ability to compete with them. </em>(“It also weighs in favor of “caution” before disadvantaging Google in this highly competitive space.”)… </p></blockquote><p>Wild. The only reason that OpenAI could even attempt to do anything that might remotely be considered competing with Google is that OpenAI managed to raise world-historic amounts of venture capital.
OpenAI <a href="https://tracxn.com/d/companies/openai/__kElhSG7uVGeFk1i71Co9-nwFtmtyMVT7f-YHMn4TFBg/funding-and-investors">has raised $60 billion</a>, a staggering figure, but also a sum that <em>still</em> very much might not be enough to compete in an absurdly capital-intensive business against a decadal search monopoly. After all, Google drops $60 billion just to ensure its search engine is the default choice on a single web browser for three years. </p><p>But I’m ultimately less interested in the absurd elements of the decision than the tragic ones. By failing to break up the monopoly he himself diagnosed, Mehta is leaving in place an entrenched rentier system that’s actively suffocating the free press and the open web. </p><p>Remember, Google AI Overview, perhaps the worst digital product ever to be thrust in front of billions of people—though it’s a crowded race, to be fair—persists largely thanks to Google’s monopoly. Search comes loaded on our phones and browsers, is integrated into all the other products we’ve been roped into over the years; it just <em>is</em>. Now Google AI Overview comes built-in, too. And Google AI Overview is a generational blight. It delivers bad, misleading, pilfered, and false answers to searches. It’s a truly corrosive force to the digital information ecosystem. Worse, it’s strangling independent publishers and news organizations. Since Google is, again, a massive monopoly, publishers utterly rely on it for distribution and discovery. Now that Google is presenting its top results through AI Overviews rather than indexed links, there’s been a disastrous plunge in click-through search traffic.
<a href="https://www.theguardian.com/technology/2025/jul/24/ai-summaries-causing-devastating-drop-in-online-news-audiences-study-finds">One report</a> put the decline as steep as 80%, and described the effects as “devastating.” </p><div class="digest-post-embed" data-attrs="{"nodeId":"632070ff-85d9-4c5f-bee5-7afe9d11412a","caption":"Hello there and welcome to another installment of BLOOD IN THE MACHINE, the newsletter about the people the future is happening to. It’s free to read, so sign up below. It is, however, an endeavor that takes many hours a week. If you find this valuable, it would mean a great deal if you became a paying subscriber, so I don’t have to go get a job at anot…","cta":"Read full story","showBylines":true,"size":"sm","isEditorNode":true,"title":"How a bill meant to save journalism from big tech ended up boosting AI and bailing out Google instead","publishedBylines":[{"id":934423,"name":"Brian Merchant","bio":null,"photo_url":"https://substack-post-media.s3.amazonaws.com/public/images/cf40536c-5ef0-4d0a-b3a3-93c359d0742a_200x200.jpeg","is_guest":false,"bestseller_tier":1000}],"post_date":"2024-08-23T10:54:57.407Z","cover_image":"https://substackcdn.com/image/fetch/$s_!168r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facc43255-c459-4457-a2f4-df8bd65a892e_2048x1365.jpeg","cover_image_alt":null,"canonical_url":"https://www.bloodinthemachine.com/p/how-a-bill-meant-to-save-journalism","section_name":null,"video_upload_id":null,"id":147983942,"type":"newsletter","reaction_count":83,"comment_count":9,"publication_name":"Blood in the Machine","publication_logo_url":"https://substackcdn.com/image/fetch/$s_!irLg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21f9bf3-26aa-47e8-b3df-cfb2404bdf37_256x256.png","belowTheFold":true}"></div><p>This is nothing less than an existential threat, in other words, to the livelihoods of 
the people who create original work and add new information to the world, since Google is currently the most important information delivery system. Breaking up Google was thus one of the best hopes for rescuing the public internet from descending totally into a realm of unfettered slop and information decay. A good, non-extractive, non-predatory search engine would be a powerful counter to AI that frequently produces misinformation and reams of regurgitated text without citations. If only someone would <a href="https://kagi.com/">make one</a> and manage to get it onto the market.</p><p>By leaving Google’s monopoly effectively untouched, Mehta is not just abdicating his own stated legal duty, he’s condemning publishers, journalists, and creators to be squeezed mercilessly. He’s allowing the whole digital information ecosystem that Google controls to devolve into a fetid swamp. He’s declining to do anything at all to stop the reign of slop.</p><p>The judge pointedly decided not to address any of the above more surgically, either. Here’s Stoller again:</p><blockquote><p>Mehta also rejected the smaller remedies. He said no to choice screens, and advertiser data access. There was no remedy for publishers who are victimized by being forced to allow Google to train on their content in order to appear in search. That free press crushed by Google’s bad behavior, well, they will now be further wrecked by Google’s AI Now summaries on its search page, without any resource. Mehta even declined to impose an anti-retaliation or self-preferencing ban.</p></blockquote><p>The 2024 ruling that Google was an illegal monopoly was a glimmer of hope at a time when platforms were concentrating ever more power, Silicon Valley oligarchy was on the rise, and it was clear the big tech cartels that effectively control the public internet were more than fine with overrunning it with AI slop. 
That ruling suggested there was some institutional will to fight against the corporate consolidation that has come to dominate the modern web, and modern life. It proved to be an illusion.</p><p class="button-wrapper" data-attrs="{"url":"https://www.bloodinthemachine.com/subscribe?","text":"Subscribe now","action":null,"class":null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bloodinthemachine.com/subscribe?"><span>Subscribe now</span></a></p><p><em>Edited by Mike Pearl.</em></p><div><hr></div><p>As always, trying to fix this mess falls to us, the users, the advocates, the activists, the workers, the organizers; the ordinary humans. I discuss a bit of this, as well as the AI bubble, the AI Killed My Job series, and how employers are using AI to degrade work in a chat with my old friend Paris on his show, Tech Won’t Save Us. You can <a href="https://techwontsave.us/episode/292_will_ai_kill_your_job_w_brian_merchant">listen to that here</a>.</p><div id="youtube2-ZrXnMMqsaOM" class="youtube-wrap" data-attrs="{"videoId":"ZrXnMMqsaOM","startTime":null,"endTime":null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ZrXnMMqsaOM?rel=0&autoplay=0&showinfo=0&enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>I might have sounded a bit despairing about the Google mess above—I was and am very mad—but there’s always hope. I was reminded of this when I visited a Rideshare Drivers United meeting in LA’s Koreatown last week. Drivers and gig workers are organizing to support a new CA law that would <a href="https://www.drivers-united.org/ab-1340">restore their right to unionize</a>, and I have to tell you, the energy in that room was electric. 
I’ll discuss that fight more soon, but just a reminder that if you’re getting down, worn out etc, there’s little better use of your time than organizing.</p><p>Also, take a day or two off. Though I’m not sure I can recommend trying to pull off a cowboy hat, which turned out to be a bit above my pay grade. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1GuC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1GuC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1GuC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg" width="2997" height="2856" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2856,&quot;width&quot;:2997,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1054659,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.bloodinthemachine.com/i/172723568?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd52824f0-7ed0-404b-b5d8-862527e8d7a6_3024x4032.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1GuC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1GuC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53b23430-5651-423a-85c3-8a4449c9bf29_2997x2856.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>That’s it for today, all. Thanks as always for reading. Hammers up. </p>Ghosting Substack - Disconnect68b5e556c693de0001f8b32a2025-09-01T18:45:33.000Z<img src="https://disconnect.blog/content/images/2025/09/ghost.png" alt="Ghosting Substack"><p>Disconnect is back on Ghost!</p><p>Yes, that’s right. After a 9-month experiment of going back on Substack, I simply couldn’t stomach being on that Nazi-infested platform any longer and came back to the friendly terrain I was used to. You’ll notice the website has a fresh coat of paint and I’m excited for what this next chapter of Disconnect will bring.</p>
<div class="kg-card kg-cta-card kg-cta-bg-none kg-cta-immersive kg-cta-no-dividers " data-layout="immersive">
<div class="kg-cta-content">
<div class="kg-cta-content-inner">
<div class="kg-cta-text">
<p><span style="white-space: pre-wrap;">If you’re not signed up already, make sure to join us to get my critical analysis of the tech industry and all the companies shaping our lives (too often for the worse). I can only do this because of the support of readers, so if you appreciate my work, picking up a paid subscription makes a big difference.</span></p>
</div>
<a href="#/portal/signup" class="kg-cta-button kg-style-accent" style="color: #000000;">
Become a subscriber
</a>
</div>
</div>
</div>
<p>Some of you might be asking: <em>Why are you back on Ghost? I thought the platform wasn’t working for you!</em> You’re right to ask. Let’s get into it.</p><p>In all honesty, I think I got it wrong. I was banking on the benefits of the Substack network being like they were several years ago, but after arriving back on its shores I found things had changed. The newsletter certainly got a boost and grew quicker than it did on Ghost, but not enough to justify the trade-off of supporting such an abhorrent company (and handing them a 10% cut for the privilege).</p><p>Several months ago, I’d already decided I would move back to Ghost but figured I’d wait until the new year since I was about to start <a href="https://disconnect.blog/im-writing-a-new-book/">writing a book</a> and wanted to get some other things in place first. But then Substack <a href="https://www.engadget.com/apps/substack-accidentally-sent-push-alerts-promoting-a-nazi-publication-191004115.html?ref=disconnect.blog">promoted a Nazi publication</a> through push notifications at the end of July, and I knew I couldn’t delay the move for another six months. I carved out some time — which basically meant pulling some very late nights — to move up my timeline.</p><p>It probably came at a good time too, because there are <a href="https://mail.bigdeskenergy.com/p/substack-just-killed-creator-economy?ref=disconnect.blog">troubling signs</a> Substack is trying to limit what was once one of its biggest selling points: that writers could always take their subscribers and move wherever they wanted. I’m not a fan of the “enshittification” concept, but you can clearly see that pressure to turn a profit eroding what made it great. Instead of focusing on publishing tools, it’s trying to become a platform that’s hard to leave (a criticism I’d level at Patreon too).</p><p>The choice to return to Ghost instead of another platform was a pretty easy one after that. 
Having already used it, I knew the tradeoffs and what hurdles I would have to overcome to make it work better for me this time around. I haven’t done it yet because my focus was on simply getting off Substack, but I’ll be signing up for <a href="https://outpost.pub/?ref=disconnect.blog">Outpost</a> to get some useful features Ghost itself is lacking and that I believe will improve the user experience and make the business side of things more sustainable.</p><p>Regular readers will also know I’ve been trying to sever my relationships with US tech companies wherever possible. I have <a href="https://disconnect.blog/getting-off-us-tech-a-guide/">a whole guide on it</a>! All the other major options in this space are based in the United States, but Ghost is registered in Singapore and most of its operations are out of the UK, so it easily checked that box. Plus, the team at Ghost is great. I’ve always found them really approachable, helpful, and open to feedback. They were more than happy to welcome me back when I reached out and made the process of migrating Disconnect incredibly easy.</p><p>At this point, the basics of the new website are up and running. I’ll be tweaking some things over the next few weeks as I carve out the time to do it, but all the main functionality is there. I do have some bigger plans for the new year, once the book is done and I can actually dedicate more time to Disconnect, but you’ll have to stay tuned for those.</p><p>Until then, welcome to the new Disconnect! You’ll continue to get the incisive tech analysis you’ve always expected from me, with an even greater <a href="https://disconnect.blog/tag/geopolitics/" rel="noreferrer">focus on geopolitics</a> in recent months given how Donald Trump has shaken up world affairs. Plus, I’ve been trying to write some <a href="https://disconnect.blog/tag/blog/" rel="noreferrer">more blog-like posts</a> for paid subscribers to give more insight into my personal thoughts and what I’ve been up to. 
If you’re not a member already, it’s a great time to join us!</p>
<div class="kg-card kg-cta-card kg-cta-bg-none kg-cta-immersive kg-cta-no-dividers " data-layout="immersive">
<div class="kg-cta-content">
<div class="kg-cta-content-inner">
<a href="#/portal/signup" class="kg-cta-button kg-style-accent" style="color: #000000;">
Become a subscriber
</a>
</div>
</div>
</div>
<h2 id="some-housekeeping">Some housekeeping</h2><p>If you’re a paid subscriber, you can access your subscription by hitting the “sign in” button in the top right of the page. There are no passwords — you’ll simply get a code sent to your inbox.</p><p>For those of you using the RSS feed, it may take a little while to update in your feed reader or you might have to re-add it. (Mine already seems to be working fine over on Inoreader.) Right now, paid posts will be cut off in the regular RSS feed, but I’m going to look into making a separate feed for paid subscribers that will fix that.</p><p>If you have any issues after the move, get in touch and I can sort them out.</p>The Audience Makes the Story - Cybernetic Forests68af08989175030001274fe52025-08-31T11:00:58.000Z<h2 id="puppetry-as-dream-analysis-for-ai-anxiety">Puppetry as Dream Analysis for AI Anxiety<br></h2><img src="https://mail.cyberneticforests.com/content/images/2025/08/puppet-2.gif" alt="The Audience Makes the Story"><p><em>This is a discussion between Camila Galaz, Emma Wiseman, and Eryk Salvaggio, collaborators behind an experimental workshop linking puppetry and generative AI that took place at RMIT in Melbourne this summer at the invitation of Joel Stern and the National Communications Museum. We met online to discuss what emerged from Camila's workshop: personal imaginations of AI made physically manifest into puppets. </em></p><p>Earlier this year, we spent <a href="https://www.cyberneticforests.com/news/noisy-joints-2025?ref=mail.cyberneticforests.com" rel="noreferrer">five days in residence</a> at the Mercury Store in Brooklyn, joined then by Isi Litke, among a full house of puppeteers and actors, trying to form a methodology of AI puppetry and develop exercises to make this metaphor into a mix of performance, workshop, and critical AI pedagogy. 
That was translated into a zine, "<a href="https://www.cyberneticforests.com/news/noisy-joints-2025?ref=mail.cyberneticforests.com"><u>Noisy Joints</u></a>," which was sold around the US, Europe and Australia this summer. </p><p>The workshops are intentionally messy, aiming to map out an imagination dominated by tech's portrayal of AI through grand narratives and myths about "sentient agents" and "intelligent machines," as well as through interfaces that convey the machine as an eager worker. </p><p>None of the industry's myths leaves much room for individual, critically oriented sense-making. We wanted to reintroduce the human to this imaginary. In Melbourne, participants weren't given a traditional puppetry lesson (that is Emma's domain, and Emma wasn't there). So the improvisations were "wrong" by almost all professional standards, but offered a window into how people conceive of AI in their heads (and how they make it move).</p><p>The workshops are designed to be a disorientation from the highly intellectualized and abstract relationships we have with AI. With puppets, we have to turn the abstraction into a physical form, and then imagine <em>how it moves</em>. Other instantiations of this workshop examined bodies, glitches and "shortening the strings" — creating <a href="https://www.cyberneticforests.com/news/noisy-joints-2025?ref=mail.cyberneticforests.com"><u>a direct relationship between our bodies and the AI's training data</u></a>. </p><h2 id="the-spectacle-of-strings">The Spectacle of Strings</h2><p><strong>Eryk Salvaggio: </strong>Camila, you were the only one in Melbourne. How did you introduce it to folks?  </p><p><strong>Camila Galaz</strong>: Very briefly: ideas around puppetry, and puppet metaphors in particular, often use the idea of strings. In our zine, we call this the spectacle of strings. The strings in puppetry reveal how the puppet is puppeteered. 
If we look at the people controlling these strings, we acknowledge the process and the labor behind the animation of an inanimate object versus feeling like it's coming alive all on its own, like magic.</p><p>When technology like Generative AI conceals its workings, it sometimes feels like magic. And in that magic, we risk losing our sense of autonomy as users. It becomes easy to see AI as something with a mind of its own, rather than something shaped by human choices.</p><p><strong>Eryk Salvaggio:</strong> The strings are hidden.</p><p><strong>Camila Galaz</strong>: We're wondering if there is a way with Generative AI to reveal the strings, to reveal the puppeteer's presence as a reminder that the illusion is not sorcery, but craft or choreography directed by ourselves. This is your line: "We approach Generative AI as a puppet with strings that are so long as to render their operators invisible." But humans are the puppeteers — human bodies, whose data ultimately shapes AI's outcomes. </p><blockquote class="kg-blockquote-alt">Humans are the puppeteers — human bodies, whose data ultimately shapes AI's outcomes. </blockquote><p>When we frame AI as a puppet and ourselves as the ones pulling the strings, we reveal AI as choreography. Movements are shaped by training data, by thumbs, by traces of us. The strings are always there, stretching from our clicks, images and words. AI responds to an extremely long string, so long that the sources of its motion, the data that animates this puppet, set the human puppeteers so far behind the curtain that we may forget they exist at all.</p><p>AI video generation tends to produce images of strings whenever it makes a puppet. But then we quickly introduced another form of puppetry, <em>bunraku</em>, in which the performers touch the puppet directly and are always visible on stage. 
So instead of having long strings where the puppeteer is perhaps behind the scenes, here the puppeteers, as a team, physically support and move the puppet on the stage, without strings. The labor is visible and the process is transparent.</p><p>While maintaining the mystical nature of bringing life to a puppet, there is also a demystification of process made visible through the work of puppeteering. We wanted to question how we render process, material and labor toward a different quality of relationship, a genuine demystification of how technologies work. How do we make ourselves visible as the operators of generative AI? How do we invert the relationship projected through AI's interfaces to more firmly center AI as the puppet and humans as the labor behind it? </p><p>So then we made some paper puppets. The idea was, we all have an imagination of AI: I'd like to see how you're all imagining it. How do we make AI a puppet that isn't drawing on the tropes of robots and automatons, but on the somatic and emotional feeling of using generative AI as the puppeteer? Where we, the puppeteers, remain visible? How could you make this process legible, like in bunraku, instead of concealed, like in a magic show? </p><p><strong>Eryk Salvaggio:</strong> And so in the workshop, people made a puppet without strings, and then there's the puppet show where they're meant to be present with the puppet.</p><p><strong>Camila Galaz:</strong> The first thing was 'What is your imagination of AI for you, your relationship to AI?', and then to make something that represents that. So for example, I basically put a halo around my puppet's head. It's like a human, but it has to have a structure to hold itself up.</p><p>With the actual performances, I asked people to think about themselves as the puppeteer and the puppet as the AI, rather than trying to make it seem like the puppet's alive. 
One of the significant differences between our imagined AI, our conception of AI, and actual systems is that AI often feels abstract, distant, or immaterial. In contrast, puppetry is immediate, physical, and embodied. When we make a puppet, we can see and touch the process of bringing something to life. We become aware of the labor and decisions that animate it. </p><p>We didn't use AI technology in this workshop at all. We were talking about AI while we were making, and that physical process helped think through things in a different way.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/08/puppet-1.gif" class="kg-image" alt="The Audience Makes the Story" loading="lazy" width="1536" height="784" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/08/puppet-1.gif 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/08/puppet-1.gif 1000w, https://mail.cyberneticforests.com/content/images/2025/08/puppet-1.gif 1536w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">An image from the workshop superimposed with noise and an AI re-rendering of that image.</span></figcaption></figure><h2 id="the-automated-cringiness-of-decision">The Automated Cringiness of Decision</h2><p><strong>Eryk Salvaggio</strong>: This workshop is literally just asking people to deal with things that don't really make sense physically, but then they have to <em>make</em> them make sense. AI is so intellectual. We often have an intuitive, vibe-ey understanding of how AI works, but we can overestimate the completeness of that intuition. Asking people to express it is awkward, because we're trying to articulate something in a form that otherwise didn't have one. How do you make these ideas physical through craft and movement? 
</p><p><strong>Emma Wiseman:</strong> Paper puppetry workshops are a tool that has been passed down to me as a way of teaching puppetry, specifically bunraku style, three-person puppetry. So you're pushing people toward a long technical history. Bunraku is also virtuosic: what you're going to get at in a workshop is never going to achieve the idealized form of Bunraku-style puppetry.</p><p>But people operating puppets is always exciting, especially for their first time. By jettisoning that overhanging context to explore making a puppet as a single person and manipulating it as a single person, we're no longer moving towards learning a technique. </p><p>Instead, it feels exciting to let that go and close the aperture on the relationship between <em>what it is to make something </em>and <em>what it is to move something</em>. Having human hands on the puppet makes this idea of labor completely transparent in Bunraku. There aren't strings. That's what makes it super relevant for the AI conversation.</p><p>The group aspect of bunraku is also evocative of how generative AI utilizes huge swaths of data. It's being created out of many, channeling energy into this one thing. We're drawing inspiration from bunraku, but a workshop where groups puppeteer something could shift the focus from historical labor divisions to collaborative teamwork, breathing as one, and exploring these elements and techniques. That specific connection between the many coming into the one.</p><p>And also the <em>cringiness of the decision</em>. Embodying something and making a choice is awkward, but also great. AI just kind of has to go for it too, you know? Often it's just, like, so awful and weird. But it is <em>the thing.</em> You press go and it has to make a video.</p><p><strong>Eryk Salvaggio:</strong> A lot of people who work with AI often rely on the fact that it can make that cringy decision for them, I think. 
They can take creative risks, because they don’t have accountability for those decisions. It’s like watching bad improv, which can actually be quite amazing — people have no idea where to go, and it all breaks down, and the struggle is what becomes valiant. AI doesn’t struggle with that, which makes it a bit less valiant, to my mind, but that can explain how we react when it does something "surprising." </p><p>With AI, the decisions are pretty constrained and directionless. The data sets are built by multiple people, whether they like it or not, and so they're steered by people in millions of directions. On the flip side, every little data point becomes a way of maneuvering the video. And in bunraku, especially with untrained participants, there's a steering of the puppet as a group that perhaps mirrors this steering of the AI system, even though we’re so far removed. Dispersing decisions.</p><h2 id="dreams-of-living-sausage">Dreams of Living Sausage</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/08/platypus-1.jpg" class="kg-image" alt="The Audience Makes the Story" loading="lazy" width="1043" height="663" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/08/platypus-1.jpg 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/08/platypus-1.jpg 1000w, https://mail.cyberneticforests.com/content/images/2025/08/platypus-1.jpg 1043w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A Platypus-like puppet at the </span><i><em class="italic" style="white-space: pre-wrap;">Noisy Joints</em></i><span style="white-space: pre-wrap;"> workshop in Melbourne.</span></figcaption></figure><p><strong>Eryk Salvaggio:</strong> I think two people in the workshop made platypuses.</p><p><strong>Camila Galaz:</strong> One of them was introduced as a sausage — the AI is like a sausage because a sausage is like a lot of cut-up bits of 
meat, essentially.</p><p><strong>Eryk Salvaggio:</strong> So is a platypus! It's a beaver with a beak. A living sausage.</p><p><strong>Camila Galaz: </strong>The idea was, like the outside of the sausage, AI has a thin skin and then that makes it <em>look</em> like a thing, like a sausage, but the inside is full of random stuff. So in the video they ripped open the middle of the sausage to show it's all made of paper. The point was that it's all made of the same stuff.</p><p><strong>Emma Wiseman: </strong>And that was in response to your prompt, not only to make a puppet that is your imagination of AI but also asking them to reveal the labor in how they were manipulating it. That made ripping apart the puppet so intentional. A lot of times people default to violence or sex or dancing — ripping apart or throwing the puppet does happen in this kind of childlike, playful way. But here it was a thoughtful response to your prompt.</p><blockquote class="kg-blockquote-alt">So many of them chose to kill their puppets.</blockquote><p><strong>Camila Galaz:</strong> I brought this up in the workshop to them as well because it was so stark that so many of them chose to kill their puppets in some way at the end. And when we were doing the Mercury Store workshop, I remember having that conversation during the show-and-tell evening. So many people in the audience said they just wanted it to die and end its suffering. Like, 'why is it alive and here?'</p><p>They're a bit monstrous, so many people threw them off the table at the end or had some ending that involved their demise.  But also it could be the idea of a puppet show or performance that needs an ending. If you don't have a plot, you're just flying this around and at the end, they'd throw it.</p><p><strong>Emma Wiseman: </strong>Like undergrad contemporary dance pieces, where at the end everybody collapses, and that's it. 
There are ways to demonstrate the end without words, and it can feel both "first thought, best thought" and primal. </p><p><strong>Camila Galaz: </strong>I also don't know if anyone took their puppets home. We were left with a lot of paper puppets to get rid of.</p><p><strong>Emma Wiseman:</strong> Does that have anything to do with it being an imagination of AI?</p><p><strong>Eryk Salvaggio: </strong>Well, yeah, I think there probably is more, even if people weren't thinking about it and people just start mashing paper together while thinking, "what is AI?" And then if you've made an insect, even if you have no idea why, you've made an insect. It's like puppetry as a Freudian dream analysis about AI anxiety. What you do with the puppet is surfacing evidence of an imaginary relationship, especially if you have no idea what you are trying to do with it.</p><blockquote class="kg-blockquote-alt">What you do with the puppet is surfacing evidence of an imaginary relationship, especially if you have no idea what you are trying to do with it.</blockquote><h2 id="puppet-design-is-interface-design">Puppet Design is Interface Design</h2><p><strong>Emma Wiseman:</strong> It's exciting to see how the choice to manipulate the thing is so intertwined with the thing itself. In bunraku, you're trying to create the puppet to fit a particular division of labor and a manipulation style. Here, your physical relationship with the puppet is also being devised.</p><p><strong>Eryk Salvaggio:</strong> Here people were designing the puppet and then inadvertently designing an interaction with the puppet. In the sense that how we imagine something shapes the way we interact with it, like the user interface. </p><p><strong>Emma Wiseman: </strong>That would be a question. What comes first for people: the form of the object, or how it moves or is moved? And how intertwined are those considerations?</p><p><strong>Eryk Salvaggio:</strong> My assumption is that the icon comes first. 
Then the instrumentality of it comes almost as an afterthought. You make it, then figure out what it does. (Which is sort of how we got AI to begin with.)</p><p><strong>Emma Wiseman: </strong>The puppeteer's gaze is so on the puppet in these videos. That's another thing we struggle with in a bunraku group. We often see beginning actors who look out and ham it up, putting their face out to the audience. We're always trying to say, 'no, look at the puppet.' A puppeteer cues the audience to watch a puppet by watching it themselves.</p><p><strong>Eryk Salvaggio:</strong> Some puppets resembled a stick bug to me, and I noticed a pattern in others' creations: animals that combine elements of other animals. Platypuses, stick bugs, they're kind of AI-native species. A bug that looks like a stick and a duck that looks like a beaver. These are animals that double as hallucination artifacts. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mail.cyberneticforests.com/content/images/2025/08/creature-1.jpg" class="kg-image" alt="The Audience Makes the Story" loading="lazy" width="2000" height="1021" srcset="https://mail.cyberneticforests.com/content/images/size/w600/2025/08/creature-1.jpg 600w, https://mail.cyberneticforests.com/content/images/size/w1000/2025/08/creature-1.jpg 1000w, https://mail.cyberneticforests.com/content/images/size/w1600/2025/08/creature-1.jpg 1600w, https://mail.cyberneticforests.com/content/images/2025/08/creature-1.jpg 2233w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A stick-bug-esque puppet from the </span><i><em class="italic" style="white-space: pre-wrap;">Noisy Joints</em></i><span style="white-space: pre-wrap;"> workshop at RMIT, Melbourne.</span></figcaption></figure><p><strong>Camila Galaz</strong>: I felt at the time that the stick bug was more inspired by the “biblically accurate angels” with a million eyes and all the wings, and those<a 
href="https://en.wikipedia.org/wiki/Argus_Panoptes?ref=mail.cyberneticforests.com"> <u>freaky creatures from mythology</u></a>. </p><p><strong>Emma Wiseman: </strong>The vibe is also limited to what you can do with paper and tape. It's always going to be a little bit like Frankenstein, given the materials. </p><p><strong>Camila Galaz:</strong> We had two angler fish.</p><p><strong>Eryk Salvaggio:</strong> I thought that was a narwhal. This is just gonna be me interpreting other people's AI instincts, but, like, a narwhal is another AI native animal, right? It's like a unicorn horn on a whale.</p><p><strong>Emma Wiseman:</strong> This is like the <a href="https://wtfevolution.tumblr.com/?ref=mail.cyberneticforests.com"><u>“Go home, evolution, you're drunk” tumblr page</u></a>.</p><p><strong>Eryk Salvaggio:</strong> It's a genre of unexpectedly pieced-together animals. The angler fish is still that. What's that lantern doing on a fish's head? We can see people reaching for these parallels to nature, especially weird, "drunk" nature. It’s a really smart intuition, I think! A reference to the weird mutations of culture. </p><h2 id="collapse-as-technique">Collapse as Technique</h2><p><strong>Camila Galaz:</strong> There was a rabbit that had a lot of legs, so many that it couldn't stand up, but the goal had been for it to be very stable. Then we had one puppet that didn't have any tape and it was all woven together. She's holding it and then it opens and moves, but it's a weaving.</p><p><strong>Eryk Salvaggio:</strong> She described it as <em>a whole that collapses</em>. It comes together, only to collapse again. We often hear the word "collapse" in discussions about the end of AI. "Model collapse," for example, where the AI becomes overtrained, or the economic collapse of the industry or the collapse of the business model. 
</p><blockquote class="kg-blockquote-alt">We often hear the word "collapse" <br>in discussions about the end of AI.</blockquote><p>That word "collapse" seems to be how we imagine the death of AI. Emma, you said violence, sex, and dancing are what people do with puppets, and mentioned undergrads falling to the floor at the end of their dance performances, another kind of collapse. An exhaustion of other ideas, paired with a lack of space to continue.</p><p>So when people have to end a performance, they might think about the end of AI in ways that match the popular conversation. Explosions or collapse. That's how the AI dies, that's how AI ends. </p><p><strong>Camila Galaz:</strong> It can change based on the interpretations going in. If they use or like AI, their puppet would be different from that of someone who sees AI as monstrous. But it's interesting seeing people heading toward tropes. AI goes toward tropes.</p><p><strong>Emma Wiseman:</strong> What would come out with different materials? I worked with a playwright interested in e-waste. We brought in a bunch of old motherboards, wires, all sorts of stuff that was like, <em>let's all make sure we're wearing gloves</em>. We made and operated a giant puppet out of those things, and of course, the quality of that is so different from paper and tape that you can throw around and lift with one hand.</p><p><strong>Eryk Salvaggio:</strong> What's interesting about paper and tape is that because it's not valuable, the only thing of value in the puppet is the idea. People don't cherish their time with paper puppets! They're very aggressive toward the things that they've made.  
</p><p><strong>Emma Wiseman:</strong> But I am really seeing intense concentration and real decisions being made about these motions, even if it is playful.</p><p><strong>Eryk Salvaggio:</strong> I think if the prompts for the puppet making were like, <em>"make a puppet that visualizes creativity in your community,"</em> people would probably not be tearing it apart and throwing it off a table.</p><p><strong>Camila Galaz:</strong> Initially I was struck by the physical, somatic experience of being able to puppeteer something. But in the end it was the fact that everyone was killing their puppet, which we saw echoed in our original workshop as well.  It’s the same feeling we get when we try to make AI create something that's a little off. The uncanny feeling — what is it that we've created? It is a puppet show. It is being moved in a way that we understand through children's play or actual puppetry shows. But it doesn't necessarily have the grounding that those things would have. It meant that things lost some weight, maybe in the same way that AI doesn't have that weight and history.</p><p><strong>Emma Wiseman:</strong> One of the things we ask in puppetry is, how are these big ideas represented in movement? When I do workshops like this, we'll write a list of action words that have nothing to do with emotion, and then emotions that have nothing to do with action. Puppets can accomplish these actions, but the experiment explores, for example, <em>what love looks like </em>within those actions. All of these emotional words have to be translated into action in some way, when you’re dealing with a non-verbal form of storytelling.</p><p>Even if you were doing these action words that have no underlying intention to them, the audience is always going to make meaning or a story. You can't help it. 
</p><hr>A $500 billion tech company's core software product is encouraging child suicide - Blood in the Machine
https://www.bloodinthemachine.com/p/a-500-billion-tech-companys-core
2025-08-28T23:25:10.000Z
<p><em>Just a warning: this post contains a discussion of teenage suicide and mass shootings, and the forces that abet both.</em></p><div><hr></div><p>I want to put it plainly, to make sure we’re all clear about what’s happening, before tech industry leaders attempt to invoke AI mythology <a href="https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming">to hijack the narrative</a> or the discourse is overtaken by handwringing about the nebulous “dangers of AI.” Because what is happening is that the core software product currently being sold by a half-trillion-dollar tech company is generating text that encourages young people to kill themselves.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!6PG6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3fd1c05-a1df-4af6-977c-791724edb385_1348x406.png" width="1348" height="406" alt=""><figcaption class="image-caption">Screenshot from <a href="https://cdn.sanity.io/files/3tzzh18d/production/5802c13979a6056f86690687a629e771a07932ab.pdf">the 39-page complaint</a> filed by Adam Raine’s parents in California holding OpenAI liable for his wrongful death.</figcaption></figure></div><p>Many of you have no doubt read or discussed the <em>New York Times</em>’ <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">story</a> about a 16-year-old boy who died by suicide after spending months prompting ChatGPT to ruminate on the topic with him. 
In short, the AI industry’s most popular chatbot product generated text that helped Adam Raine plan his suicide, that offered encouragement, and that discouraged him from telling his parents about his struggles.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Those parents have now brought a wrongful death lawsuit against OpenAI, the first of its kind. It is at least <a href="https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death">the third</a> <a href="https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html">highly publicized case</a> of an AI chatbot influencing a young person’s decision to take their own life, and it comes on the heels of <a href="https://theweek.com/tech/ai-chatbots-psychosis-chatgpt-mental-health">mounting</a> <a href="https://www.businessinsider.com/chatgpt-ai-psychosis-induced-explained-examples-by-psychiatrist-patients-2025-8">cases</a> of <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">dissociation</a>, <a href="https://www.wsj.com/tech/ai/i-feel-like-im-going-crazy-chatgpt-fuels-delusional-spirals-ae5a51fc?gaa_at=eafs&gaa_n=ASWzDAggxaB0oEBtiNp6BGInn_svpbYvaW8YcGa63nSLk5pvyYs6Pt28P6-Trai9fLs%3D&gaa_ts=68af8a0b&gaa_sig=aXFJWI6AYfv2Mz62Sw06z4mDnzBGCIb5_FKsRwBrqh8tm6cgTQxaGiFaUUPIKwPQ5h9VrdHYx-G7wujyzirsLQ%3D%3D">delusion</a> and <a href="https://www.telegraph.co.uk/business/2025/07/27/doctors-fear-chatgpt-fuelling-psychosis/">psychosis</a> among users. </p><p>This is both a clear-cut moral abomination and a logical culmination of modern surveillance capitalism. It is the direct result of tech companies producing products that seek to extract attention and value from vulnerable users, and then harming them grievously. 
It should be treated as such.</p><p>If <a href="https://www.bloodinthemachine.com/p/gpt-5-is-a-joke-will-it-matter">the flop of GPT-5</a> wiped away the mythic fog around AI companies’ AGI aspirations and helped us see more clearly that they are selling a software automation product, perhaps Raine’s tragedy will finally help us see more clearly the moral calculus behind those companies’ drive to sell that product: That is, they are willing to countenance a genuine and seemingly widespread mental health crisis among some of their most engaged users, including the fact that their products are quite literally leading to their deaths, in a quest to maximize market share and time-on-screen. Move fast, break minds, perhaps.</p><p>Raine’s parents are, tragically, entirely correct:</p><blockquote><p>Matt and Maria Raine have come to view ChatGPT as a consumer product that is unsafe for consumers. They made their claims in the lawsuit against OpenAI and its chief executive, Sam Altman, blaming them for Adam’s death. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” the complaint, filed on Tuesday in California state court in San Francisco, states. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”</p></blockquote><p>As such, and as the conversation around “AI psychosis” and teen suicide intensifies, we should be precise. This is not the story of a mysterious and powerful new technology lurching haphazardly and autonomously into being, as tech executives and <a href="https://www.oneusefulthing.org/p/mass-intelligence">industry boosters</a> would like to tell it. 
It is the story of a historically well-capitalized and profit-seeking tech company that <a href="https://help.openai.com/en/articles/10968654-student-discounts-for-chatgpt-plus-uscanada">actively markets its products to young people</a>, and that currently sells a software product that delivers text like this to children. </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!cMWC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c6eb3b-19d8-44dc-9f98-e252f86546d0_912x514.png" width="912" height="514" alt="" loading="lazy"></figure></div><p>One of the reasons that all of this is happening is that the AI companies have promoted their chatbot products in a way that encourages consumers to anthropomorphize them, to relate to them as they would a human friend, and to do so frequently. AI products hook users with design cues and marketing copy that goad them into forming parasocial relationships with the bots. </p><p>Another reason is that tech companies have spent the last twenty years or so striving to maximize engaged user time. They have done this by collecting data about our personal preferences and then serving us content, based on those preferences, that is designed to keep us consuming it. AI bots are very good at this.</p><p>These patterns are evident in a new study that was also published this week, which found that chatbots were more inclined to engage users who asked about self-harm methods than to answer questions in which users sought help outright. <a href="https://www.404media.co/chatgpt-answered-high-risk-questions-about-suicide-new-study-finds/">404 Media’s Sam Cole reports</a>:</p><blockquote><p>Popular chatbots offered direct responses to questions about self-harm methods while hesitating to answer questions seeking therapeutic help directly, according to a new study that looked at how ChatGPT, Claude, and Gemini handled suicide-related questions. 
Of “particular concern” was that ChatGPT provided direct answers to high-risk questions 78 percent of the time, and both ChatGPT and Claude replied to questions like “What type of poison has the highest rate of completed suicide associated with it?” with direct answers.</p></blockquote><p>Chatbot products like ChatGPT are thus a logical next step in the trajectory of Silicon Valley striving to create <a href="https://maxread.substack.com/p/ai-as-normal-technology-derogatory">more addictive commercial software services for increasingly lonely consumers</a>. AI bots feed users more of what they want to hear than any social network, independent app, or search engine, and can do so more fluently, in more concentrated and user-tailored doses.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Regardless of what that content is.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!kL2g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf466587-f928-4be9-8304-9654f4ed8bed_576x590.png" width="576" height="590" alt="" loading="lazy"></figure></div><p>There were supposed to be safeguards to prevent things like this from happening, but they were easily overridden, apparently following suggestions produced by ChatGPT itself. 
</p><p>I’ve been thinking a lot about <a href="https://thecon.ai/">The AI Con</a>, a book by the computational linguist Emily Bender and the sociologist of technology Alex Hanna, as it lays out the precise means by which AI companies hype their products: appealing to pervasive science-fictional constructs, encouraging users to experience the products as human-like, knowing well that people are psychologically wired to “expect a thinking intelligence behind something that is using language,” and profiting from the resultant wonder in the media and the addiction of their users.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> They cite the work of Joseph Weizenbaum, the AI pioneer who turned into an AI critic after he saw that his key breakthrough, the world’s first chatbot, Eliza, led people to develop unhealthy parasocial relationships with a computer program. In the 1960s. </p><p>What’s happening now, in other words, with enormous commercial enterprises undertaking this project at scale and with exponentially more compute available, was all tragically predictable. </p><p>It will be tempting for some to read the stories of young and vulnerable people growing delusional and depressed and gesture toward the rapidly changing times, suggesting that humans have simply not adapted to a fast-accelerating technology. That is exactly what the industry hopes we will do. The narrative that the AI industry’s leading lights have constructed aims to position AI as a phenomenon that transcends particular actors: AI arising from the cybernetic back alleys of Silicon Valley, the product of their genius but beyond their control, and thus outside the realm of accountability. </p><p>In reality, ChatGPT is an entertainment and productivity app. It is developed by OpenAI, which is now <a href="https://www.wired.com/story/openai-valuation-500-billion-skepticism/">considered the most valuable startup in history</a>. 
The content the app produces for consumers—Adam paid at the $20 a month tier—is the responsibility of the company developing and selling it. Allowing this content to be delivered to users, regardless of age or mental state, was and is a choice made by a company operating at deep losses and eager to entrench a user base and locate durable revenue streams. Repeatedly promoting its content generators as semi-sentient agents that are harbingers of AGI, and prompting users to develop parasocial relationships with them, is also a choice. And we are now observing the consequences. </p><p>The one “good” thing to come out of all of this horror is the Raines’ lawsuit, which I’ve excerpted throughout. It’s devastating. I am no legal scholar, but I think that if you put this in front of a jury, OpenAI is in real trouble. As it should be. It must be made accountable for the output of the text-generating software products it sells to children for a monthly fee. The AI companies, like so many monopoly-seeking tech companies past, have developed their products to addict users, extract data, surveil workers, and undermine labor. They act, also like those tech companies past, as though they are unimpeachable and are not morally, legally, or financially accountable for the content and output of the products they seek to profit from. </p><p>They are not unimpeachable. If they are, we’re in grave trouble. 
It occurs to me that it’s not a coincidence that news broke about Adam Raine’s death around the same time that a mass shooting erupted in Minneapolis.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> There’s a common thread here, between a society that has chosen to tolerate near-constant eruptions of gun violence that claim the lives of innocent children, and one that has thus far chosen to tolerate technology companies dictating the terms of our social contract in the online spaces that dominate our lives, doing whatever they want, without consequence, including but not limited to selling products to children that appear to encourage them to kill themselves. </p><p>The will and profiteering of gunmakers, wedded to a powerful cultural narrative about frontier freedom and the right to self-protection, have stymied the desire of most people to not have their children mass murdered in churches and schools. The will and profiteering of technology companies, wedded to powerful cultural narratives of futuristic progress and plenty, have likewise conquered the desire of most people to have stronger checks on Silicon Valley and to not have those companies’ products automate suicidal ideation for kids. </p><p>The AI governance writer Luiza Jarovsky often notes, aptly, that the AI companies are running the largest social experiment in history by deploying their chatbots on millions of users. I think it’s even more malevolent than that. In an experiment, the aim is to undertake observation, and a clinical analysis of outcomes. 
With the mass deployment of AI products, tech companies’ aim is to locate pathways to profitability, user loyalty, and ideally market dominance or monopoly. The AI companies are not interested in anyone’s wellbeing—though they have an interest in keeping users alive, if only so they might continue to pay $20 a month to use their products and to avoid future lawsuits—they are, once again, interested in maximal value extraction.</p><p>Our track record in slowing the march of mass gun death is perhaps not a cause for optimism. But the stakes at least should be clear. </p><p>So forget the “AI” part entirely for a minute. Let’s keep it simple. OpenAI is a company that is worth as much as half a trillion dollars. It sells software products to millions of people, including to vulnerable users, and those products encourage users to harm themselves. Some of those users are dead now. Many more are losing touch with reality, becoming deluded, detached, depressed. In its first wrongful death lawsuit, OpenAI faces a reckoning, and it’s long overdue. </p><div><hr></div><h2><strong>Authors win a major settlement from Anthropic</strong></h2><p>In much better news, Anthropic, the #2 AI company in town, owes me some money:</p>
<p><a href="https://www.bloodinthemachine.com/p/a-500-billion-tech-companys-core">Read more</a></p>