Shellsharks Blogroll - BlogFlock 2026-02-27T07:01:46.054Z BlogFlock Werd I/O, Evan Boehs, Robb Knight, Aaron Parecki, destructured, Molly White, fLaMEd, Trail of Bits Blog, Westenberg, gynvael.coldwind//vx.log (pl), James' Coffee Blog, joelchrono, Kev Quirk, cool-as-heck, Posts feed, Sophie Koonin, Adepts of 0xCC, <span>Songs</span> on the Security of Networks, cmdr-nova@internet:~$, Johnny.Decimal, Hey, it's Jason!, Terence Eden’s Blog Published on Citation Needed: "Issue 101 – Bought and paid for" - Molly White's activity feed 69a0c2e2498c9cae1763b0e8 2026-02-26T22:02:10.000Z <article class="entry h-entry hentry"><header><div class="description">Published an issue of <a href="https://www.citationneeded.news/"><i>Citation Needed</i></a>: </div><h2 class="p-name"><a class="u-syndication" href="https://www.citationneeded.news/issue-101" rel="syndication">Issue 101 – Bought and paid for </a></h2></header><div class="content e-content"><div class="media-wrapper"><a href="https://www.citationneeded.news/issue-101"><img src="https://www.citationneeded.news/content/images/size/w1200/format/webp/2026/02/trump-tahnoon.jpg" alt="President Trump walks with Sheikh Tahnoon bin Zayed Al Nahyan at the White House in March 2025"/></a></div><div class="p-summary"><p>Bitcoin is down 50%, several prominent industry figures have been uncovered in the Epstein files, Trump’s facing a probe into his family’s $500M deal with the UAE, and crypto super PACs spend their first $6 million in the midterms.</p></div></div><footer class="footer"><div class="flex-row post-meta"><div class="timestamp">Posted: <a href="https://www.citationneeded.news/issue-101"><time class="dt-published" datetime="2026-02-26T22:02:10+00:00" title="February 26, 2026 at 10:02 PM UTC">February 26, 2026 at 10:02 PM UTC</time>. 
</a></div><div class="social-links"> <span>Also posted to:</span><a class="social-link u-syndication mastodon" href="https://hachyderm.io/@molly0xfff/116139192506197110" title="Mastodon" rel="syndication">Mastodon</a><a class="social-link u-syndication bluesky" href="https://bsky.app/profile/molly.wiki/post/3mfs72z72as2n" title="Bluesky" rel="syndication">Bluesky</a></div></div><div class="bottomRow"><div class="tags">Tagged: <a class="tag p-category" href="https://www.mollywhite.net/feed/tag/binance" title="See all feed posts tagged &quot;Binance&quot;" rel="category tag">Binance</a>, <a class="tag p-category" href="https://www.mollywhite.net/feed/tag/corruption" title="See all feed posts tagged &quot;corruption&quot;" rel="category tag">corruption</a>, <a class="tag p-category" href="https://www.mollywhite.net/feed/tag/sec" title="See all feed posts tagged &quot;SEC&quot;" rel="category tag">SEC</a>, <a class="tag p-category" href="https://www.mollywhite.net/feed/tag/trump_administration" title="See all feed posts tagged &quot;Trump administration&quot;" rel="category tag">Trump administration</a>.</div></div></footer></article> A Sunday of chaos and quiet - Joel's Log Files https://joelchrono.xyz/blog/sunday 2026-02-26T17:00:00.000Z <p>I woke up early, turned on my laptop and opened the presentation for today’s sermon; the preacher had sent it overnight and I was revising it. It took longer than expected, but it was bound to happen—AI-generated graphics were used.</p> <p>Helping with the media and worship presentations during service at my local church is a common task of mine. I probably should have done more to prevent this and set some guidelines beforehand, but now that it had happened, I wasn’t going to let it fly. After a short explanation of why, I saved the day, and educated one more person about the horrors of AI slop. I wish that had been the biggest mess of the day…</p> <p>The morning kept going: I finished the presentation, song lyrics, the usual stuff. 
I have no breakfast on Sundays; I dressed up—more formal than usual, I was feeling fancy—and headed out with my family!</p> <p>After the service was over—with AI nowhere to be seen—everyone stayed to have lunch together.</p> <p>My church has been collecting money for a new building, so each week a different family prepares and sells a meal for everyone—we are a rather small group—to raise funds and enjoy the time together.</p> <p>I get in line for my serving—some great <em>tacos de bistec</em> that I couldn’t stop eating—and I sat in a different spot than normal, since most of my friend group helped prepare the orders, or handle cash. I was enjoying myself, listening to the chatter without saying much. I start to hear about violence going on in the streets, about drones flying above some areas; things start to get a little tense.</p> <p>As much as I hate it, I install Facebook again—still the most popular social network in Mexico. I read the headlines, I read posts from the government in my state and my city. I hear the people around me, sharing their own thoughts and updates on the situation.</p> <p>Some people with friends or contacts elsewhere start to get phone calls, people checking up on each other, people making sure everyone is safe. A few leave early, but things keep going pretty well.</p> <p>I talk about it on the fediverse, in some group chats. I check up and see this is news worldwide. Flights are delayed, roads are blocked, hotels closed their doors—nobody in, nobody out.</p> <p>The leader of the biggest drug cartel in Mexico died.</p> <p>After this military operation, the narco is acting in retaliation with terrorist acts all over the country. The posts online from the local government are clear: go home and stay inside.</p> <p>Things were going fine. People left at a rather slow pace, those who stayed finished their meal, and things went on as usual—the kids played and ran around outside. 
We were all rather calm, it was no big deal.</p> <p>Perhaps it is in moments like these that faith shines through, and being together brought a sense of peace amidst the chaos. Maybe it’s naive, maybe we are oblivious to reality. But we rest assured, beyond any despair this world may bring.</p> <p>And the day went on, we returned home, and enjoyed a quiet evening. My mom decided to catch up on her current TV show of choice, my grandma joined her, even my dad. They made some popcorn too.</p> <p>I wasn’t feeling like watching TV, and I stopped reading the news. It had been a while since we had popcorn, so I got my own bowl and went to my bedroom to <del>watch Resident Evil retrospective videos and essays</del> clean up and get it tidy.</p> <p>Later I returned to my playthrough of <em>Resident Evil 2</em>. It has been such an enjoyable experience, and it felt a bit surreal this time; I couldn’t help but think about the fact that I’m surviving a zombie apocalypse while there are vehicles on fire and buildings closed all over my state. At least I don’t think zombies stand a chance in Mexico; I admit I chuckled at the intrusive thought.</p> <p>Checking my emails, I realize I won’t have to work on Monday, and that my gym will be closed too. Not much to worry about, just an unexpected day off, a welcome development despite the circumstances.</p> <p>A bit later, my church did an online call, where we prayed together about the whole situation going on; everyone arrived home well, and even though shops and the like were closed, we all had our needs covered. We prayed for the authorities, forgiveness for those committing these acts and their repentance, for the people who suffered a loss, and those who struggled because of it all.</p> <p>My dad prepared some <em>migas con huevo</em> for dinner—probably my favorite way to eat eggs. 
I was happy that I wouldn’t have to wake up early tomorrow, and things went as normal, then it was time to go to sleep.</p> <hr/> <p>The thought that I could have realized absolutely nothing if I hadn’t been with other people is also weird. I really am completely unaware of the news most of the time, but when stuff like this can happen all of a sudden, I see now why some folks are often very worried about what’s going on around them.</p> <p>I guess I should try to find a balance. It’s easy to ignore that which doesn’t affect you, I guess, but then it does, and <em>now what?</em> I do hope that I’ll remain calm even then; maybe I can say it because nothing really affected me much, just the tension in the moment. I don’t know, I’ll figure something out.</p> <p>I’d also like to thank those who checked up on me! I’m writing these last couple of paragraphs a few days later now, and it really does feel kinda melancholic. I thought about writing about my day off too, but it really was just a normal time staying at home and playing videogames. 
That Sunday though, that was a day full of mixed emotions, but it kind of got me to see some things differently; it was an interesting experience.</p> <p>This is day 22 of <a href="https://100daystooffload.com">#100DaysToOffload</a></p> <p> <a href="mailto:me@joelchrono.xyz?subject=A Sunday of chaos and quiet">Reply to this post via email</a> | <a href="https://fosstodon.org/@joel/116138132859293592">Reply on Fediverse</a> </p> This time is different - Terence Eden’s Blog https://shkspr.mobi/blog/?p=64559 2026-02-26T12:34:39.000Z <p>3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.</p> <p>The problem is, the same dudes (and it was nearly always dudes) who were pumped for all of that bollocks now won&#39;t stop wanging on about Artificial Fucking Intelligence.</p> <p>&#34;It&#39;s gonna be the future bro, just trust me!&#34;</p> <p>&#34;I dunno, man. Seems like you say that about every passing fancy - and they all end up being utterly underwhelming.&#34;</p> <p>&#34;This time is different!&#34;</p> <p><em>*sigh*</em></p> <blockquote><p>The investor who says, “This time is different,” when in fact it’s virtually a repeat of an earlier situation, has uttered among the four most costly words in the annals of investing.</p> <p><a href="https://www.franklintempleton.com/forms-literature/download/TL-R16">16 rules for investment success - Sir John Templeton</a></p></blockquote> <p>All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don&#39;t doubt that AI will be a <em>part</em> of the future - but it is obviously just going to be one of <em>many</em> technologies in use.</p> <blockquote><p>No enemies had ever taken Ankh-Morpork. 
Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn&#39;t own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.</p> <p>Terry Pratchett&#39;s <del>Faust</del> Eric</p></blockquote> <p>The ideology of &#34;winner takes all&#34; is unsustainable and not supported by reality.</p> Making fLaMEd fury Glow Everywhere With an Eleventy Transform - The Weblog of fLaMEd https://flamedfury.com/posts/making-flamed-fury-glow-everywhere-with-an-eleventy-transform/ 2026-02-26T12:00:00.000Z <p>What’s going on, Internet? I originally added a <code>.gradient-text</code> CSS class as a fun way to make my name (fLaMEd) and site name (fLaMEd fury) pop on the homepage. <a href="https://shellsharks.com/notes/2026/02/17/citations-css" rel="noopener">shellsharks</a> gave me a shoutout for it, which inspired me to take it further and apply the effect site-wide. Site-wide was the original intent, but it was only being applied manually in a handful of places, and I kept forgetting to add it whenever I wrote a new post or created a new page. Classic.</p> <p>Instead of hunting through templates and markdown files, I’ve added an Eleventy HTML transform that automatically applies the glow up.</p> <aside class="aside flow note"> <p class="aside__content">I had Claude Code help me figure out the regex and the transform config. This allowed me to get this done before the kids came home. 
Don't @ me.</p> </aside> <p>The effect itself is a simple utility class using <code>background-clip: text</code>:</p> <pre class="language-css"><code class="language-css"><span class="token selector">.gradient-text</span> <span class="token punctuation">{</span> <span class="token property">color</span><span class="token punctuation">:</span> transparent<span class="token punctuation">;</span> <span class="token property">background-image</span><span class="token punctuation">:</span> <span class="token function">var</span><span class="token punctuation">(</span>--gradient-flames<span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token property">padding</span><span class="token punctuation">:</span> 0.6rem 0<span class="token punctuation">;</span> <span class="token property">background-size</span><span class="token punctuation">:</span> 50%<span class="token punctuation">;</span> <span class="token property">background-clip</span><span class="token punctuation">:</span> text<span class="token punctuation">;</span> <span class="token punctuation">}</span></code></pre> <p>Swap <code>--gradient-flames</code> for whatever gradient custom property you have defined. The <code>background-size: 50%</code> repeats the gradient across the text for a more dynamic flame effect.</p> <p>The transform lives in its own plugin file and gets registered in <code>eleventy.config.js</code>. 
It runs after Eleventy has rendered each <code>.html</code> page, tokenises the HTML by splitting on tags, tracks a skip-tag stack, and only replaces text in text nodes.</p> <pre class="language-js"><code class="language-js"><span class="token keyword">export</span> <span class="token keyword">const</span> <span class="token function-variable function">glowUp</span> <span class="token operator">=</span> <span class="token parameter">eleventyConfig</span> <span class="token operator">=></span> <span class="token punctuation">{</span> eleventyConfig<span class="token punctuation">.</span><span class="token function">addTransform</span><span class="token punctuation">(</span><span class="token string">'glow-up'</span><span class="token punctuation">,</span> <span class="token punctuation">(</span><span class="token parameter">content<span class="token punctuation">,</span> outputPath</span><span class="token punctuation">)</span> <span class="token operator">=></span> <span class="token punctuation">{</span> <span class="token keyword">if</span> <span class="token punctuation">(</span><span class="token operator">!</span>outputPath<span class="token operator">?.</span><span class="token function">endsWith</span><span class="token punctuation">(</span><span class="token string">'.html'</span><span class="token punctuation">)</span><span class="token punctuation">)</span> <span class="token keyword">return</span> content<span class="token punctuation">;</span> <span class="token keyword">const</span> skipTags <span class="token operator">=</span> <span class="token keyword">new</span> <span class="token class-name">Set</span><span class="token punctuation">(</span><span class="token punctuation">[</span><span class="token string">'a'</span><span class="token punctuation">,</span> <span class="token string">'code'</span><span class="token punctuation">,</span> <span class="token string">'h1'</span><span class="token punctuation">,</span> <span class="token 
'h1'</span>">
string">'h2'</span><span class="token punctuation">,</span> <span class="token string">'h3'</span><span class="token punctuation">,</span> <span class="token string">'pre'</span><span class="token punctuation">,</span> <span class="token string">'script'</span><span class="token punctuation">,</span> <span class="token string">'style'</span><span class="token punctuation">,</span> <span class="token string">'textarea'</span><span class="token punctuation">,</span> <span class="token string">'title'</span><span class="token punctuation">]</span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token keyword">const</span> skipStack <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token punctuation">]</span><span class="token punctuation">;</span> <span class="token keyword">let</span> result <span class="token operator">=</span> <span class="token string">''</span><span class="token punctuation">;</span> <span class="token keyword">for</span> <span class="token punctuation">(</span><span class="token keyword">const</span> token <span class="token keyword">of</span> content<span class="token punctuation">.</span><span class="token function">split</span><span class="token punctuation">(</span><span class="token regex"><span class="token regex-delimiter">/</span><span class="token regex-source language-regex">(&lt;[^>]*>)</span><span class="token regex-delimiter">/</span></span><span class="token punctuation">)</span><span class="token punctuation">)</span> <span class="token punctuation">{</span> <span class="token keyword">if</span> <span class="token punctuation">(</span>token<span class="token punctuation">.</span><span class="token function">startsWith</span><span class="token punctuation">(</span><span class="token string">'&lt;'</span><span class="token punctuation">)</span><span class="token punctuation">)</span> <span class="token punctuation">{</span> <span class="token
keyword">const</span> isClosing <span class="token operator">=</span> token<span class="token punctuation">.</span><span class="token function">startsWith</span><span class="token punctuation">(</span><span class="token string">'&lt;/'</span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token keyword">const</span> isSelfClosing <span class="token operator">=</span> token<span class="token punctuation">.</span><span class="token function">endsWith</span><span class="token punctuation">(</span><span class="token string">'/>'</span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token keyword">const</span> tagMatch <span class="token operator">=</span> token<span class="token punctuation">.</span><span class="token function">match</span><span class="token punctuation">(</span><span class="token regex"><span class="token regex-delimiter">/</span><span class="token regex-source language-regex">^&lt;\/?([a-zA-Z][a-zA-Z0-9-]*)</span><span class="token regex-delimiter">/</span></span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token keyword">if</span> <span class="token punctuation">(</span>tagMatch <span class="token operator">&amp;&amp;</span> <span class="token operator">!</span>isSelfClosing<span class="token punctuation">)</span> <span class="token punctuation">{</span> <span class="token keyword">const</span> tag <span class="token operator">=</span> tagMatch<span class="token punctuation">[</span><span class="token number">1</span><span class="token punctuation">]</span><span class="token punctuation">.</span><span class="token function">toLowerCase</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token keyword">if</span> <span class="token punctuation">(</span><span class="token operator">!</span>isClosing<span class="token 
punctuation">)</span> <span class="token punctuation">{</span> <span class="token keyword">if</span> <span class="token punctuation">(</span>skipTags<span class="token punctuation">.</span><span class="token function">has</span><span class="token punctuation">(</span>tag<span class="token punctuation">)</span><span class="token punctuation">)</span> skipStack<span class="token punctuation">.</span><span class="token function">push</span><span class="token punctuation">(</span>tag<span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token keyword">else</span> <span class="token keyword">if</span> <span class="token punctuation">(</span>tag <span class="token operator">===</span> <span class="token string">'span'</span> <span class="token operator">&amp;&amp;</span> token<span class="token punctuation">.</span><span class="token function">includes</span><span class="token punctuation">(</span><span class="token string">'gradient-text'</span><span class="token punctuation">)</span><span class="token punctuation">)</span> skipStack<span class="token punctuation">.</span><span class="token function">push</span><span class="token punctuation">(</span><span class="token string">'gradient-text-span'</span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token punctuation">}</span> <span class="token keyword">else</span> <span class="token punctuation">{</span> <span class="token keyword">for</span> <span class="token punctuation">(</span><span class="token keyword">let</span> i <span class="token operator">=</span> skipStack<span class="token punctuation">.</span>length <span class="token operator">-</span> <span class="token number">1</span><span class="token punctuation">;</span> i <span class="token operator">>=</span> <span class="token number">0</span><span class="token punctuation">;</span> i<span class="token operator">--</span><span class="token punctuation">)</span> <span 
class="token punctuation">{</span> <span class="token keyword">if</span> <span class="token punctuation">(</span>skipStack<span class="token punctuation">[</span>i<span class="token punctuation">]</span> <span class="token operator">===</span> tag <span class="token operator">||</span> <span class="token punctuation">(</span>tag <span class="token operator">===</span> <span class="token string">'span'</span> <span class="token operator">&amp;&amp;</span> skipStack<span class="token punctuation">[</span>i<span class="token punctuation">]</span> <span class="token operator">===</span> <span class="token string">'gradient-text-span'</span><span class="token punctuation">)</span><span class="token punctuation">)</span> <span class="token punctuation">{</span> skipStack<span class="token punctuation">.</span><span class="token function">splice</span><span class="token punctuation">(</span>i<span class="token punctuation">,</span> <span class="token number">1</span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token keyword">break</span><span class="token punctuation">;</span> <span class="token punctuation">}</span> <span class="token punctuation">}</span> <span class="token punctuation">}</span> <span class="token punctuation">}</span> result <span class="token operator">+=</span> token<span class="token punctuation">;</span> <span class="token punctuation">}</span> <span class="token keyword">else</span> <span class="token keyword">if</span> <span class="token punctuation">(</span>skipStack<span class="token punctuation">.</span>length <span class="token operator">===</span> <span class="token number">0</span> <span class="token operator">&amp;&amp;</span> token<span class="token punctuation">)</span> <span class="token punctuation">{</span> result <span class="token operator">+=</span> token<span class="token punctuation">.</span><span class="token function">replace</span><span class="token punctuation">(</span><span 
class="token regex"><span class="token regex-delimiter">/</span><span class="token regex-source language-regex">flamed( fury)?</span><span class="token regex-delimiter">/</span><span class="token regex-flags">gi</span></span><span class="token punctuation">,</span> <span class="token parameter">match</span> <span class="token operator">=></span> <span class="token template-string"><span class="token template-punctuation string">`</span><span class="token string">&lt;span class="gradient-text"></span><span class="token interpolation"><span class="token interpolation-punctuation punctuation">${</span>match<span class="token interpolation-punctuation punctuation">}</span></span><span class="token string">&lt;/span></span><span class="token template-punctuation string">`</span></span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token punctuation">}</span> <span class="token keyword">else</span> <span class="token punctuation">{</span> result <span class="token operator">+=</span> token<span class="token punctuation">;</span> <span class="token punctuation">}</span> <span class="token punctuation">}</span> <span class="token keyword">return</span> result<span class="token punctuation">;</span> <span class="token punctuation">}</span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token punctuation">}</span><span class="token punctuation">;</span></code></pre> <p>Tags in the <code>skipTags</code> set, along with any span already carrying the <code>gradient-text</code> class, push onto the stack. No replacement happens while the stack is non-empty, so link text, code examples, the page <code>&lt;title&gt;</code>, and already-wrapped instances are all left alone. HTML attributes like <code>alt</code> and <code>href</code> are never touched because they sit inside tag tokens, not text nodes.</p> <p>A single regex <code>/flamed( fury)?/gi</code> handles everything in one pass. 
The optional group greedily matches &quot; fury&quot; when present, so “fLaMEd fury” is always wrapped as a unit rather than just “fLaMEd”. The <code>i</code> flag covers every capitalisation variant (“fLaMEd fury”, “Flamed Fury”, “FLAMED FURY”) with the original casing preserved in the output. This helps because I can be inconsistent with the styling at times.</p> <p>Export the plugin from wherever you manage your Eleventy plugins:</p> <pre class="language-js"><code class="language-js"><span class="token keyword">import</span> <span class="token punctuation">{</span> glowUp <span class="token punctuation">}</span> <span class="token keyword">from</span> <span class="token string">'./plugins/glow-up.js'</span><span class="token punctuation">;</span> <span class="token keyword">export</span> <span class="token keyword">default</span> <span class="token punctuation">{</span> <span class="token comment">// ...other plugins</span> glowUp<span class="token punctuation">,</span> <span class="token punctuation">}</span><span class="token punctuation">;</span></code></pre> <p>Then register it in <code>eleventy.config.js</code>. Register it before any HTML prettify transform so the spans are in place before reformatting runs:</p> <pre class="language-js"><code class="language-js">eleventyConfig<span class="token punctuation">.</span><span class="token function">addPlugin</span><span class="token punctuation">(</span>plugins<span class="token punctuation">.</span>glowUp<span class="token punctuation">)</span><span class="token punctuation">;</span> eleventyConfig<span class="token punctuation">.</span><span class="token function">addPlugin</span><span class="token punctuation">(</span>plugins<span class="token punctuation">.</span>htmlConfig<span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token comment">// html prettify</span></code></pre> <p>That’s it. 
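The case-preserving behaviour is easy to check in isolation. Here is a minimal, hypothetical standalone sketch of just the replacement step (it reuses the post's regex but is not the actual plugin file):

```javascript
// Minimal sketch of the replacement step: the same /flamed( fury)?/gi regex
// from the post, with the callback re-using `match` so each hit keeps its
// original casing in the output.
const wrap = (text) =>
  text.replace(/flamed( fury)?/gi, (match) => `<span class="gradient-text">${match}</span>`);

// wrap('Hello from fLaMEd fury!')
// → 'Hello from <span class="gradient-text">fLaMEd fury</span>!'
```

Because the callback interpolates `match` rather than a fixed string, "FLAMED FURY" stays upper-case inside the wrapping span.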
Any mention of the site name (fLaMEd fury) in body text gets the gradient automatically, in posts, templates, data-driven content, wherever.</p> <p>Look out for the easter egg I’ve dropped in. Later.</p> <p>Hey, thanks for reading this post in your feed reader! Want to chat? <a href="mailto:hello@flamedfury.com?subject=RE: Making fLaMEd fury Glow Everywhere With an Eleventy Transform">Reply by email</a> or add me on <a href="xmpp:flamed@omg.lol">XMPP</a>, or send a <a href="https://flamedfury.com/posts/making-flamed-fury-glow-everywhere-with-an-eleventy-transform/#webmention">webmention</a>. Check out the <a href="https://flamedfury.com/posts/">posts archive</a> on the website.</p> Four More Years - The Weblog of fLaMEd https://flamedfury.com/posts/four-more-years/ 2026-02-26T11:21:43.000Z <p>What’s going on, Internet? I’ve been using my iPhone 13 Pro for a little over four years now, since September 2021 and I want to keep using it for a couple more.</p> <p>Keyboard lag. Apps taking a second to think before opening. Battery health sitting at 80%.</p> <p>I had a quick look at the iPhone 17 and couldn’t justify it when everything else about this phone is still good. Storage is only half full. Camera is fine. It’s fast when it wants to be.</p> <p>So I tried the logical fix first: a $90 battery replacement from a local repair shop.</p> <img src="https://flamedfury.com/assets/images/posts/2026/2026-02-18-extending-the-life-of-my-iphone-01.jpeg" alt="iPhone battery health screen showing 80% maximum capacity" style="max-inline-size: 280px; display: block; margin-inline: auto" /> <p>This is where it had been sitting for a while. 
Still usable, but clearly the reason iOS had started throttling performance.</p> <p>Straight after the swap I got a warning about the battery not being genuine, something I had only been made aware of right before pulling the trigger, thanks to a comment on the <a href="https://www.ifixit.com/products/iphone-13-pro-battery" rel="noopener">iPhone 13 Battery page on iFixit</a>.</p> <img src="https://flamedfury.com/assets/images/posts/2026/2026-02-18-extending-the-life-of-my-iphone-02.jpeg" alt="iOS Important Battery Message warning after fitting a third-party battery" style="max-inline-size: 280px; display: block; margin-inline: auto" /> <p>I made peace with it and was prepared to live with the warning: $90 and 30 minutes, versus scheduling with an authorised repair dealer and being without the phone for up to four days. I just wanted the speed back.</p> <p>But apparently because I’m on a newer version of iOS, I had the option to run Apple’s verification process. So I did… and it passed.</p> <img src="https://flamedfury.com/assets/images/posts/2026/2026-02-18-extending-the-life-of-my-iphone-03.jpeg" alt="iPhone battery health screen restored to 100% maximum capacity after Apple verification" style="max-inline-size: 280px; display: block; margin-inline: auto" /> <p>Battery health back to 100%, full stats restored, and the warning moved to Parts &amp; Service History where it belongs.</p> <img src="https://flamedfury.com/assets/images/posts/2026/2026-02-18-extending-the-life-of-my-iphone-04.jpeg" alt="Parts and Service History screen showing the third-party battery listed as a note rather than a warning" style="max-inline-size: 280px; display: block; margin-inline: auto" /> <p>That’s basically the authorised-repair end result for third-party-repair money.</p> <p>I followed the usual calibration cycle: charge to 100%, leave it on the charger for a couple more hours, run it down until it turns off, charge back to 100%. 
Mostly to give iOS a clean read on the new battery.</p> <p>The battery fixes the hardware bottleneck. The other half is software. Years of installed apps, background processes, cached junk. So I’m preparing for a full wipe and setting the phone up as new. No restoring from backup; I’ll sign into iCloud, let the data sync back, and reinstall apps one at a time. Only the things I actually use get to come back. It’s the closest you get to a new phone without buying one.</p> <p>This whole reset cost less than a case for a new phone. If the lag disappears, that’s another couple of years out of a device that’s still more than good enough. If it doesn’t, then I look at upgrading. But it makes more sense to solve the worn-out-battery problem before spending thousands to avoid it.</p> <p>I’ll report back once the clean install is done and I’ve lived with it for a few days.</p> <p>Hey, thanks for reading this post in your feed reader! Want to chat? <a href="mailto:hello@flamedfury.com?subject=RE: Four More Years">Reply by email</a> or add me on <a href="xmpp:flamed@omg.lol">XMPP</a>, or send a <a href="https://flamedfury.com/posts/four-more-years/#webmention">webmention</a>. Check out the <a href="https://flamedfury.com/posts/">posts archive</a> on the website.</p> Members Only: Your anonymity set has collapsed and you don't know it yet - Westenberg 699fa289bb486a00012c43ed 2026-02-26T01:41:29.000Z Introducing Pure Comments (and Pure Commons) - Kev Quirk https://kevquirk.com/introducing-pure-comments-and-pure-commons 2026-02-25T15:04:00.000Z <p>A few weeks ago I <a href="https://kevquirk.com/introducing-pure-blog">introduced Pure Blog</a>, a simple PHP-based blogging platform that I've <a href="https://kevquirk.com/ive-moved-to-pure-blog">since moved to</a>, and I'm very happy. Once Pure Blog was done, I shifted my focus to start <a href="https://kevquirk.com/updates-to-my-commenting-system">improving my commenting system</a>. 
I ended that post by saying:</p> <blockquote> <p>At this point it's battle tested and working great. However, there's still some rough edges in the code, and security could definitely be improved. So over the next few weeks I'll be doing that, at which point I'll probably release it to the public so you too can have comments on your blog, if you want them.</p> </blockquote> <p>I've now finished that work and I'm ready to release <a href="https://comments.purecommons.org">Pure Comments</a> to the world. 🎉</p> <p>I'm really happy with how Pure Comments has turned out; it slots in perfectly with Pure Blog, which got me thinking about creating a broader suite of apps under the <em>Pure</em> umbrella.</p> <h2>Enter Pure Commons</h2> <p>I've had <a href="https://simplecss.org">Simple.css</a> since 2022, and now I've added Pure Blog and Pure Comments to the fold. So I decided I needed an umbrella to house these disparate projects. That's where <a href="https://purecommons.org">Pure Commons</a> comes in.</p> <p>My vision for Pure Commons is to build it into a suite of simple, privacy focussed tools that are easy to self-host, and have just what you need and no more.</p> <h2>What's next for Pure Commons?</h2> <p>Well, concurrent to working on Pure Comments, I've also started building a fully managed version that people will be able to use for a small monthly fee. That's about 60% done at this point, so I should be releasing that over the next few weeks.</p> <p>In the future I plan to add a managed version of Pure Blog too, but that will be far more complex than a managed version of Pure Comments. So I think that will take some time.</p> <p>I'm also looking at creating <em>Pure Guestbook</em>, which will obviously be a simple, self-hosted guestbook along the same vein as the other <em>Pure</em> apps. 
This should be relatively simple to build, as a guestbook is basically a simplified commenting system, so most of the code already exists in Pure Comments.</p> <p>Looking beyond <em>Pure Guestbook</em> I have some other ideas, but you will have to wait and see...</p> <p>In the meantime, please take a look at <a href="https://comments.purecommons.org">Pure Comments</a> - download the <a href="https://github.com/kevquirk/purecomments">source code</a>, take it for a spin, and provide any feedback/bugs you find.</p> <p>If you have any ideas for apps I could add to the <em>Pure Commons</em> family, please get in touch.</p> <div class="email-hidden"> <hr /> <p>Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️</p> <p>You can <a href="mailto:72ja@qrk.one?subject=Introducing%20Pure%20Comments%20%28and%20Pure%20Commons%29">reply to this post by email</a>, or <a href="https://kevquirk.com/introducing-pure-comments-and-pure-commons#comments">leave a comment</a>.</p> </div> Book Review: Of Monsters and Mainframes - Barbara Truelove ★★★⯪☆ - Terence Eden’s Blog https://shkspr.mobi/blog/?p=67527 2026-02-25T12:34:02.000Z <img src="https://shkspr.mobi/blog/wp-content/uploads/2026/02/monsters.webp" alt="Book cover." width="225" class="alignleft size-full wp-image-67528"/> <p>This is fun, silly, charming, and <em>much</em> better than <a href="https://shkspr.mobi/blog/2026/02/book-review-all-systems-red-the-murderbot-diaries-by-martha-wells/">The Murderbot Diaries</a> despite being superficially similar.</p> <p>Imagine you are an interstellar ship and, of course, your AI is conscious. What would you do if your passengers were killed - not by a terrifying alien, but by Count Dracula???</p> <p>What if, on the return journey, another set of your passengers were similarly slaughtered? Except, this time, by a Werewolf? How would that make you feel? Would it drive you mad? Could you cope with the bullying from other starships?
Or would you feel the need… the need for REVENGE!</p> <p>As I said, silly and campy fun. It is an episodic adventure with just the right amount of Hammer-style horror and not too much technobabble. All the classic monsters are here - depression, intrusive thoughts, envy, fear.</p> <p>Oh, and Frankenstein’s spider.</p> <p>As an ebook, it makes great use of fonts - which give it a delightfully retrofuturistic feel. There are some fun binary Easter-Eggs as well.</p> mquire: Linux memory forensics without external dependencies - Trail of Bits Blog https://blog.trailofbits.com/2026/02/25/mquire-linux-memory-forensics-without-external-dependencies/ 2026-02-25T12:00:00.000Z <p>If you’ve ever done Linux memory forensics, you know the frustration: without debug symbols that match the exact kernel version, you’re stuck. These symbols aren’t typically installed on production systems and must be sourced from external repositories, which quickly become outdated when systems receive updates. All too often, no one has published symbols for the specific kernel build you need to analyze.</p> <p>Today, we’re open-sourcing <a href="https://github.com/trailofbits/mquire">mquire</a>, a tool that eliminates this dependency entirely. mquire analyzes Linux memory dumps without requiring any external debug information. It works by extracting everything it needs directly from the memory dump itself.
This means you can analyze unknown kernels, custom builds, or any Linux distribution, without preparation and without hunting for symbol files.</p> <p>For forensic analysts and incident responders, this is a significant shift: mquire delivers reliable memory analysis even when traditional tools can&rsquo;t.</p> <h2 id="the-problem-with-traditional-memory-forensics">The problem with traditional memory forensics</h2> <p>Memory forensics tools like <a href="https://github.com/volatilityfoundation/volatility3">Volatility</a> are essential for security researchers and incident responders. However, these tools require debug symbols (or &ldquo;profiles&rdquo;) specific to the exact kernel version in the memory dump. Without matching symbols, analysis options are limited or impossible.</p> <p>In practice, this creates real obstacles. You need to either source symbols from third-party repositories that may not have your specific kernel version, generate symbols yourself (which requires access to the original system, often unavailable during incident response), or hope that someone has already created a profile for that distribution and kernel combination.</p> <p>mquire takes a different approach: it extracts both type information and symbol addresses directly from the memory dump, making analysis possible without any external dependencies.</p> <h2 id="how-mquire-works">How mquire works</h2> <p>mquire combines two sources of information that modern Linux kernels embed within themselves:</p> <p><strong>Type information from BTF</strong>: <a href="https://www.kernel.org/doc/html/next/bpf/btf.html">BPF Type Format</a> is a compact format for type and debug information originally designed for eBPF&rsquo;s &ldquo;compile once, run everywhere&rdquo; architecture. BTF provides structural information about the kernel, including type definitions for kernel structures, field offsets and sizes, and type relationships. 
We&rsquo;ve repurposed this for memory forensics.</p> <p><strong>Symbol addresses from Kallsyms</strong>: This is the same data that populates <code>/proc/kallsyms</code> on a running system—the memory locations of kernel symbols. By scanning the memory dump for Kallsyms data, mquire can locate the exact addresses of kernel structures without external symbol files.</p> <p>By combining type information with symbol locations, mquire can find and parse complex kernel data structures like process lists, memory mappings, open file handles, and cached file data.</p> <h3 id="kernel-requirements">Kernel requirements</h3> <ul> <li><strong>BTF support</strong>: Kernel 4.18 or newer with BTF enabled (most modern distributions enable it by default)</li> <li><strong>Kallsyms support</strong>: Kernel 6.4 or newer (due to format changes in <code>scripts/kallsyms.c</code>)</li> </ul> <p>These features have been consistently enabled on major distributions since they&rsquo;re requirements for modern BPF tooling.</p> <h2 id="built-for-exploration">Built for exploration</h2> <p>After initialization, mquire provides an interactive SQL interface, an approach directly inspired by <a href="https://github.com/osquery/osquery">osquery</a>. This is something I&rsquo;ve wanted to build ever since my first Querycon, where I discussed forensics capabilities with other osquery maintainers. 
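As an aside on the kernel requirements above, you can check a live Linux system for the same two features mquire scans for. This is a rough sketch using standard kernel interfaces (`/sys/kernel/btf/vmlinux` is exported when the kernel is built with `CONFIG_DEBUG_INFO_BTF=y`; `/proc/kallsyms` is the procfs view of the symbol table); it is not part of mquire itself:

```shell
# Sketch: check whether the running kernel exposes the data mquire relies on.
# These are standard kernel interfaces; availability depends on kernel config.

# BTF type information is exported at this sysfs path when CONFIG_DEBUG_INFO_BTF=y
if [ -r /sys/kernel/btf/vmlinux ]; then
    echo "BTF: present"
else
    echo "BTF: absent"
fi

# Kallsyms via procfs; addresses may read as zeros without sufficient privilege
if [ -r /proc/kallsyms ]; then
    echo "kallsyms: present"
else
    echo "kallsyms: absent"
fi
```

Note this only tells you about the machine you run it on; whether a given memory dump contains scannable BTF and Kallsyms data still depends on the dumped kernel's version and configuration.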
The idea of bringing osquery&rsquo;s intuitive, SQL-based exploration model to memory forensics has been on my mind for years, and mquire is the realization of that vision.</p> <p>You can run one-off queries from the command line or explore interactively:</p> <figure class="highlight"> <pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">$ mquire query --format json snapshot.lime <span class="s1">&#39;SELECT comm, command_line FROM </span></span></span><span class="line"><span class="cl"><span class="s1">tasks WHERE command_line NOT NULL and comm LIKE &#34;%systemd%&#34; LIMIT 2;&#39;</span> </span></span><span class="line"><span class="cl"><span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;column_order&#34;</span>: <span class="o">[</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;comm&#34;</span>, </span></span><span class="line"><span class="cl"> <span class="s2">&#34;command_line&#34;</span> </span></span><span class="line"><span class="cl"> <span class="o">]</span>, </span></span><span class="line"><span class="cl"> <span class="s2">&#34;row_list&#34;</span>: <span class="o">[</span> </span></span><span class="line"><span class="cl"> <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;comm&#34;</span>: <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;String&#34;</span>: <span class="s2">&#34;systemd&#34;</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span>, </span></span><span class="line"><span class="cl"> <span class="s2">&#34;command_line&#34;</span>: <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;String&#34;</span>: <span class="s2">&#34;/sbin/init splash&#34;</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span> 
</span></span><span class="line"><span class="cl"> <span class="o">}</span>, </span></span><span class="line"><span class="cl"> <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;comm&#34;</span>: <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;String&#34;</span>: <span class="s2">&#34;systemd-oomd&#34;</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span>, </span></span><span class="line"><span class="cl"> <span class="s2">&#34;command_line&#34;</span>: <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;String&#34;</span>: <span class="s2">&#34;/usr/lib/systemd/systemd-oomd&#34;</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span> </span></span><span class="line"><span class="cl"> <span class="o">]</span> </span></span><span class="line"><span class="cl"><span class="o">}</span></span></span></code></pre> <figcaption><span>Figure 1: mquire listing tasks containing systemd</span></figcaption> </figure> <p>The SQL interface enables relational queries across different data sources. 
For example, you can join process information with open file handles in a single query:</p> <figure class="highlight"> <pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">mquire query --format json snapshot.lime <span class="s1">&#39;SELECT tasks.pid, </span></span></span><span class="line"><span class="cl"><span class="s1">task_open_files.path FROM task_open_files JOIN tasks ON tasks.tgid = </span></span></span><span class="line"><span class="cl"><span class="s1">task_open_files.tgid WHERE task_open_files.path LIKE &#34;%.sqlite&#34; LIMIT 2;&#39;</span> </span></span><span class="line"><span class="cl"><span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;column_order&#34;</span>: <span class="o">[</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;pid&#34;</span>, </span></span><span class="line"><span class="cl"> <span class="s2">&#34;path&#34;</span> </span></span><span class="line"><span class="cl"> <span class="o">]</span>, </span></span><span class="line"><span class="cl"> <span class="s2">&#34;row_list&#34;</span>: <span class="o">[</span> </span></span><span class="line"><span class="cl"> <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;path&#34;</span>: <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;String&#34;</span>: <span class="s2">&#34;/home/alessandro/snap/firefox/common/.mozilla/firefox/ </span></span></span><span class="line"><span class="cl"><span class="s2"> 4f1wza57.default/cookies.sqlite&#34;</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span>, </span></span><span class="line"><span class="cl"> <span class="s2">&#34;pid&#34;</span>: <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;SignedInteger&#34;</span>: <span 
class="m">2481</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span>, </span></span><span class="line"><span class="cl"> <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;path&#34;</span>: <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;String&#34;</span>: <span class="s2">&#34;/home/alessandro/snap/firefox/common/.mozilla/firefox/ </span></span></span><span class="line"><span class="cl"><span class="s2"> 4f1wza57.default/cookies.sqlite&#34;</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span>, </span></span><span class="line"><span class="cl"> <span class="s2">&#34;pid&#34;</span>: <span class="o">{</span> </span></span><span class="line"><span class="cl"> <span class="s2">&#34;SignedInteger&#34;</span>: <span class="m">2846</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span> </span></span><span class="line"><span class="cl"> <span class="o">}</span> </span></span><span class="line"><span class="cl"> <span class="o">]</span> </span></span><span class="line"><span class="cl"><span class="o">}</span></span></span></code></pre> <figcaption><span>Figure 2: Finding processes with open SQLite databases</span></figcaption> </figure> <p>This relational approach lets you reconstruct complete file paths from kernel <code>dentry</code> objects and connect them with their originating processes—context that would require multiple commands with traditional tools.</p> <h2 id="current-capabilities">Current capabilities</h2> <p>mquire currently provides the following tables:</p> <ul> <li><code>os_version</code> and <code>system_info</code>: Basic system identification</li> <li><code>tasks</code>: Running processes with PIDs, command lines, and binary paths</li> <li><code>task_open_files</code>: Open files organized by 
process</li> <li><code>memory_mappings</code>: Memory regions mapped by each process</li> <li><code>boot_time</code>: System boot timestamp</li> <li><code>dmesg</code>: Kernel ring buffer messages</li> <li><code>kallsyms</code>: Kernel symbol addresses</li> <li><code>kernel_modules</code>: Loaded kernel modules</li> <li><code>network_connections</code>: Active network connections</li> <li><code>network_interfaces</code>: Network interface information</li> <li><code>syslog_file</code>: System logs read directly from the kernel&rsquo;s file cache (works even if log files have been deleted, as long as they&rsquo;re still cached in memory)</li> <li><code>log_messages</code>: Internal mquire log messages</li> </ul> <p>mquire also includes a <code>.dump</code> command that extracts files from the kernel&rsquo;s file cache. This can recover files directly from memory, which is useful when files have been deleted from disk but remain in the cache. You can run it from the interactive shell or via the command line:</p> <figure class="highlight"> <pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">mquire <span class="nb">command</span> snapshot.lime <span class="s1">&#39;.dump /output/directory&#39;</span></span></span></code></pre> </figure> <p>For developers building custom analysis tools, the <code>mquire</code> library crate provides a reusable API for kernel memory analysis.</p> <h2 id="use-cases">Use cases</h2> <p>mquire is designed for:</p> <ul> <li><strong>Incident response</strong>: Analyze memory dumps from compromised systems without needing to source matching debug symbols.</li> <li><strong>Forensic analysis</strong>: Examine what was running and what files were accessed, even on unknown or custom kernels.</li> <li><strong>Malware analysis</strong>: Study process behavior and file operations from memory snapshots.</li> <li><strong>Security research</strong>: Explore kernel internals without 
specialized setup.</li> </ul> <h2 id="limitations-and-future-work">Limitations and future work</h2> <p>mquire can only access kernel-level information; BTF doesn&rsquo;t provide information about user space data structures. Additionally, the Kallsyms scanner depends on the data format from the kernel&rsquo;s <code>scripts/kallsyms.c</code>; if future kernel versions change this format, the scanner heuristics may need updates.</p> <p>We&rsquo;re considering several enhancements, including expanded table support to provide deeper system insight, improved caching for better performance, and DMA-based external memory acquisition for real-time analysis of physical systems.</p> <h2 id="get-started">Get started</h2> <p>mquire is available on <a href="https://github.com/trailofbits/mquire">GitHub</a> with prebuilt binaries for Linux.</p> <p>To acquire a memory dump, you can use <a href="https://github.com/504ensicsLabs/LiME">LiME</a>:</p> <figure class="highlight"> <pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">insmod ./lime-x.x.x-xx-generic.ko <span class="s1">&#39;path=/path/to/dump.raw format=padded&#39;</span></span></span></code></pre> </figure> <p>Then you can run mquire:</p> <figure class="highlight"> <pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl"><span class="c1"># Interactive session</span> </span></span><span class="line"><span class="cl">$ mquire shell /path/to/dump.raw </span></span><span class="line"><span class="cl"> </span></span><span class="line"><span class="cl"><span class="c1"># Single query</span> </span></span><span class="line"><span class="cl">$ mquire query /path/to/dump.raw <span class="s1">&#39;SELECT * FROM os_version;&#39;</span> </span></span><span class="line"><span class="cl"> </span></span><span class="line"><span class="cl"><span class="c1"># Discover available tables</span> </span></span><span 
class="line"><span class="cl">$ mquire query /path/to/dump.raw <span class="s1">&#39;.schema&#39;</span></span></span></code></pre> </figure> <p>We welcome contributions and feedback. Try <a href="https://github.com/trailofbits/mquire">mquire</a> and let us know what you think.</p> Good vibes, bad vendors - Werd I/O 699e69f4fbb6790001143bc8 2026-02-25T10:00:03.000Z <img src="https://werd.io/content/images/2026/02/getty-images--t1Gbn-p29Y-unsplash.jpg" alt="Good vibes, bad vendors"><p>When I was thirteen or fourteen I had a really comfortable sweatshirt that I wore to school all the time &#x2014; but it did have a few inherent problems. For one thing, it had a great big target on it, and wearing a literal target to high school was just asking for it. For another, on top of that, in Looney Tunes writing, was the confident phrase: &#x201C;It&#x2019;s a good vibe!&#x201D;</p><p>I was bullied as mercilessly as one might expect, but I honestly think it might have killed in the AI era. I&#x2019;d like to think I was just ahead of my time.</p><p>Andrej Karpathy, an early OpenAI researcher who now works <a href="https://eurekalabs.ai/?ref=werd.io">at his own startup</a>, coined the phrase <em>vibe coding</em> last year. To vibe code is to use an LLM like Claude or ChatGPT to generate source code instead of writing it yourself. He meant it as a way to loosely prototype code or to make progress on a weekend project. LLMs, at least at the time, could not be fully trusted to write well-written, working code. It was an out-there idea.</p><p>What a difference a year makes. Today, it&#x2019;s a mainstream conversation that is rapidly reshaping technology strategy &#x2014; and informing layoffs across industries.</p><p>AI conversations are always fraught, for good reasons that include the underlying power dynamics and the bad behavior of most of the AI vendors. 
At the same time, the whole AI landscape is changing incredibly rapidly, and it&#x2019;s become a clich&#xE9; to point out that any discussion of what LLMs can and can&#x2019;t do today will probably be invalid two or three months from now. And, of course, millions of words have been written about it at this point. But even despite all that, I still think it&#x2019;s worth talking about.</p><p>If you&#x2019;re running technology in a small, resource-constrained environment &#x2014; like a newsroom or a non-profit &#x2014; how should you think about AI-enhanced software engineering? Come to that, how should <em>I</em>?</p><p>Let&#x2019;s talk about it.</p><h3 id="first-things-first-does-it-work">First things first: does it work?</h3><p>It didn&#x2019;t, and then it did.</p><p>Six months ago, LLMs could generate a certain amount of code, but they would often make inefficient decisions or hallucinate libraries and API endpoints, and you&#x2019;d need to babysit them a lot. Their use was mostly passive: they would generate code snippets based on immediate user prompts, and engineers would have to spend a bunch of time debugging the output. And in terms of security, it was the Wild West; there were essentially no security considerations. LLMs are famously stochastic (their output is randomly determined, not deterministic) and prone to hallucinations. The result was unreliable code.</p><p>A lot has changed since then. In particular, the models released in February 2026 are a sea change in reliability: given the right prompt, they often genuinely can write decent code in one shot. Tools like Claude Code can go off, spawn multiple agents, investigate a problem, build a reasonable plan, and then execute on it, while working in a safely sandboxed environment.</p><p>It&#x2019;s not just about improved models, although they obviously have a central part to play. An ecosystem is developing around doing AI-assisted software engineering well.
Plugins like <a href="https://blog.fsck.com/?ref=werd.io">Jesse Vincent</a>&#x2019;s <a href="https://github.com/obra/superpowers?ref=werd.io">Superpowers</a> encourage good decision-making based on principles of excellent software architecture design and product management. Structured frameworks like <a href="https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/?ref=werd.io">spec-driven development</a> similarly help lead the agent to sensible outcomes; both are incorporated in all-in-one coding lifecycle toolkits like <a href="https://github.com/dsifry/metaswarm?ref=werd.io">Metaswarm</a>. A rigorous process is preserved, and throughout there are far more safety guardrails to prevent security incidents (although they&#x2019;re easy to overcome, and sometimes not on by default); using AI to generate code is much safer than it was.</p><p>Claude Code absolutely can write the code, build a plan, and document its work. I have been an AI skeptic, but in my experiments I&#x2019;ve found that it really can feel like magic. You can reasonably object to AI for any number of reasons, but this is no longer one. It works.</p><p>The thing to understand is that this is a tool for engineers &#x2014; and senior engineers will get the best results. It takes real engineering skill to craft a prompt that will do the right thing and result in a strong architecture.</p><p>The process changes the center of gravity from writing source code in a programming language to crafting goals, understanding your user, being crystal clear about the experience and the value you want to convey, and thinking about architectural implications.
That probably means talking to people, forming a hypothesis about what they need, testing it with them, and considering the ongoing technical implications of the work.</p><p>Those are things that senior engineers already spend much of their time doing &#x2014; indeed, I&#x2019;d argue that it&#x2019;s what separates a great senior engineer from a mid-level one. The core question a senior engineer navigates well comes down to: lots of people <em>can</em> write code, but <em>should</em> they? Why, and for whom? Those questions only become more important in a world where AI is writing the source code. When implementation is faster, problem selection and scoping become the scarce skills.</p><p>Friction is training: we learn how to engineer software through our terrible experiences. When things go wrong, we learn. When we have to refactor, we learn. When we talk to our peers about our work, we learn. AI removes most of this friction and hides the complexity away from us: it obscures failure, compresses the process of debugging, and automates refactoring. When these hard-earned skills are the reason we can make good software engineering decisions with AI, but the AI doesn&#x2019;t offer newcomers the ability to build those skills, who will train the AI once we are gone?</p><p>Opinions on that change in center of gravity will be intensely divided. <a href="https://werd.io/2025-the-year-in-llms/">I stand by this New Year&#x2019;s Day thought about Claude Code</a>:</p><blockquote>It has the potential to transform all of tech. 
I also think we&#x2019;re going to see a real split in the tech industry (and everywhere code is written) between people who are outcome-driven and are excited to get to the part where they can test their work with users faster, and people who are process-driven and get their meaning from the engineering itself and are upset about having that taken away.</blockquote><p>I&#x2019;m very much an outcome-driven developer, and to me it&#x2019;s a giant relief. Not everyone will feel the same way.</p><p>Resource-constrained environments <em>must</em> be outcome-driven. They can&#x2019;t spend their time on the process of software engineering; the best way for them to move forward is to start small, release a valuable core that solves a problem for some set of users as early as possible, and then continually iterate around it, using user feedback as a guide.</p><p>There&#x2019;s no alternative to having empathetic, human-centered senior engineers on your team &#x2014; with or without AI. But AI engineering tools may have an interesting side effect: I can see a world where pushing these product and spec questions to the forefront helps more engineers build those skills more quickly. The first step, after all, is understanding that those answers are needed to begin with.</p><p>It&#x2019;s worth saying that there will be many managers who hope that tools like Claude Code will mean they can do away with engineers or dramatically cut their workforces. Of course there will. They may even see engineers as gatekeepers, and there may be resentment that they&#x2019;re needed at all &#x2014; and a hope that this work can be done directly by managers or other key employees. In a newsroom, for example, can&#x2019;t the <em>journalists</em> produce tools now?</p><p>For non-engineers, these tools can be useful for prototyping: a product manager, for example, might use one to assess a user interface or experiment with an idea.
But those prototypes are not enduring software; nor are they projects that can be &#x201C;handed off&#x201D; to engineers to support.</p><p>To properly architect a system, there&#x2019;s a lot you need to consider. This includes performance, scalability, and the ongoing overhead of maintaining a project and keeping it safe: nobody wants to rely on software that proves to be slow, insecure, or impossible to update. You also need to assess the technical implications of a project: are there technical standards that the project should be adhering to, or battle-tested best practices that the design should take into consideration? For all these reasons, an engineer must be involved from the beginning.</p><p>These tools can&#x2019;t replace technical staff, and they shouldn&#x2019;t. Like I said, these tools are <em>for</em> engineers, not a replacement for them.</p><h3 id="okay-but-what-about-those-power-dynamics">Okay, but what about those power dynamics?</h3><p>Consider an individual, indie developer. Over the last few decades, they&#x2019;ve become more and more empowered: developer tools have become cheaper and more of them are open source. Power and control have been devolved to the individual; you can run the tools you want on your own hardware, configure or recode them to your needs, use them for free, and share any of your changes. Engineering has become more and more of an open collective built on radical collaboration. That allows developers with fewer resources to build more easily, widening the pool of people who can build startups, create useful tools, and learn these skills to begin with.</p><p>AI-assisted engineering centralizes power back in the other direction. Claude Code, Codex, and so on are all centralized, proprietary tools that become harder to move away from the longer they&#x2019;re relied upon. They&#x2019;re also expensive: while open source tools are decentralized and free, it&#x2019;s incredibly easy to spend large amounts on Claude. 
Based on my own experimentation and anecdotes from friends and peer companies, any engineer that relies on Claude Code as part of their daily work is likely to spend hundreds of dollars a week; these are new costs that didn&#x2019;t previously exist.</p><p>Those extra costs could theoretically be offset by significant performance or efficiency gains. The thing is, those gains aren&#x2019;t as strong as you might expect given the apparent magic of automatically generated code. A study recently published in Harvard Business Review indicated that <a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it?ref=werd.io">adding AI actually intensified the workload, putting engineers at risk of burning out</a>:</p><blockquote>The changes brought about by enthusiastic AI adoption can be unsustainable, causing problems down the line. Once the excitement of experimenting fades, workers can find that their workload has quietly grown and feel stretched from juggling everything that&#x2019;s suddenly on their plate. That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.</blockquote><p>These can be mitigated by good work hygiene: enforcing breaks and sensible work hours. But the employers who are most enthusiastic about introducing AI may also be the ones that are least enthusiastic about benefits that center employee well-being over productivity.</p><p>I&#x2019;ve already mentioned that some managers may hope that AI can reduce their investment in software engineers. One can easily imagine that the presence of AI &#x2014; or, rather, the threat of being replaced by it &#x2014; could be used as a cudgel to depress engineer salaries. 
It gives managers more leverage beyond money, too: those longer hours and more intense workloads that the HBR study found could burn engineers out might be more likely in a world where engineers fear for their jobs.</p><p>The long-term implications are even starker. Consider a world where the recentralization of power from individuals to large, centralized companies continues at the current pace. When AI writes most source code, fewer and fewer engineers will be capable of doing this work themselves, which will lead to even more dependence and lock-in.</p><p>It&#x2019;s been noted in the past that while generative AI robs artists of the interesting work and leaves them with the mundane bits, for outcome-oriented engineers it robs them of the mundane bits and leaves them with the interesting parts. I&#x2019;d argue that the real value is in the intersection between coding and the higher-level work; they&#x2019;re inseparable. By improving the way we code, we improve the way we can solve problems for real people. (How can you solve a problem if you don&#x2019;t really understand how the solution works?) By improving the way we think about solving problems, we improve the way we code. (How can you code something well if you don&#x2019;t know who it&#x2019;s for or why it needs to exist?) They aren&#x2019;t two separate processes; they&#x2019;re parts of the same thing. Removing one makes the other less effective.</p><p>Without concerted effort, an entire industry will be de-skilled and de-valued, its human expertise replaced with software that charges by the token.</p><h3 id="so-let%E2%80%99s-put-in-the-effort">So let&#x2019;s put in the effort</h3><p>AI isn&#x2019;t going away, and AI-assisted software engineering is a permanent addition to the way we build software. But that&#x2019;s not the same thing as saying that the way we use AI <em>today</em> won&#x2019;t change.</p><p>Any policy for AI-assisted engineering has to take into account risks of various kinds.
I&#x2019;d loosely separate them into the following categories:</p><ul><li><strong>Employee risk:</strong> preventing burnout, staff turnover, and poor morale.</li><li><strong>Security risk:</strong> preventing data leaks and security incidents that compromise customers, sources, employees, or other members of the community.</li><li><strong>Quality risk:</strong> preventing low-quality code that impacts the efficiency, experience, or perceived quality of the organization&#x2019;s work.</li><li><strong>Supplier risk:</strong> reducing the potential impact of harmful choices made by AI vendors.</li></ul><p>While I&#x2019;m not going to go into a full framework here &#x2014; that&#x2019;s part of what I do at my day job &#x2014; let&#x2019;s talk about how we might think about addressing them together.</p><h4 id="employee-risk">Employee risk</h4><p>In that Harvard Business Review article about AI-driven burnout in engineers, the authors suggested some sensible mitigations. These included creating, as team norms, structured time for quiet reflection on the project at hand, and limiting interruptions; intentional processes for limiting the work that can move forward, to prevent engineers from taking on (or being asked to take on) too many tasks just because they think they can; and creating more space for empathetic human connection as a team.</p><p>Those are all things that every team should do, whether or not they use AI! But they become even more important on an AI-accelerated team. If you don&#x2019;t have any norms about tightly controlling when work moves forward, for example, adding a tool that accelerates the work will result in a higher volume of work getting processed, but not necessarily any strategic selection about the most <em>important</em> work to do.</p><p>Perhaps most importantly, engineers are worried that they&#x2019;ll be replaced at the hands of managers who may not understand what they do.
They need to have the emotional safety and security that comes from knowing that they won&#x2019;t. It needs to be communicated to them that the importance of their skills is understood. They are experts in their fields, and they&#x2019;ve just gained another tool to help them; they are not interchangeable with the tool.</p><h4 id="security-and-quality-risk">Security and quality risk</h4><p>It turns out that you go a long way towards addressing a lot of security, quality, and efficiency issues &#x2014; as well as some of the morale issues that lead to employee risk &#x2014; by placing engineers at the center of the process. Some AI processes talk about &#x201C;human in the loop&#x201D;. That term was borrowed from more traditional machine learning processes; in the case of anything where AI takes an action in the world on behalf of a user, like engineering, I&#x2019;d prefer to reframe it as a tool that is always directly under human control.</p><p>In that light, all code must have a human owner who will take responsibility for it. It&#x2019;s <em>their</em> code, just as if they&#x2019;d written it in an integrated development environment; they just happened to use a different tool. If all generated code must ultimately be owned and reviewed by a human, that person is able to tune the results for safety, efficiency, and quality.</p><p>Most well-run engineering teams have a peer review process where code written by an engineer must be officially reviewed by a second engineer before it can be merged into the main codebase. If we assume that generated code is owned by Engineer A, that means there must be a human Engineer B to give it a second pair of eyes. They might also be using automated tools to help their review along, but they&#x2019;re the ones who ultimately take responsibility for a review.</p><p>This isn&#x2019;t enough. 
All projects need to have comprehensive automated testing: tests that must run on code that is about to be merged into the main codebase in order to make sure everything still functions. Tests for efficiency, adherence to style guidelines, and security issues can be run here too. What&#x2019;s kind of fun is that when these are in place, tools like Claude Code will look at the test output, make corrections when something doesn&#x2019;t pass, and try again &#x2014; all automatically.</p><h4 id="supplier-risk">Supplier risk</h4><p>The centralization that removes power and agency from engineers also introduces a serious business risk. If a core part of an organization&#x2019;s value comes from software development, inexorably placing a centralized service in the middle of your process makes you heavily dependent on their decision-making. They can increase their prices, make changes to their stack, or change the way they think about keeping your data and source code safe, and there&#x2019;s very little you can do about it.</p><p>The good news is that, right now, no AI vendor can lock you into their services, because your source code itself and your infrastructure stack are independent of your AI tools. Your code is managed, stored, and hosted in different places, and you can think of source code itself as being a kind of open protocol: because it&#x2019;s plain text, you can use virtually any tool with it. Source code still has the devolved, open, decentralized properties of the open source ecosystem that has put power in engineers&#x2019; hands for decades. That provides at least some protection against an AI vendor suddenly increasing their prices or changing their privacy stance: you can always vote with your feet.</p><p>If you&#x2019;re uncomfortable using one of the major model providers, open source alternatives are available. 
Tools like <a href="https://aider.chat/?ref=werd.io">Aider</a> and <a href="https://cline.bot/?ref=werd.io">Cline</a> can provide agentic coding using any model, including local models that could theoretically be run on an organization&#x2019;s own infrastructure. In practice, though, this requires more powerful hardware than most smaller organizations can afford; this may become less of an issue over time, as new hardware emerges, but it certainly is one now. Still, local models could help prevent lock-in &#x2014; and may prevent some security issues, too.</p><p>This inherent openness could change as AI vendors look for ways to increase their revenue and reduce churn. We may see AI-specific alternatives to git and GitHub; I can even imagine programming languages that are &#x201C;optimized for AI&#x201D; but that just happen to be proprietary and locked in to a vendor. Every company that builds software should watch for these forms of lock-in and reject them.</p><p>We should also be wary of marketing that tells us to just let the AI write code autonomously. These are ideas that cement vendors as a full replacement for the software development process, moving a center of expertise that was previously owned by an organization into a centralized technology owned by someone else. It&#x2019;s a trap: that world is one where the source code can&#x2019;t be moved between agents and your products are fully locked into their services without a credible exit.</p><h3 id="do-we-want-to-invite-these-companies-into-our-workplaces">Do we want to invite these companies into our workplaces?</h3><p>A <em>ton</em> has been written on the issues surrounding AI. <a href="https://werd.io/evaluating-ai/">Last summer, I wrote a broader guide to navigating AI that I think still holds up.</a> In it, I noted:</p><blockquote>A lot of money has been spent to encourage businesses to adopt AI &#x2014; which means deeply embed services provided by these vendors into their processes. 
The intention is to make their services integral as quickly as possible. That&#x2019;s why there&#x2019;s heavy sponsorship at conferences for various industries, programs to sponsor adoption, and so on. Managers and board members see all this messaging and start asking, &#x201C;what are we doing with AI?&#x201D; specifically because this FOMO message has reached them.</blockquote><p>My approach to evaluating AI remains through two main lenses: the technology itself and the vendors who make it. <a href="https://werd.io/evaluating-ai/#further-reading">The further reading section of that earlier piece is a good place to start.</a></p><p>The thing that I didn&#x2019;t mention then, but is worth calling out now, is the sheer precarity of these vendors. AI vendors <a href="https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/?ref=werd.io#:~:text=Generative%20AI%20companies%20%E2%80%94%20OpenAI%20and%20Anthropic,(EG:%20OpenAI%20and%20Anthropic)%20is%20losing%20money.">are offering their services for below cost</a> and have struggled to articulate value in a way that could credibly lead to profitability. Apparently feeling this gap, OpenAI is experimenting with <a href="https://searchengineland.com/chatgpt-ads-spotted-and-they-are-quite-aggressive-469651?ref=werd.io">ads</a> and <a href="https://gizmodo.com/chatgpts-adult-mode-is-coming-in-2026-2000698677?ref=werd.io">porn</a>, while finding itself under scrutiny for <a href="https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis?ref=werd.io">putting teen wellbeing at risk</a> through choices it made to boost engagement. 
Anthropic <a href="https://technologymagazine.com/articles/why-reddit-sues-anthropic-the-dangers-of-ai-data-privacy?ref=werd.io">was sued by Reddit last year</a> for scraping Reddit data for training without authorization, and <a href="https://www.wired.com/story/anthropic-settles-copyright-lawsuit-authors/?ref=werd.io">had to settle a high-profile lawsuit</a> brought by book authors whose work it stole for training data. I&#x2019;ve mentioned Claude Code a bunch in this piece, because it works really well, but it was trained using stolen work.</p><p>Meanwhile, from a technical standpoint, there has been some research suggesting that <a href="https://www.ineteconomics.org/research/research-papers/the-ai-bubble-and-the-u-s-economy-how-long-do-hallucinations-last?ref=werd.io">there are diminishing returns to new LLM development</a> and that we&#x2019;re already past the peak.</p><p>There&#x2019;s no guarantee these companies will make it. If an organization has invested in agentic coding processes that don&#x2019;t substantially keep humans in the loop and the vendors that power them disappear, it will be left in a bind, with no in-house expertise and company strategies that depend on AI. That makes it a dangerous gamble. We will have lost internal skill while increasing our dependence on very fragile external suppliers.</p><h3 id="so-how-should-you-think-about-it">So how should you think about it?</h3><p>AI coding works. It shifts the center of gravity from implementation to judgment, which increases the value of senior engineering skills. It also introduces significant power, labor, and supplier risks. That means that solid guardrails and cultural norms are non-optional.</p><p>Even if you haven&#x2019;t rolled it out yet, your engineers are almost certainly using it. In conversations with my peers, I&#x2019;ve heard countless stories of organizations that banned it but discovered that their workforce had just taken matters into their own hands.
While there are many engineers who refuse to touch it, many more are eager to have it.</p><p>You could ban it, but it&#x2019;d likely be fruitless: those engineers who use their own accounts will probably keep doing so. It&#x2019;s better to have the tools in a place that&#x2019;s under your control and observable than used in the shadows in a way that might put your data at risk. Given that, it&#x2019;s better to roll it out than to not. But you need to do it with your eyes wide open and with a sense of intentionality. Be aware of the risks, and mitigate them in advance with common sense cultural norms like the ones I discussed earlier: pay attention to your employee and supplier risks in particular. Don&#x2019;t let AI push to production without oversight. And keep humans not just in the loop but fully in control.</p><p>I don&#x2019;t think it&#x2019;s productive to <em>mandate</em> the use of AI-assisted engineering, which runs the risk of alienating some engineers &#x2014; the split between AI skeptics and those who are excited about the technology is real &#x2014; and preventing nuanced discussions about how the technology can be used inside your workplace. What happens in practice when you just let it roll out to anyone who wants to try it is that people <em>do</em> try it; they find that it&#x2019;s useful for some tasks, and then quickly find its limitations. That&#x2019;s a healthy exploration.</p><p>How should <em>I</em> think about it? I&#x2019;m still figuring it out. Jesse Vincent compares the process of hating &#x201C;agentic&#x201D; development and pushing through it to discover that code was never the most important part of building software <a href="https://primeradiant.com/blog/2026/what-we-are-working-on.html?ref=werd.io">to the process of being a manager and asking your team to build things instead of coding them yourself</a>. 
I agree that these experiences rhyme &#x2014; but of course, when you lead a team, you&#x2019;re investing in human beings, working alongside them, and helping them to grow in the process. That&#x2019;s exponentially more rewarding than leading software agents built to provide value for a megacorporation.</p><p>But it doesn&#x2019;t need to be that way. You can do both. If you treat the technology as a tool, albeit one that has been made by genuinely problematic companies, you can roll it out to a real, human team and continue to build things together. You can invest in and support them while you navigate new kinds of software problems; together, you can figure out how to shape the culture of an engineering team that is undergoing a paradigm shift. You can train the next generation of software engineers, both keeping the long history of software development in mind, and taking into account these new skills. And you can look for the next thing that properly devolves power down the stack to the individual, for the benefit of everyone.</p><p>Software development is still human. 
You can work together towards a shared mission, pick and choose the pieces of this new technology that make sense according to your strategy and values, and build community in the process.</p><p><em>That&#x2019;s</em> a good vibe.</p> Everything is awesome (why I'm an optimist) - Westenberg 699e45e472729900014f842d 2026-02-25T01:39:00.000Z <img src="https://www.joanwestenberg.com/content/images/2026/02/ChatGPT-Image-Feb-25--2026--12_03_25-PM.png" alt="Everything is awesome (why I&apos;m an optimist)"><p>February is the month the internet decided we&apos;re all going to die.</p><p>In the span of about two weeks, Matt Shumer&apos;s <a href="https://shumer.dev/something-big-is-happening?ref=joanwestenberg.com">Something Big is Happening</a> racked up over 80 million views on X with its breathless comparison of AI to the early days of COVID, telling his non-tech friends and family that we&apos;re in the &quot;this seems overblown&quot; phase of something much, much bigger than a pandemic. Before anyone had finished arguing about that, Citrini Research published <a href="https://www.citriniresearch.com/p/2028gic?ref=joanwestenberg.com">THE 2028 GLOBAL INTELLIGENCE CRISIS</a> (all caps) a fictional dispatch from June 2028 in which unemployment has hit 10.2%, the S&amp;P 500 has crashed 38% from its highs, and the consumer economy has been hollowed out by what they coined &quot;Ghost GDP&quot;: output that shows up in the national accounts but never circulates through the real economy, because, as Citrini helpfully observed, machines spend zero dollars on discretionary goods. Michael Burry signal-boosted it. <a href="https://www.bloomberg.com/news/articles/2026-02-23/software-payments-shares-tumble-after-citrini-post-on-ai-risks?ref=joanwestenberg.com">Bloomberg covered it</a>. IBM fell 13%. 
Software and payments stocks shed over $200 billion in market cap in a single day, apparently because a Substack post called them out by name and investors decided that constituted news.</p><p>The doom loop Citrini described is simple: AI capabilities improve, companies need fewer workers, white-collar layoffs increase, displaced workers spend less, margin pressure pushes firms to invest more in AI, AI capabilities improve. Repeat until civilization unravels. Shumer, meanwhile, told people to get their financial houses in order because the permanent underclass is imminent. </p><p>Both pieces went stratospherically viral, and both, I believe, are entirely wrong about where this is heading.</p><p>I want to make a case for optimism. </p><p>For anyone who read those pieces and felt the dread, whether you&apos;re building AI and worrying about what it means, or you&apos;ve absorbed the pessimist consensus and started treating decline as a foregone conclusion, or you&#x2019;re in the bucket of people Shumer insists are fucked: I&apos;m going to argue that the pessimists have the best narratives and the worst track record. The doom scenarios require assumptions that don&apos;t survive contact with economic history, and the psychological posture you bring to this moment actually matters for how it turns out.</p><h2 id="why-the-doom-loop-feels-so-right">Why the doom loop feels so right</h2><p>The central mechanism of the Citrini thesis: when you make intelligence abundant and cheap, you destroy the income that 70% of GDP depends on. A single GPU cluster in North Dakota generating the output previously attributed to 10,000 white-collar workers in midtown Manhattan is, in their framing, &quot;more economic pandemic than economic panacea.&quot; The velocity of money flatlines. The consumer economy withers.
Ghost GDP accumulates in the national accounts while real humans stop being able to pay their mortgages.</p><p>Noah Smith, writing on <a href="https://www.noahpinion.blog/p/the-citrini-post-is-just-a-scary?ref=joanwestenberg.com">Noahpinion</a> the day after the selloff, called it &quot;a scary bedtime story&quot; and pointed out that Citrini doesn&apos;t use an explicit macroeconomic model, so you can&apos;t actually see what assumptions are driving the doom spiral. Smith noted that none of the analysts whose job it is to track Visa and Mastercard stock had apparently thought about AI disruption until a blogger spelled it out for them, which tells you more about sentiment-driven trading than it does about macroeconomics. The economist Gerard MacDonell described the entire piece as &quot;allegorical&quot; but pointed out that it ignores a basic economic principle: production generates income.</p><p>Ben Thompson, on Stratechery, has been making a version of this counterargument for months, most forcefully in his January piece <a href="https://stratechery.com/2026/ai-and-the-human-condition/?ref=joanwestenberg.com">AI and the Human Condition</a>, where he argued that even if AI does all of the jobs, humans will still want humans, creating an economy for labor precisely because it is labor. Thompson&apos;s framing cuts to something the doom narratives consistently miss. They model AI exclusively as labor substitution: the same economy, minus humans. Every section of the Citrini piece is about replacing workers and squeezing margins on existing activity. What they don&apos;t model is what the freed-up surplus creates. 
As Thompson put it in <a href="https://stratechery.com/2026/another-viral-ai-doomer-article-the-fundamental-error-doordashs-ai-advantages/?ref=joanwestenberg.com">his analysis of the Citrini selloff</a>, this is the real error: a refusal to believe in human choice and markets.</p><p>It&apos;s an error that has been made, in nearly identical form, about every major technological transformation in modern history. Every single time, the pessimists looked at what was being destroyed and extrapolated catastrophe, while failing to imagine what would be created, because the thing that would be created hadn&apos;t been invented yet.</p><h2 id="catastrophists-keep-being-wrong">Catastrophists keep being wrong</h2><p>In 1810, 81% of the American workforce was employed in agriculture. Two hundred years later, it&apos;s about 1%. If you had shown someone in 1810 a chart of agricultural employment decline and asked them to model the economic consequences, the only rational projection would have been apocalypse. Where would 80% of the population find work? What would they do? How would anyone eat if the farmers were all displaced by machines?</p><p>The answer, of course, is that entirely new categories of work were created that no one in 1810 could have conceived of, and these new jobs paid dramatically more than subsistence farming. Factory work, office work, services, knowledge work, the entire apparatus of modernity: none of it was visible from the vantage point of the pre-industrial economy. The transition was brutal and uneven. The handloom weavers of England suffered. Dickens documented the squalor of early industrialization in prose that still makes you flinch. But the trajectory was real, and the people projecting permanent immiseration from the displacement of agricultural labor were, in the fullest sense, catastrophically wrong.</p><p>Tom Lee of Fundstrat made this point with a specific example that I find clarifying.
The invention of flash-frozen food in the early 1900s disrupted farming, taking agriculture from 30-40% of employment down to its current sliver. The economy didn&apos;t collapse. It reallocated value elsewhere, into industries and occupations that the frozen food pioneers couldn&apos;t have imagined. And today, I can&apos;t name a single family that subsists on frozen TV dinners. </p><p>The Citrini scenario expects you to believe that AI will be the first major technological revolution in which this reallocation mechanism fails entirely. Where every previous wave of automation freed up human labor and capital to flow into new, higher-value activities, this time the loop... <em>stops</em>. The surplus accrues to the owners of compute, consumers lose purchasing power, and the negative feedback loop has no natural brake. It&apos;s worth sitting with how strong a claim that is. It requires every previous pattern of technological adaptation to be wrong, or at least irrelevant. And when you look at the actual data, there are signs that white-collar job postings have stabilized, layoff mentions on earnings calls remain well below early 2023 peaks, and forward-looking labor indicators show no sign of the displacement spiral that the doom thesis predicts.</p><p>Does that mean AI won&apos;t disrupt specific industries and jobs? Obviously it will. Some of those disruptions will be painful and dislocating for the people caught in them. 
But there&apos;s an enormous gap between &quot;this technology will cause serious labor market disruption that we need to manage&quot; and &quot;this technology will cause a self-reinforcing economic death spiral from which there is no recovery.&quot; Citrini is arguing the latter, while the evidence supports the former.</p><h2 id="why-vivid-scenarios-beat-boring-probabilities">Why vivid scenarios beat boring probabilities</h2><p>There&apos;s a reason the doom narratives go viral while the measured counterarguments get a polite nod and a fraction of the engagement. It has nothing to do with the quality of the underlying analysis. It has everything to do with how human brains process information.</p><p>Daniel Kahneman&apos;s work on the availability heuristic showed that we judge the probability of events by how easily we can imagine them. Dystopia is easy to imagine. We have an extraordinarily rich cultural tradition of imagining technological nightmare scenarios in exquisite detail. Orwell did it brilliantly. Every season of Black Mirror does it competently. The Terminator gave us the visual grammar for AI catastrophe decades before anyone had a working language model. When Citrini describes a world where the unemployment rate hits 10.2% and the S&amp;P crashes 38%, you can picture it. You can feel the dread. Hollywood has been training you to feel exactly this dread for your entire life.</p><p>Now try to imagine the positive scenarios. Try to picture, in concrete sensory detail, a world where AI helps us solve protein folding problems across thousands of neglected tropical diseases, where it accelerates materials science research by orders of magnitude, where it makes high-quality legal and medical advice accessible to people who currently can&apos;t afford it, where it enables forms of creative expression and economic activity that we can&apos;t yet name because they don&apos;t exist yet. It&apos;s fuzzy and abstract.
You can state it intellectually, but you can&apos;t feel it the way you can feel the unemployment spiral.</p><p>This asymmetry isn&apos;t trivial. The <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5637576&amp;ref=joanwestenberg.com">Ifo Institute</a> has published research showing that investors are willing to pay more for economic narratives than for raw forecasts, and that pessimistic narratives command higher prices among certain investor types. As <a href="https://klementoninvesting.substack.com/p/why-pessimists-make-more-money?ref=joanwestenberg.com">Joachim Klement</a> put it in his response to the Citrini selloff: investors value narratives more than actual recession forecasts. Stories travel faster than spreadsheets.</p><p>Shumer&apos;s piece is a narrative construction, and a questionable piece of analysis. He opens with the COVID comparison: remember February 2020, when a few people were talking about a virus and everyone thought it was overblown? He positions himself as the insider who sees what&apos;s coming, who&apos;s been &quot;giving the polite, cocktail-party version&quot; but can&apos;t hold back the truth any longer. <a href="https://carvao.substack.com/p/the-problem-with-techs-latest-something?ref=joanwestenberg.com">Paulo Carvao</a>, writing in Forbes, noted that it reads at times like a sales pitch. It&#x2019;s a used-car pitch at that. The Guardian pointed out that Shumer &quot;previously excited the internet by announcing the release of the world&apos;s &apos;top open-source model,&apos; which it was not.&quot; (To be clear: this is a kinder way of saying <a href="https://x.com/jawestenberg/status/2021782902342922514?s=20&amp;ref=joanwestenberg.com" rel="noreferrer">it was fraud.</a>)</p><p>But criticism doesn&apos;t travel like fear does. Fear is a better story. 
And so the doom narratives accumulate cultural mass while the boring, incremental, statistically-grounded counterarguments remain niche reading for economists and strategists.</p><h2 id="we-remember-disasters-not-the-ones-we-dodged">We remember disasters, not the ones we dodged</h2><p>Humans are spectacular at remembering disasters, passed down in every format from the written word to the oral tradition. We are (for obvious reasons) terrible at remembering the disasters that didn&apos;t happen. In 1962, during the Cuban Missile Crisis, a Soviet submarine officer named Vasili Arkhipov refused to authorize the launch of a nuclear torpedo, overriding two other officers who wanted to fire. The world didn&apos;t end. Most people today have never heard of Arkhipov. Everyone knows about Hiroshima and Nagasaki. The bomb that fell is seared into collective memory. The bomb that didn&apos;t fall is a footnote.</p><p>The Y2K bug was going to crash civilization; then billions of dollars of engineering work fixed it, and everyone retroactively decided it was never a real threat. The ozone layer was going to disintegrate; then the Montreal Protocol worked better than almost anyone predicted, and ozone depletion feels like a quaint 1990s worry. Acid rain was dissolving the forests of North America; then sulfur dioxide regulations cut emissions drastically, and the whole issue evaporated from public consciousness. Every one of these was a genuine threat. Every one was met by human ingenuity and institutional coordination. Every one was subsequently memory-holed, because success is boring and failure is vivid.</p><p>We&apos;re running our forecasting models on a dataset that systematically excludes our wins. It should be entirely unsurprising that the forecasts come out somewhat bearish.</p><h2 id="ben-thompson-as-usual-gets-it-right">Ben Thompson (as usual) gets it right</h2><p>Thompson&apos;s core insight is that humans want humans. 
He points to the agricultural revolutions: in the pre-Neolithic era, zero percent of humans worked in agriculture. By 1810, 81%. By today, 1%. Machines replaced human agricultural labor entirely, and rather than the economy collapsing, entirely new categories of work were created that paid dramatically more. This cycle played out again with industrialization, with computing, with the internet. Every time, the displacement was real, and every time, new forms of human-valued work emerged that couldn&apos;t have been predicted.</p><p>Citrini called DoorDash &quot;the poster child&quot; for AI disruption, imagining vibe-coded competitors fragmenting the market overnight. Thompson flips it: DoorDash is the poster child for why the article is absurd. DoorDash didn&apos;t always exist. It was built, and it wins through the active choice of customers, restaurants, and drivers. The doom thesis treats it as a static rent-extraction layer sitting on top of human laziness, but DoorDash created its market from scratch and generated new jobs for millions of drivers along the way. What the Citrini analysis lacks, Thompson argued, is any belief in human choice or markets. If your starting assumption is that things are as they are, you can only envision breaking them.</p><p>Citrini predicted AI would collapse real estate commissions by eliminating information asymmetry. But the internet already did that. You can look up every house for sale right now, with full history and photos. Real estate agents still exist, which is one of the better arguments that humans are resourceful at giving themselves work to do even in fields where they arguably shouldn&apos;t need to.</p><p>In a world of AI abundance, the things humans create will become more valuable precisely because they&apos;re human. AI art will make human art more desirable, not less, because provenance matters. 
AI-generated content will make human-generated content worth more, because the imperfections and idiosyncrasies are features. </p><p>Is this optimistic? Yes. Could it be wrong? Sure it could. But it&apos;s grounded in a real observation about human psychology that the doom models don&apos;t account for. Citrini&apos;s Ghost GDP thesis assumes that when AI replaces human labor, the value simply evaporates from the consumer economy. Thompson&apos;s counterargument is that humans will create new forms of value that are specifically human, and that demand for those forms of value will intensify as machine-generated alternatives become ubiquitous. The history of technological disruption suggests Thompson has the stronger case.</p><h2 id="pessimism-as-a-self-fulfilling-prophecy">Pessimism as a self-fulfilling prophecy</h2><p>What actually worries me is the second-order effects of the doom narrative itself.</p><p>When the smartest, most technically capable people in a field become convinced that the field is heading toward catastrophe, several things happen. Some leave the field entirely, removing exactly the talent you&apos;d want steering the ship. Some stay but adopt a posture of resigned inevitability, which is functionally identical to apathy. Some decide that since disaster is coming, they might as well accelerate and cash out. And a vocal minority become so consumed by existential risk that they advocate for extreme countermeasures that would concentrate power in ways that create entirely new categories of danger.</p><p>Robert Oppenheimer (in the wake of his famous invocation of the Bhagavad Gita) spent the years after the Manhattan Project arguing passionately for international cooperation on nuclear governance. 
He didn&apos;t say &quot;we should never have done this.&quot; He said, essentially, &quot;this is incredibly powerful, and we need to build institutions that can handle it.&quot; He was an optimist in the meaningful sense: he believed better outcomes were achievable if people worked to achieve them. He was right about that, because we&apos;re still here.</p><p>The most effective people working on AI safety and governance right now are, almost without exception, optimists. They work on alignment because they believe alignment is solvable. They push for better governance because they believe governance can work. The ones who&apos;ve concluded that the problem is unsolvable tend to stop doing useful work, for obvious reasons.</p><p>Gramsci wrote about &quot;pessimism of the intellect, optimism of the will.&quot; You look at the world clearly. You see the problems. And then you choose to act as if better outcomes are possible, because that choice is the precondition for achieving them.</p><h2 id="nobody-can-see-the-next-economy">Nobody can see the next economy</h2><p>What both Shumer and Citrini miss is that they&apos;re modeling a future economy using the structure of the present economy. They see AI replacing white-collar workers within the existing economic framework and project the consequences of that replacement within that same framework. But every major technological transformation has changed the framework itself, creating entirely new economic structures that were invisible from the vantage point of the old ones.</p><p>In 1995, if you told someone that one of the largest employers in America would be a company that let strangers sleep in each other&apos;s homes, they would have thought you were insane. If you told them that millions of people would make a living by talking into microphones about their opinions, or recording themselves playing video games, or writing newsletters on the internet, they&apos;d have had you committed. 
The entire creator economy, the gig economy, the app economy, the SaaS economy that Citrini is now eulogizing: none of it was predictable from the vantage point of 1995. And that&apos;s a 30-year window. The agricultural revolutions played out over centuries.</p><p>What will people do when AI can handle most current white-collar tasks? </p><p>I don&apos;t know. </p><p>And that&apos;s the whole point. </p><p>Nobody knew what displaced agricultural workers would do, either, until they did it. The absence of a visible next chapter isn&apos;t evidence that there won&apos;t be one. It&apos;s evidence that we&apos;re bad at predicting what humans will invent when constraints shift.</p><h2 id="choosing-optimism-with-open-eyes">Choosing optimism with open eyes</h2><p>I&apos;m not saying everything will be fine. I&apos;m not saying the transition will be smooth. I&apos;m not saying that the people displaced by AI won&apos;t suffer, or that we don&apos;t need better policy frameworks to handle the disruption. The distributional concerns at the heart of the Citrini piece are legitimate. If productivity gains accrue primarily to the owners of compute and capital while labor income stagnates, that&apos;s a genuine problem. Labor&apos;s share of GDP has been declining for decades. These are real numbers pointing to real challenges.</p><p>What I am saying is that the leap from &quot;this will be disruptive and we need to manage it carefully&quot; to &quot;this will cause an irreversible economic death spiral&quot; isn&apos;t supported by the evidence, by economic history, or by what we know about how humans respond to technological change. The Citrini scenario requires every adaptive mechanism in the economy to fail simultaneously and completely within roughly two years. 
That&apos;s a very specific left-tail outcome.</p><p>If you&apos;re building AI systems, if you&apos;re founding companies, if you&apos;re writing code that will shape how people experience the world, your psychological orientation toward the future is a variable that directly shapes outcomes. Pessimistic builders build defensively. They hoard and hedge and make decisions based on fear. Optimistic builders build with ambition. They invest in safety because they believe safety is achievable. They take on hard problems because they believe hard problems have solutions.</p><p>The tech industry is at a hinge point, and the narrative it tells itself will shape what it creates. If the dominant narrative is doom, the best people leave, the remaining people race to extract value before the collapse, and the governance frameworks get built by people who don&apos;t understand the technology. If the dominant narrative is cautious optimism, the best people stay, the work is good, and the institutions get built by people who know what they&apos;re building for.</p><p>Ed Yardeni, the veteran Wall Street strategist, noted in the wake of the Citrini selloff that &quot;the AI story has morphed from a Roaring 2020s productivity booster to an existential threat to our way of life.&quot; He found this striking. I find it absurd. The underlying technology hasn&apos;t changed, and the capabilities haven&apos;t shifted. What changed is the narrative, and narratives are always, <em>always </em>choices.</p><p>I choose optimism. I choose it because the alternative is surrender as sophistication. 
And because every time I look at the historical record, <em>the full record</em> that includes both the disasters and the averted disasters, both the tragedies and the triumphs, the case for human ingenuity and resilience is stronger than the case against it.</p><p>The doomers may have the best stories.</p><p>I believe the optimists have the best evidence.</p><p>I&apos;ll take the evidence.</p><p>Everything is (going to be) awesome.</p> Notes on Setting up Forgejo on Coolify with SSH - Robb Knight • Posts • Atom Feed https://rknight.me/blog/notes-on-setting-up-forgejo-on-coolify-with-ssh/ 2026-02-24T19:59:19.000Z <p>For reasons that I'll write about in another post, I had occasion to set up my own instance of <a href="https://forgejo.org">Forgejo</a> - &quot;<em>a self-hosted lightweight software forge</em>&quot;, aka &quot;We have GitHub at home&quot;. Despite having an install of <a href="https://coolify.io">Coolify</a> on one of my servers, which should have made this one-click, it was significantly more clicks than that.</p> <p>The version in Coolify's library is version 8, while the current version is 14 - this was the start of my issues. I was able to get Forgejo running. I could create repositories, clone them and push, but only over HTTPS and not SSH. The port <em>should</em> have been mapped correctly to make it work but something was misconfigured. SSH is never a fun thing to debug and I had lots of help from <a href="https://melkat.lol">Melanie</a>, <a href="https://neatnik.net">Adam</a>, and <a href="https://www.andrlik.org">Daniel</a>, all of whom had it working on their instances without any tinkering.</p> <p>As best I can tell, between version 8 and 14 lots of things changed, as you'd expect, so the changes I made to the port mapping weren't applying correctly. Then I'd try a fresh install but forget other settings I needed to edit. Then I'd do it again and forget something else. 
I installed Forgejo from scratch at least six times before I was able to get it running and the final working version was simple: change the version to 14, change the <code>22222</code> port mapping to <code>2222</code> and <em>don't touch anything else</em>. That's it. I had seen <a href="https://github.com/coollabsio/coolify/issues/6280">this GitHub issue</a> which also ended with &quot;lol did a reinstall now it's fine&quot; so I at least have a bit more info here.</p> <p>My final docker compose file looks like this:</p> <pre class="language-yaml"><code class="language-yaml"><span class="token key atrule">services</span><span class="token punctuation">:</span><br /> <span class="token key atrule">forgejo</span><span class="token punctuation">:</span><br /> <span class="token key atrule">image</span><span class="token punctuation">:</span> <span class="token string">'codeberg.org/forgejo/forgejo:14'</span><br /> <span class="token key atrule">environment</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> SERVICE_URL_FORGEJO_3000<br /> <span class="token punctuation">-</span> <span class="token string">'FORGEJO__server__ROOT_URL=${SERVICE_URL_FORGEJO}'</span><br /> <span class="token punctuation">-</span> <span class="token string">'FORGEJO__migrations__ALLOWED_DOMAINS=${FORGEJO__migrations__ALLOWED_DOMAINS}'</span><br /> <span class="token punctuation">-</span> <span class="token string">'FORGEJO__migrations__ALLOW_LOCALNETWORKS=${FORGEJO__migrations__ALLOW_LOCALNETWORKS-false}'</span><br /> <span class="token punctuation">-</span> USER_UID=1000<br /> <span class="token punctuation">-</span> USER_GID=1000<br /> <span class="token key atrule">ports</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> <span class="token string">'2222:22'</span><br /> <span class="token key atrule">volumes</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> 
<span class="token string">'forgejo-data:/data'</span><br /> <span class="token punctuation">-</span> <span class="token string">'forgejo-timezone:/etc/timezone:ro'</span><br /> <span class="token punctuation">-</span> <span class="token string">'forgejo-localtime:/etc/localtime:ro'</span><br /> <span class="token key atrule">healthcheck</span><span class="token punctuation">:</span><br /> <span class="token key atrule">test</span><span class="token punctuation">:</span><br /> <span class="token punctuation">-</span> CMD<br /> <span class="token punctuation">-</span> curl<br /> <span class="token punctuation">-</span> <span class="token string">'-f'</span><br /> <span class="token punctuation">-</span> <span class="token string">'http://127.0.0.1:3000'</span><br /> <span class="token key atrule">interval</span><span class="token punctuation">:</span> 2s<br /> <span class="token key atrule">timeout</span><span class="token punctuation">:</span> 10s<br /> <span class="token key atrule">retries</span><span class="token punctuation">:</span> <span class="token number">15</span></code></pre> <h3>Miscellanea</h3> <p>The app.ini file, when installed with Docker, lives at <code>/data/gitea/conf/app.ini</code>.</p> <p>You can <a href="https://www.coryd.dev/posts/2025/updating-forgejos-robotstxt">add robots.txt</a>, <a href="https://forgejo.org/docs/next/contributor/customization/">customise the icons</a>, and even the templates. These won't exist in the container under <code>data/gitea/public</code> (for robots and icons) or <code>data/gitea/templates</code> on a standard install. If you add them, they <em>then</em> override the defaults, usually after a reboot. 
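<p>Pulling those paths together, here's a rough sketch of where the override files live inside the container (paths as described above, assuming a standard Docker install; the tree is illustrative, not exhaustive):</p> <pre><code>/data/gitea/
├── conf/
│   └── app.ini       # main configuration file
├── public/           # robots.txt and custom icons go here
└── templates/        # custom templates, e.g. home.tmpl
</code></pre> 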
My updated home page template, <code>home.tmpl</code>:</p> <pre class="language-handlebars"><code class="language-handlebars"><br /><span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>div</span> <span class="token attr-name">role</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>main<span class="token punctuation">"</span></span> <span class="token attr-name">aria-label</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span><span class="token punctuation">"</span></span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>page-content home<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>div</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>tw-mb-8 tw-px-8<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>div</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>center<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>img</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>logo<span class="token 
punctuation">"</span></span> <span class="token attr-name">width</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>150<span class="token punctuation">"</span></span> <span class="token attr-name">height</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>220<span class="token punctuation">"</span></span> <span class="token attr-name">src</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>/img/logo.svg<span class="token punctuation">"</span></span> <span class="token attr-name">alt</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span><span class="token punctuation">"</span></span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>div</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>hero<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>h1</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>ui icon header title<span class="token punctuation">"</span></span> <span class="token special-attr"><span class="token attr-name">style</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span><span class="token value css language-css"><span class="token property">font-size</span><span class="token punctuation">:</span> 3.5em<span 
class="token punctuation">;</span></span><span class="token punctuation">"</span></span></span><span class="token punctuation">></span></span><br /> <br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;/</span>h1</span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>p</span> <span class="token special-attr"><span class="token attr-name">style</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span><span class="token value css language-css"><span class="token property">font-size</span><span class="token punctuation">:</span>1.3em</span><span class="token punctuation">"</span></span></span><span class="token punctuation">></span></span>The personal Git instance of <br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>a</span> <span class="token attr-name">href</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>https://rknight.me<span class="token punctuation">"</span></span><span class="token punctuation">></span></span>Robb Knight<span class="token tag"><span class="token tag"><span class="token punctuation">&lt;/</span>a</span><span class="token punctuation">></span></span>. 
<br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;</span>a</span> <span class="token attr-name">href</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>/robb<span class="token punctuation">"</span></span><span class="token punctuation">></span></span>Have a gander at the code<span class="token tag"><span class="token tag"><span class="token punctuation">&lt;/</span>a</span><span class="token punctuation">></span></span>.<span class="token tag"><span class="token tag"><span class="token punctuation">&lt;/</span>p</span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;/</span>div</span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;/</span>div</span><span class="token punctuation">></span></span><br /> <span class="token tag"><span class="token tag"><span class="token punctuation">&lt;/</span>div</span><span class="token punctuation">></span></span><br /><span class="token tag"><span class="token tag"><span class="token punctuation">&lt;/</span>div</span><span class="token punctuation">></span></span><br /></code></pre> <p>Finally, not Forgejo related but noting it here anyway, to connect to the container when you're on the server (via <a href="https://coryd.dev">Cory</a>), run <code>docker ps -a | grep forgejo</code> to find the Forgejo container then use the ID to connect: <code>docker exec -it &lt;ID&gt; sh</code>.</p> <p>You can browse the code I've already moved on my Forgejo at <a href="https://git.7622.me">git.7622.me</a>.</p> Adding OpenStreetMap login to Auth0 - Terence Eden’s Blog https://shkspr.mobi/blog/?p=67593 2026-02-24T12:34:21.000Z <p>So you want to add OSM as an OAuth provider to Auth0? 
Here&#39;s a tip - you do <em>not</em> want to create a custom social connection!</p> <p>Instead, you need to create an &#34;OpenID Connect&#34; provider. Here&#39;s how.</p> <h2 id="opensteetmap"><a href="https://shkspr.mobi/blog/2026/02/adding-openstreetmap-login-to-auth0/#opensteetmap">OpenStreetMap</a></h2> <p>As per <a href="https://wiki.openstreetmap.org/wiki/OAuth#Using_OpenStreetMap_as_identity_provider">the OAuth documentation</a> you will need to:</p> <ul> <li>Register a new app at <a href="https://www.openstreetmap.org/oauth2/applications/">https://www.openstreetmap.org/oauth2/applications/</a></li> <li>Give it a name that users will recognise</li> <li>Give it a redirect of <code>https://Your Auth0 Tenant.eu.auth0.com/login/callback</code></li> <li>Tick the box for &#34;Sign in using OpenStreetMap&#34;</li> </ul> <p>Once created, you will need to securely save your Client ID and Client Secret.</p> <h2 id="auth0"><a href="https://shkspr.mobi/blog/2026/02/adding-openstreetmap-login-to-auth0/#auth0">Auth0</a></h2> <p>These options change frequently, so use this guide with care.</p> <ul> <li>Once you have logged in to your Auth0 Tenant, go to Authentication → Enterprise → OpenID Connect → Create Connection</li> <li>Provide the new connection with the Client ID and Client Secret</li> <li>Set the &#34;scope&#34; to be <code>openid</code></li> <li>Set the OpenID Connect Discovery URL to be <code>https://www.openstreetmap.org/.well-known/openid-configuration</code></li> <li>In the &#34;Login Experience&#34; tick the box for &#34;Display connection as a button&#34;</li> <li>Set the favicon to be <code>https://blog.openstreetmap.org/wp-content/uploads/2022/07/osm-favicon.png</code> or other suitable graphic</li> </ul> <h2 id="next-steps"><a href="https://shkspr.mobi/blog/2026/02/adding-openstreetmap-login-to-auth0/#next-steps">Next Steps</a></h2> <p>We&#39;re not quite done, sadly.</p> <p>The details which OSM sends back to Auth0 are limited, so Auth0 is missing a 
few bits:</p> <pre><code class="language-json">{ &#34;created_at&#34;: &#34;2026-02-29T12:34:56.772Z&#34;, &#34;identities&#34;: [ { &#34;user_id&#34;: &#34;openstreetmap-openid|123456&#34;, &#34;provider&#34;: &#34;oidc&#34;, &#34;connection&#34;: &#34;openstreetmap-openid&#34;, &#34;isSocial&#34;: false } ], &#34;name&#34;: &#34;&#34;, &#34;nickname&#34;: &#34;&#34;, &#34;picture&#34;: &#34;https://cdn.auth0.com/avatars/default.png&#34;, &#34;preferred_username&#34;: &#34;Terence Eden&#34;, &#34;updated_at&#34;: &#34;2026-02-04T12:01:33.772Z&#34;, &#34;user_id&#34;: &#34;oidc|openstreetmap-openid|123456&#34;, &#34;last_ip&#34;: &#34;12.34.56.78&#34;, &#34;last_login&#34;: &#34;2026-02-29T12:34:56.772Z&#34;, &#34;logins_count&#34;: 1, &#34;blocked_for&#34;: [], &#34;guardian_authenticators&#34;: [], &#34;passkeys&#34;: [] } </code></pre> <p>Annoyingly, Auth0 doesn&#39;t set a name or nickname - so you&#39;ll need to manually get the <code>preferred_username</code>, or create a &#34;User Map&#34;:</p> <pre><code class="language-json">{ &#34;mapping_mode&#34;: &#34;use_map&#34;, &#34;attributes&#34;: { &#34;nickname&#34;: &#34;${context.tokenset.preferred_username}&#34;, &#34;name&#34;: &#34;${context.tokenset.preferred_username}&#34; } } </code></pre> <p>There&#39;s also no avatar image - only the default one.</p> <h3 id="getting-the-avatar-image"><a href="https://shkspr.mobi/blog/2026/02/adding-openstreetmap-login-to-auth0/#getting-the-avatar-image">Getting the Avatar Image</a></h3> <p>The <a href="https://wiki.openstreetmap.org/wiki/API_v0.6">OSM API</a> has a method for <a href="https://wiki.openstreetmap.org/wiki/API_v0.6#Methods_for_user_data">getting user data</a>.</p> <p>For example, here&#39;s all my public data: <a href="https://api.openstreetmap.org/api/0.6/user/98672.json">https://api.openstreetmap.org/api/0.6/user/98672.json</a> - thankfully no authorisation required!</p> <pre><code class="language-json">{ &#34;user&#34;: { &#34;id&#34;: 98672, 
&#34;display_name&#34;: &#34;Terence Eden&#34;, &#34;img&#34;: { &#34;href&#34;: &#34;https://www.gravatar.com/avatar/52cb49a66755f31abf4df9a6549f0f9c.jpg?s=100&amp;d=https%3A%2F%2Fapi.openstreetmap.org%2Fassets%2Favatar_large-54d681ddaf47c4181b05dbfae378dc0201b393bbad3ff0e68143c3d5f3880ace.png&#34; } } } </code></pre> <p>Alternatively, you can <a href="https://github.com/microlinkhq/unavatar/issues/488">use the Unavatar service</a> to get the image indirectly.</p> <p>I hope that&#39;s helpful to someone!</p> Dopplr colours - James' Coffee Blog https://jamesg.blog/2026/02/24/dopplr-colours/ 2026-02-24T09:03:45.000Z <p>Last year I was introduced to the idea of “Dopplr colours” in the IndieWeb community. This refers to an accent colour assigned to cities on the now-defunct travel website <a href="https://en.wikipedia.org/wiki/Dopplr" rel="noreferrer">Dopplr</a>. You can see examples by clicking through <a href="https://web.archive.org/web/20130116102419/https://www.dopplr.com/place/us/ny/new-york">different Dopplr city pages in the Internet Archive</a> and paying attention to the borders of the map.</p><p>While I haven’t been able to find an authoritative description of the algorithm, my understanding is that the Dopplr colours were assigned using an MD5-based algorithm. 
Aaron implemented a <a href="https://pin13.net/city-color.php?city=jamesg.blog">demo of the Dopplr colour system</a> and <a href="https://chat.indieweb.org/dev/2025-09-10#t1757532888865800">described the algorithm in PHP</a> as:</p><pre><code>substr(md5($_REQUEST['city']), 0, 6)</code></pre><p>Here is an equivalent Python implementation:</p><div class="highlight"><pre><span></span><span class="kn">import</span><span class="w"> </span><span class="nn">hashlib</span> <span class="err">​</span> <span class="n">colour</span> <span class="o">=</span> <span class="n">hashlib</span><span class="o">.</span><span class="n">md5</span><span class="p">(</span><span class="s2">"jamesg.blog"</span><span class="o">.</span><span class="n">encode</span><span class="p">())</span><span class="o">.</span><span class="n">hexdigest</span><span class="p">()[:</span><span class="mi">6</span><span class="p">]</span> <span class="err">​</span> <span class="nb">print</span><span class="p">(</span><span class="s2">"#"</span> <span class="o">+</span> <span class="n">colour</span><span class="p">)</span> </pre></div> <p>These code snippets calculate the MD5 hash for a string, then take the first six characters. This creates a hexadecimal value that can then be used as a colour. The Dopplr colour for my domain name is <code>#e228f3</code>. It’s pink! Of note, you can calculate a Dopplr colour for any string, not just city names.</p><p>The IndieWeb community uses Dopplr to assign colours to cities in <a href="https://indieweb.org/IndieWebCamps">its timeline of in-person IndieWebCamp events:</a></p><img alt="A table showing a list of cities with cells coloured using the city's Dopplr colour if an IndieWebCamp event was held in the city in a given year." 
class="kg-image" loading="lazy" sizes="(min-width: 720px) 720px" src="https://editor.jamesg.blog/content/images/2026/02/timeline.png" srcset="https://editor.jamesg.blog/content/images/size/w600/2026/02/timeline.png 600w, https://editor.jamesg.blog/content/images/size/w1000/2026/02/timeline.png 1000w, https://editor.jamesg.blog/content/images/size/w1600/2026/02/timeline.png 1600w, https://editor.jamesg.blog/content/images/2026/02/timeline.png 1816w"/><p>I had never heard of the idea of Dopplr colours prior to the IndieWeb, and a Google search was not fruitful in returning a page that described the algorithm. I thought I’d write this page to document the idea and make it easier for people to find.</p> Finished reading Blood Rites - Molly White's activity feed 699d11748d9cd5e249062fd3 2026-02-24T02:48:20.000Z <article class="entry h-entry hentry"><header><div class="description">Finished reading: </div></header><div class="content e-content"><div class="book h-entry hentry"><a class="book-cover-link" href="https://www.mollywhite.net/reading/books?search=Blood%20Rites"><img class="u-photo book-cover" src="https://m.media-amazon.com/images/S/compressed.photo.goodreads.com/books/1661018414i/99383.jpg" alt="Cover image of Blood Rites" style="max-width: 300px;"/></a><div class="book-details"><div class="top"><div class="series-info"><i>The Dresden Files</i> series, book <span class="series-number">6</span>. </div><div class="title-and-byline"><div class="title"><i class="p-name">Blood Rites</i> </div><div class="byline">by <span class="p-author h-card">Jim Butcher</span>. </div></div><div class="book-info">Published <time class="dt-published published" datetime="2004">2004</time>. 372 pages. </div></div><div class="bottom"><div class="reading-info"><div class="reading-dates"> Started <time class="dt-accessed accessed" datetime="2026-02-08">February 8, 2026</time>; completed February 21, 2026. 
</div></div></div></div></div><img src="https://www.mollywhite.net/assets/images/placeholder_social.png" alt="Illustration of Molly White sitting and typing on a laptop, on a purple background with 'Molly White' in white serif." style="display: none;"/></div><footer class="footer"><div class="flex-row post-meta"><div class="timestamp">Posted: <time class="dt-published" datetime="2026-02-24T02:48:20+00:00" title="February 24, 2026 at 2:48 AM UTC">February 24, 2026 at 2:48 AM UTC</time>. </div></div><div class="bottomRow"><div class="tags">Tagged: <a class="tag p-category" href="https://www.mollywhite.net/reading/books?tags=fantasy" title="See all books tagged "fantasy"" rel="category tag">fantasy</a>, <a class="tag p-category" href="https://www.mollywhite.net/reading/books?tags=mystery" title="See all books tagged "mystery"" rel="category tag">mystery</a>, <a class="tag p-category" href="https://www.mollywhite.net/reading/books?tags=urban_fantasy" title="See all books tagged "urban fantasy"" rel="category tag">urban fantasy</a>. </div></div></footer></article> IndieWeb events, cleaning up and speedcubing - W08 - Joel's Log Files https://joelchrono.xyz/blog/2026-w08 2026-02-24T01:10:00.000Z <p>These weeknotes are being written earlier than usual: I started them on Thursday and will keep filling them in throughout the weekend. I often forget some things going on in one day or another, so maybe doing it like this will be helpful, as more things are fresh in my memory as of the time of writing them.</p> <p>Now that I’m finishing them early on Monday, so many things have happened… My country is in a state of disarray, as conflict between the army and drug cartels has risen after the leader of the biggest cartel in Mexico died. I type this after my workplace gave us the day off to avoid any risks. My family and I are safe, thank God. 
Alas, let’s continue with the rest of the notes!</p> <ul> <li> <p>📅 A few days ago I forgot to mention that I attended a <em>Homebrew Website Club</em> meeting! I was invited by <a href="https://burgeonlab.com">Naty</a> and have been to a couple meet-ups already, but never talked until now. You can check the <a href="https://indieweb.org/events/2026-02-18-hwc-pacific/">last event’s notes</a> to see what we talked about! I introduced myself, talked about my <a href="https://joelchrono.xyz/bookshelf/">bookshelf</a> and my <a href="https://joelchrono.xyz/pings/">blog pings</a>, and shared how I find people who mention my website via <a href="https://joelchrono.xyz/blog/using-freshrss-user-queries/">FreshRSS user queries</a>.</p> </li> <li> <p>✍️ I also had the chance to join the <a href="https://events.indieweb.org/2026/02/homebrew-website-club-writing-edition-RV8q3abBTeCx">HWC Writing Edition meeting</a>, which was hosted by <a href="https://jamesg.blog/">James</a> himself! That was pretty cool, although I didn’t speak at all on this one since I wasn’t by myself.</p> </li> <li> <p>🏋️ Gymgoing was a bit of a failure this week; I went only once. However, I did use the stationary bike in my room, so that’s at least something, right?</p> </li> <li> <p>🎮 Last week I couldn’t get my <strong>8BitDo Pro 2</strong> controller to connect via Bluetooth to my laptop, but after dabbling around in a number of ways, installing tools like <code class="language-plaintext highlighter-rouge">evtest</code> and connecting to it via <code class="language-plaintext highlighter-rouge">bluetoothctl</code>, something happened and it is now getting properly recognized and usable wirelessly! I played a lot more later in the week and it connects without any issues now.</p> </li> <li> <p>💾 I cleaned up my bedroom during the weekend, and honestly I am much happier with it once again. 
I found some inspiration to write about some <a href="https://joelchrono.xyz/blog/devices-collecting-dust/">old hardware of mine collecting dust</a>. There’s a couple of things yet to do, but they can wait.</p> </li> <li> <p>⏲️ For the first time in ages, I did a 10-solve <a href="https://en.wikipedia.org/wiki/Speedcubing">speedcubing</a> session. The results were pretty good, if I do say so myself. My best time was 18.38 seconds, my best average of 5 was 19.98, and my overall average was 25.55.</p> </li> <li> <p>⌚ I think it might be nice to mention the watch I wore throughout the week! It’s often my <a href="https://joelchrono.xyz/blog/you-only-live-twice-casio-ae1200-review/">Casio Royale</a>, but this week I switched it up for the <a href="https://www.casio.com/us/watches/casio/product.CA-53W-1/">Casio CA-53W</a>. I also wore the <a href="https://www.casio.com/intl/watches/casio/product.A130WE-7A/">A-130WE</a> yesterday!</p> </li> </ul> <h2 id="gaming">Gaming</h2> <h3 id="completed">Completed</h3> <ul> <li><strong>Resident Evil 2</strong> - I cannot praise this game enough. This old title for the PS1 has been an absolute success for me: I finished playing the whole Leon A side and it was a joy from start to finish! I also started and finished the Claire B side, and experiencing the story from a different angle was incredible. So much effort went into this game, and I enjoyed the variety between both scenarios, even if they don’t perfectly line up. For a game from the late 90s, it is commendable. It’s basically four campaigns—plus a bunch of extra modes to unlock—allowing for so much fun and replayability. I may revisit it some other time, but I’m also curious to try the other classics.</li> </ul> <h3 id="ongoing">Ongoing</h3> <ul> <li> <p><strong>Grapple Dog</strong> - I have played some more levels of this game and it is really fun! 
A cute platformer with some challenge and great level design so far. I’ve already completed the first two worlds and gotten a bunch of the collectibles as well. It’s awesome to traverse, the gameplay design has been top notch, and every level introduces interesting mechanics! I kind of want to get all the achievements of this one, and I’m poking away at it whenever I’m on my laptop.</p> </li> <li> <p><strong>Final Fantasy VI</strong> - After obtaining the Falcon, my new flying ship, I decided to just grind for a little bit, so I didn’t progress much in the story. I talked to a few people in a new town while trying to find some of my lost friends!</p> </li> <li> <p><strong>Slice &amp; Dice</strong> - A few fights here and there, not too much happening, but I played it and had fun!</p> </li> </ul> <h2 id="reading">Reading</h2> <ul> <li> <p><strong>Planetes</strong> - I’ve read a few more chapters of this and I still find it super enjoyable. The plot has started to pick up and a timeskip happened that puts us right in the vanguard of space exploration. I am really looking forward to the rest of it, and I’m also a bit upset that it won’t last much longer, as the manga is already finished at less than 30 chapters total.</p> </li> <li> <p><strong>Persepolis Rising</strong> - I keep struggling to commit to actually reading this book. I love the progress I’ve made thus far; I need to keep it up.</p> </li> </ul> <h2 id="around-the-web">Around the Web</h2> <p>This time I added a new section where I will mention things I discovered or learned during the week! It can contain games, websites and other tidbits that aren’t blogposts or videos.</p> <h3 id="blogposts">Blogposts</h3> <ul> <li><a href="https://smallcypress.bearblog.dev/first-impressions-of-my-mp3-player-innioasis-y1/">first impressions of my MP3 player (Innioasis Y1)</a> - I love seeing more fellow bloggers getting into this kinda retro-inspired single-purpose device trend that I’ve noticed more lately. 
It may be just a temporary thing, but I genuinely find joy in it.</li> <li><a href="https://syls.blog/unpublishing-posts/">Unpublishing posts</a> - I was actually going to share the series that Syl unpublished, but this post shares some thoughts about getting too personal on the web instead. I don’t really mind that much when it comes to music, but there are some things I have doubts about. Do whatever you like with your site, people!</li> <li><a href="https://blog.ctms.me/posts/2026-02-19-letting-go-of-hobbies/">Letting go of old hobbies</a> - This is such an interesting post: what happens when you’ve had your fill of the hobby you enjoyed? When you are content with your skill, or your setup, and don’t need to pursue it any more? Good stuff.</li> <li><a href="https://gabz.blog/posts/emulate-vs-paying-up">Emulate vs Paying up!</a> - Pokémon FireRed and LeafGreen made it to the Nintendo Switch, and they are not <em>$20</em>, but <strong>$26 USD</strong> in my country. I am not paying that kind of money for a GBA game that I can already play on my Anbernic or my Miyoo devices.</li> <li><a href="https://theresmiling.eu/blog/2026/02/rediscovering-my-cd-collection.html">Rediscovering my CD collection</a> - Elena shares some thoughts about her CD collection, ripping them, digital vs physical media and optimizing the physical space they use, as well as some general thoughts about her music listening habits!</li> </ul> <h3 id="youtube">YouTube</h3> <ul> <li><a href="https://youtu.be/86FjQ3VhH0o">These are Anbernic’s strangest handhelds</a> - This was an incredible video featuring Anbernic’s mishaps creating handhelds. They always seem to have a caveat that makes them one step away from perfection. This was a very fun watch.</li> <li><a href="https://youtu.be/LmZ39HaPdpI">Handheld gaming has changed</a> - A video reflecting on modern handheld gaming. The experience on the go should be quick and easy to return to. 
It doesn’t need to be a grandiose adventure, just something quick to lock into and spend time with, but things have gotten muddier lately.</li> <li><a href="https://youtu.be/nTzL2mHNANo">How Capcom botched the remakes of Resident Evil 2 and 3</a> - This was a wonderful video that explains in detail the things that the RE remakes—wonderful games in their own right—got wrong.</li> <li><a href="https://youtu.be/r9LCwI5iErE">The transformative power of classical music</a> - A YouTube short led me to this wonderful TED talk. I really enjoyed the speech, and the piano performance and the way things are laid out were simply superb. Just give this one a watch; it’s an old one, but it still holds up.</li> </ul> <h3 id="cool-discoveries">Cool discoveries</h3> <ul> <li> <p><a href="https://lategamer.bearblog.dev/ttrpg-resources">TTRPG Resources</a> - Dave wrote a cool list of resources he has been finding as he explores the world of tabletop RPGs.</p> </li> <li> <p><strong>Games like Resident Evil</strong> - I found out about some games that follow the classic fixed camera and tank controls of the original games. 
Some interesting ones are <a href="https://tophat.studio/games_alisa.html">Alisa</a>, which features PS1-style graphics, or <a href="https://en.wikipedia.org/wiki/Tormented_Souls">Tormented Souls</a>, which is a more modern take that still sticks to the classic formula.</p> </li> </ul> <p>This is day 21 of <a href="https://100daystooffload.com">#100DaysToOffload</a></p> <p> <a href="mailto:me@joelchrono.xyz?subject=IndieWeb events, cleaning up and speedcubing - W08">Reply to this post via email</a> | <a href="https://fosstodon.org/@joel/116122957092219203">Reply on Fediverse</a> </p> Agentic swarms are an org-chart delusion - Westenberg 699cf78f72729900014f8261 2026-02-24T01:07:08.000Z <img src="https://www.joanwestenberg.com/content/images/2026/02/HA3_mmUbwAAvNun.jpeg" alt="Agentic swarms are an org-chart delusion"><p>The &quot;agentic swarm&quot; vision of productivity is comfortingly familiar. </p><p>Which should be an immediate red flag...</p><p>You take the existing corporate hierarchy, you replace the bottom layers with a swarm of AI agents, and you keep humans around as supervisors. It&apos;s an org chart with robots instead of interns. The VP of Engineering becomes the VP of Engineering Agents.</p><p>Congratulations. You&apos;ve reinvented middle management.</p><p>This is what Clayton Christensen would have called a sustaining innovation in the guise of disruption: you&apos;re using new technology to do the same thing slightly more efficiently, in a way that looks and feels like the old thing, and the incumbents love it because the power structure stays intact.</p><p>The person at the top still delegates. They still think in terms of roles and departments and functional areas. They&apos;ve just swapped out the people underneath them for software that doesn&apos;t need health insurance.</p><p>But when an actually disruptive technology arrives, it makes the existing structure irrelevant. 
</p><p>And AI is that tech.</p><h2 id="roles-are-an-artifact-not-a-law">Roles are an artifact, not a law</h2><p>The entire &quot;swarms of agents&quot; model is based on the idea that work naturally decomposes into roles. You ~need a marketing agent, a sales agent, a support agent, a development agent - because marketing, sales, support, and development are existing job titles and, for humans at least, fundamentally different activities that belong in different boxes.</p><p>This feels like an obvious truism, but it&apos;s a depreciable artifact of organizational scaling, not some deep // universal truth about work itself.</p><p>Adam Smith&apos;s pin factory example is famous because he showed that dividing labor into specialized roles made pin production dramatically more efficient. But the pin factory was a specific solution to a specific constraint: individual humans are slow, they get tired, and they can only hold so much context in their heads at once.</p><p>Specialization was a workaround for bio-cognition. If you could have one person who simultaneously understood metallurgy, wire-drawing, straightening, cutting, pointing, grinding, and packaging - and could do all of it concurrently without fatigue - Smith would never have divided the labor in the first place.</p><p>That hypothetical person is what a solo practitioner with a capable AI already looks like.</p><h2 id="outcomes-over-org-charts">Outcomes over org charts</h2><p>When I sit down with an AI assistant and say &quot;write me a marketing brief, then generate the landing page copy, then draft the ad variants, then build the page, then set up the analytics tracking,&quot; I&apos;m not managing a team of five agents with five different specializations. 
I&apos;m issuing a sequence of commands to the same system from the same interface.</p><p>The boundaries between &quot;marketer&quot; and &quot;developer&quot; and &quot;analyst&quot; dissolve, because those boundaries were never real boundaries in the work itself. They were boundaries in human capacity.</p><p>The people who will thrive aren&apos;t &quot;agent managers.&quot; They&apos;re people who can say what they want and evaluate whether they got it - and whether what they got was either good or shit.</p><p>The workflow looks less like a CEO directing department heads and more like a musician working in a DAW: one person playing every instrument, mixing, mastering, and producing, toggling between tasks at the speed of thought rather than delegating through layers of abstraction.</p><p>Brian Eno talked about the recording studio as a compositional tool &#x2014; something that collapsed composer, performer, and engineer into one creative role. AI is doing the same thing to knowledge work, collapsing strategist, executor, and analyst into one operational role.</p><h2 id="this-is-already-happening">This is ~already happening</h2><p>Plenty of one-person businesses are shipping products, running marketing, handling support, and managing finances through a single AI-augmented workflow. They&apos;re not thinking about it in terms of agents with job titles. They&apos;re thinking about it as &quot;what do I need to get done today&quot; and then doing all of it, fluidly, without ever switching between conceptual departments.</p><p>The &quot;swarm of agents&quot; idea appeals to people who either come from or aspire to the world of management. If you&apos;ve spent your career hiring people and organizing them into teams - or dreaming // LARPing about becoming a CEO - then naturally you look at AI and see a new kind of team to organize. It&apos;s Maslow&apos;s hammer. 
When all you have is an org chart, everything looks like a headcount decision.</p><h2 id="why-this-matters">Why this matters</h2><p>If you believe the future is agent management, you&apos;ll build tools for orchestrating fleets of specialized bots. You&apos;ll create dashboards for monitoring your marketing agent separately from your sales agent separately from your dev agent. You&apos;ll recreate Salesforce, but for robots.</p><p>If you believe the future is unified execution, you&apos;ll build tools that let one person express intent and get outcomes across every domain from a single surface. The interface collapses. The abstraction layers disappear. You don&apos;t manage agents any more than you manage the individual transistors in your laptop.</p><p>The first path leads to a world that looks a lot like the one we already have, just with fewer humans in the lower tiers.</p><p>The second path leads to something actually new: a world where the unit of economic production isn&apos;t the company or the team but the individual, with a general-purpose cognitive tool that makes specialization itself an anachronism.</p><p>I know which version the people who currently sit atop org charts would prefer. And I know which version the technology is actually pushing toward. Those two things, for the moment, are very different.</p><p>But the technology tends to win these arguments eventually. It won in music production. It won in publishing. It won in video. 
Every time a tool collapses specialized roles into generalist capability, the generalists inherit the earth &#x2014; no matter how loudly the specialists insist their particular expertise can&apos;t be automated or absorbed.</p><p>The future of work isn&apos;t managing a swarm.</p><p>It&apos;s being the swarm.</p> Thoughts on Farcaster - Westenberg 699ccba472729900014f81a0 2026-02-23T22:07:22.000Z <img src="https://www.joanwestenberg.com/content/images/2026/02/ChatGPT-Image-Feb-24--2026--09_00_57-AM.png" alt="Thoughts on Farcaster"><p>For the past few weeks I&apos;ve been asking myself why I&apos;m still on Farcaster, whether I&apos;ll stay, whether I even want to.</p><p>I&apos;ve landed on some answers.</p><p>Farcaster, for the uninitiated, was the most credible attempt anyone has made at building a decentralized, crypto-based social network that people actually wanted to use. Founded in 2020 by Dan Romero and Varun Srinivasan, both ex-Coinbase, and backed by $180 million from Andreessen Horowitz, Paradigm, and Union Square Ventures, Farcaster set out to prove that crypto could build something worth using beyond speculation and exit liquidity and the endless recursive loop of tokens that exist to fund the creation of more tokens. It was going to be the social network you actually owned, where your identity and your social graph belonged to you in some meaningful sense, where no single corporate entity could rug-pull your entire online life the way Elon Musk had done to Twitter&apos;s culture or Mark Zuckerberg had done to everyone else.</p><p>And for a while, it worked. Vitalik Buterin posted there. Developers built interesting things on the protocol; Frames, for instance, let you embed interactive applications directly into posts. The Farcaster team shipped a working decentralized protocol that multiple independent teams could build on without permission. Most crypto projects never come close to that kind of technical achievement. And the vibe was good! 
If you squinted, you could see the outline of what a post-platform internet might look like: open protocols and communities forming around shared creation rather than algorithmic optimization.</p><p>And then 2025 happened.</p><h2 id="after-the-crash">After the crash</h2><p>In December 2025, Dan Romero announced that social-first hadn&apos;t worked.</p><p>Farcaster pivoted to wallets and trading. The thesis was &quot;come for the tool, stay for the network,&quot; an honest attempt to find the growth mechanic that the social layer alone hadn&apos;t provided. The wallet had been performing well, and Romero called it &quot;the closest we&apos;ve been to product-market fit in five years.&quot; You can argue with the direction, but I don&apos;t know that you can argue with a founder who spent half a decade on one approach, acknowledged it wasn&apos;t working, and tried something else instead of pumping a token and heading for the exits.</p><p>I&apos;d call this integrity by crypto standards and, frankly, by most standards.</p><p>In January 2026, Neynar, the infrastructure company that already powered most of Farcaster&apos;s ecosystem, acquired the whole thing. Protocol contracts, code repositories, the app, Clanker, all of it. Romero and Srinivasan stepped back // away.</p><p>The handoff makes sense. Neynar had been Farcaster&apos;s backbone since 2021, serving over a thousand customers. If anyone understood the ecosystem&apos;s plumbing, they did. But it also meant the social network now had a new steward, and regardless of how well-intentioned that steward might be, an era had come to an end and the structural reality had shifted.</p><p>All of this happened against the backdrop of crypto&apos;s broader 2025 reckoning. The memecoin market cap collapsed from $150.6 billion in December 2024 to $39.4 billion by November 2025. The TRUMP token, launched three days before the inauguration with the subtlety of a carnival barker, cratered over 90% from its $75 peak. 
The LIBRA token, shilled by Argentine President Javier Milei on Valentine&apos;s Day, vaporized $4.5 billion and took 86% of its investors&apos; money with it. Over 11.5 million crypto tokens died in 2025, most of them memecoins, most of them launched with no roadmap and no team, with no purpose beyond being the next thing someone could pump before dumping. The October crash wiped $19 billion in leveraged positions in a single event. The Fear &amp; Greed Index, which had read &quot;extreme greed&quot; in September, plummeted to levels that suggested the market had collectively remembered that gravity exists...</p><p>If you were looking for a narrative about crypto fulfilling its original promise of financial sovereignty and a more equitable internet, 2025 was a punishing year.</p><p>I believe Romero and Srinivasan gave Farcaster everything they had to give. I believe they cared // gave a shit // tried. They spent five years building real infrastructure and shipping real products. They cultivated a community. They weren&apos;t exit-scamming or pumping a token. They were doing the boring // unglamorous work of trying to make decentralized social media function at scale, and they ran into the hard problem that it might not be possible.</p><h2 id="loyalty-on-a-sinking-ship">Loyalty on a sinking ship</h2><p>Platform loyalty during a platform&apos;s decline starts to feel religious. You&apos;re maintaining faith in something when the material conditions no longer support that faith. You&apos;re posting into a feed that&apos;s getting quieter. You&apos;re engaging with a community that&apos;s growing smaller. The people who remain on a platform in decline (let&apos;s be blunt about this) are, by definition, the people who didn&apos;t leave, and that group is filtered for stubbornness, ideological commitment, sunk-cost fallacy, or some combination of all three.</p><p>I know this pattern. We&apos;ve all lived through it. LiveJournal. Google+. Vine. 
Tumblr&apos;s long twilight after the porn ban. Each one had its diaspora moment, the point where the population crossed some invisible threshold and the network effects reversed. Instead of each new user making the platform more valuable, each departing user made it less valuable, and the departure curve steepened. Robert Metcalfe&apos;s law, which tells us a network&apos;s value scales with the square of its users, works in both directions. The math is merciless on the way down.</p><p>You can map this against Albert Hirschman&apos;s framework from Exit, Voice, and Loyalty. When an organization declines, members can exit (leave), exercise voice (complain and try to fix things), or remain loyal (stay and hope). The internet has made exit nearly frictionless (you can sign up for Bluesky in ninety seconds) and voice nearly useless, because platforms at scale have no meaningful feedback mechanism between users and decision-makers. What&apos;s left is loyalty, and loyalty without either exit costs or voice mechanisms is inertia.</p><p>In Italo Calvino&apos;s Invisible Cities, Marco Polo describes a city called Fedora. In the city&apos;s museum, there are glass globes containing miniature models of the city, each one representing a version of Fedora that was imagined but never built, all the possible Fedoras that could have existed but didn&apos;t. The citizens spend their time gazing at these alternatives, at the roads not taken, the architectures never constructed. 
Farcaster sometimes feels like one of Calvino&apos;s globes: a beautiful model of what decentralized social could have been, preserved in amber, admired by a shrinking group of people who remember what it was supposed to become.</p><h2 id="why-i-havent-left">Why I haven&apos;t left</h2><p>...And yet?</p><p>And yet.</p><p>I still haven&apos;t left.</p><p>I wish I had a clean, satisfying reason; something about decentralization principles or the irreducible value of owning your own social graph.</p><p>And those things do matter. But the real // honest answer is messier.</p><p>Partly it&apos;s that the alternatives are all terrible in their own specific ways. X under Musk has become, as Vitalik Buterin put it, &quot;a death star laser for coordinated hate sessions.&quot; Bluesky absorbed the X refugees and immediately began replicating the dynamics that made X miserable. Threads is Instagram&apos;s vestigial social limb. Mastodon remains Mastodon, which is to say: technically impressive and culturally impenetrable, governed by norms that make posting feel like filing a planning application with the local council.</p><p>Partly it&apos;s that Farcaster, even in its diminished state, retains something I haven&apos;t found elsewhere. The community that remains is small, but it&apos;s weighted toward people who build things and people who think carefully about what they&apos;re building. The feed isn&apos;t optimized for engagement, nobody&apos;s trying to go viral, and the conversations that happen there have a texture I associate with early internet forums (in a good way.)</p><p>And partly it&apos;s that leaving would feel like conceding a point I&apos;m not yet ready to concede.</p><h2 id="cryptos-broken-promise">Crypto&apos;s broken promise</h2><p>The irony of crypto&apos;s 2025 collapse is that the technology worked. The Ethereum network processes transactions reliably. Layer 2 solutions have made fees manageable. Smart contracts execute as written. 
The decentralized exchange infrastructure handles billions in volume. The pipes do what pipes are supposed to do. What failed was the civilization we were supposed to build on top of them. What failed was...well, us.</p><p>The crypto pitch I actually gave a shit about (predating the NFT boom and the memecoin casino and the $75 presidential tokens) was infrastructure - building systems that couldn&apos;t be captured by any single actor, where the rules were encoded in mathematics rather than terms of service, and where your relationship to a platform couldn&apos;t reasonably be compared to serfdom.</p><p>That pitch had its origins in the cypherpunks of the 1990s, from Timothy May&apos;s &quot;Crypto Anarchist Manifesto&quot; and Eric Hughes&apos;s &quot;A Cypherpunk&apos;s Manifesto,&quot; documents that imagined cryptography as a tool for individual sovereignty in an age of institutional surveillance. The cypherpunks weren&apos;t utopians, exactly. They were pragmatists who understood that privacy and autonomy wouldn&apos;t be granted by institutions, that they&apos;d have to be built, technically, from the ground up.</p><p>Incentives pulled that vision apart. The same cryptographic tools that could enable sovereign identity and censorship-resistant communication enabled speculation at speed and scale. And speculation is both more &quot;fun&quot; than infrastructure and a good deal more viral. It certainly generates better fees, which is why pump.fun, the platform that enabled the creation of thousands of doomed memecoins, remained one of crypto&apos;s most profitable companies throughout 2025 even as the tokens it birthed collectively lost billions in value.</p><p>When a technology designed to resist capture becomes the basis for financial instruments, the financial instruments capture the technology. The tail wags the dog. The protocol exists to serve the token, not the other way around. 
And the people building on the protocol start optimizing for token price rather than for the thing the protocol was supposed to enable.</p><p>Farcaster wasn&apos;t immune to this gravitational pull. The acquisition of Clanker, the AI token launchpad, in October 2025 signaled a shift in orientation. By the time Romero announced the wallet pivot in December, the trajectory was clear: the social network would become the social layer of a financial product, which is a very different thing than being a social network that happens to use crypto rails. You can respect the pragmatism of that decision (Romero and his team were responding to real data about what users actually wanted) and still mourn the original vision.</p><h2 id="a-working-hypothesis">A working hypothesis</h2><p>I cared about Farcaster for the community that decentralization attracted. But decentralization is a means, and means are only as good as the ends they serve. The protocol is the plumbing, and plumbing matters, but nobody moves into a house because the pipes are well-laid.</p><p>What Farcaster offered, at its best, was a social environment shaped by a belief in decentralization, a community that self-selected for people who cared about how their tools were built and who controlled them. The protocol was an attractor for a certain kind of person, and that kind of person created a certain kind of conversation, and that conversation was the actual product, regardless of what the cap table said.</p><p>This is the distinction: decentralization as architecture and decentralization as culture. The architecture has shifted; the founders have moved on; the wallet pivot reoriented everything toward trading.</p><p>But the protocol remains open, and the Neynar team has been embedded in the Farcaster ecosystem from the beginning. They understand what they&apos;ve inherited. Whether they&apos;ll preserve it is an open question, but it&apos;s not a foregone conclusion. 
The culture, the sensibility, the community of people who were drawn to Farcaster because they wanted something different from the engagement-optimized hellscapes of mainstream social media, that&apos;s harder to kill. It migrates and reconstitutes. It finds new vessels.</p><p>In the early twentieth century, the Vienna Circle, a group of philosophers, mathematicians, and scientists, gathered regularly at the University of Vienna to work out the foundations of logical positivism. They believed that meaningful statements had to be empirically verifiable, that metaphysics was nonsense, that philosophy should be brought into line with the methods of science. When the Nazis rose to power, the Circle scattered. Its members fled to the United States, the United Kingdom, New Zealand.</p><p>The institutional vessel broke, but the ideas traveled with the people who carried them.</p><h2 id="why-im-still-posting">Why I&apos;m still posting</h2><p>So this is where I&apos;ve landed.</p><p>I&apos;m still on Farcaster because the people there are still interesting. I&apos;m still there because the conversations still have a quality I can&apos;t reliably find elsewhere. I&apos;m still there because even in its acquired, pivoted, wallet-focused state, the residual community maintains a standard of discourse that I value. And I&apos;m still there because, whatever the business metrics say, Dan and Varun succeeded at something that doesn&apos;t show up on a revenue chart: they attracted and concentrated a community of thoughtful, building-oriented people who care about the internet they&apos;re constructing. That&apos;s worth more than product-market fit, even if you can&apos;t put it on a pitch deck.</p><p>I&apos;ve grown deeply suspicious of the impulse to leave. 
Every few months, a new wave of platform migration sweeps through the internet, people fleeing X for Bluesky, fleeing Bluesky for Threads, fleeing Threads for Mastodon, fleeing whatever is currently on fire for whatever is currently promising not to be on fire. And these migrations are almost always driven by the same fantasy: that a new platform will fix the problem. The problem is the set of incentives that govern all platforms, the economic logic that turns every online space into either an engagement farm or a ghost town, and changing platforms without changing those incentives is like rearranging deck chairs on the Titanic, except the Titanic is the entire attention economy and the iceberg is the incompatibility between advertising revenue and human flourishing.</p><p>What does it actually mean to give a shit about a platform in 2026? I think it means loyalty to the conversations you&apos;re having and the people you&apos;re having them with. The platform is scaffolding. Scaffolding gets removed when the building is done or abandoned, or when someone decides the scaffolding itself is the product and starts charging rent for standing on it.</p><p>And if those conversations and those people happen to be on Farcaster right now, then that&apos;s where I&apos;ll be, until they&apos;re somewhere else, at which point I&apos;ll be somewhere else.</p><p>This is a less inspiring position than &quot;I believe in the decentralized web.&quot;</p><p>But at least it&apos;s honest.</p><p>Neynar might surprise us. Crypto might stop sucking. Farcaster may yet become something none of us predicted, something that Dan and Varun&apos;s original infrastructure enables even if it looks nothing like what they originally imagined.</p><p>The best thing about open protocols is, after all, that they can outlive the intentions of their creators.</p><p>I&apos;ve stopped waiting for a single platform to replace Twitter. 
The dream I&apos;ve settled on is smaller: pockets of genuine discourse, distributed across protocols and platforms and group chats and mailing lists, connected by people rather than by algorithms, sustained by care rather than by capital. That dream doesn&apos;t need a billion-dollar valuation. It doesn&apos;t need much in the way of product-market fit. It barely requires a protocol.</p><p>It does require, though, that at least a few people keep posting.</p> Weekend Happenings - Cool As Heck https://cool-as-heck.blog/weekend-happenings 2026-02-23T13:19:31.000Z <div>Friday night, my wife had book club here. Only a few of the ladies were able to make it. They brought their kids, most of whom are the same age as our youngest, 13, so they had a good time taking silly selfies and playing the Switch. </div> <div><br></div> <div>One of our twin daughters (18) was here and hung out with me which was a lot of fun. It's nice to know that our kids actually want to hang out with us. I guess we're not that lame after all. 😝</div> <div><br></div> <div>Saturday I spent most of the day with just me, myself, and I. The wife and kids had places to be and things to do. I got caught up on some personal chores and watched Pluribus. I only have a couple of episodes left. That is a fantastic show. </div> <div><br></div> <div>Sunday was spent walking around the mall at Tyson's Corner Center. I say walking around because not much shopping was actually done. The girls did end up buying a few things here and there but everything is so expensive and we've raised kids who are very price conscious. Mostly we just enjoyed spending time together and eating at the conveyor belt sushi place in the mall. 🍣 </div> <div><br></div> <div>Back to the grind today. Looks like it's going to be a busy week. I'm sad the Olympics are over. It's been giving me so much joy. 
I'm really happy for all of our athletes, not only for going over there and giving their best performances, but also for truthfully speaking their minds when asked about the current political situation in the States. Maybe the future is still worth saving.</div> Brainstorming search engine ranking introspection - James' Coffee Blog https://jamesg.blog/2026/02/23/search-engine-ranking-introspection/ 2026-02-23T12:57:26.000Z <p>Search is one of my favourite disciplines in computing. In 2024 I spent a lot of time working on a <a href="https://jamesg.blog/2024/09/20/search-query-lifecycle">NoSQL engine that I called JameSQL</a>. This tool now powers the search engine on my website.</p><p>Designing search engine ranking systems is tricky to say the least. When I use my blog search engine, I sometimes notice that the article for which I am looking does not show up at the top of the search results. Google set a high standard for search; when I type something into Google with a <code>site:</code> search, I can often find what I am looking for.</p><p>I am not yet ready to delve back into the world of search, but I did want to make a note of an idea I had today: I want my next search project to have tooling for ranking introspection. By this I mean I want to have tools that let me know <em>why</em> a particular article ranks above another.</p><p>At present, JameSQL only returns a single attribute, <code>_score</code>, which is computed using either TF-IDF or BM25, with any additional boosts you have specified (e.g. give h1s 3x more weight). 
I imagine having a value like <code>_score_answer</code> that would tell me how much weight each attribute used in ranking had, for example:</p><div class="highlight"><pre>[
  {"bm25_on_post": 100.01},
  {"score_after_h1_boost": 101.01},
  {"score_after_inlinks_added": 109.01},
  ...
]
</pre></div> <p>This would be an ordered list that specifies what calculation has been made, followed by the score at that point in time. This could then be used to calculate how many points each ranking factor added onto the final score, by taking the difference between scores after each weight is applied.</p><p>This information would help me answer the question “why is this post ranking in this place for this query?” much more effectively than right now, by letting me see exactly how each calculation and ranking factor affects the final search engine ranking.</p><p>I started building a tool that lets me interactively experiment with different algorithms (<a href="https://playground.jamesg.blog/screenshots/search_algorithm.png">see image of the dashboard</a>), which was useful. I think I would like to revisit that dashboard to make it more useful if/when I work on a search project in the future.</p><p>Outside of the scope of this particular, developer-focused context, I generally want to use software that gives me a clear idea as to why I am seeing what I am seeing. 
As a user, I shouldn’t be left thinking “why did this show up?” With many opaque recommendation systems used on the web today, I am often left feeling exactly like that: “why did this show up?” This makes it a lot harder for me to understand, and therefore trust, a system.</p>
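The score-trace idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the real JameSQL API: the `trace_score` and `contributions` helpers, and the stage labels, are invented for the example. The point is simply that if each ranking stage records the cumulative score, per-factor contributions fall out as differences between consecutive trace entries.

```python
# Hypothetical sketch of ranking introspection; not the real JameSQL API.
# Each ranking stage records the cumulative score, so the points each
# factor contributed are the differences between consecutive entries.

def trace_score(base_score, stages):
    """Apply each (label, fn) ranking stage in order, recording a trace."""
    trace = [{"base_score": base_score}]
    score = base_score
    for label, fn in stages:
        score = fn(score)
        trace.append({label: score})
    return score, trace

def contributions(trace):
    """Points each stage added, from consecutive differences in the trace."""
    labels = [next(iter(entry)) for entry in trace]
    values = [next(iter(entry.values())) for entry in trace]
    return {
        labels[i]: round(values[i] - values[i - 1], 2)
        for i in range(1, len(values))
    }

# Example: a BM25 base score, then an h1 boost and an inlink bonus
# (the boost and bonus values here are made up for illustration).
score, trace = trace_score(100.01, [
    ("score_after_h1_boost", lambda s: s * 1.01),
    ("score_after_inlinks_added", lambda s: s + 8.0),
])
```

A debugging UI could then render `contributions(trace)` next to each result, answering "why is this post ranking here?" factor by factor.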