<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>bret.io</title>
  <id>https://bret.io/feed.xml</id>
  <updated>2026-03-27T17:00:20.676Z</updated>
  <link rel="self" type="application/atom+xml" href="https://bret.io/feed.xml"/>
  <link rel="alternate" type="application/json" href="https://bret.io/feed.json"/>
  <link rel="alternate" type="text/html" href="https://bret.io"/>
  <author>
    <name>Bret Comnes</name>
    <uri>https://bret.io</uri>
  </author>
  <generator uri="https://github.com/bcomnes/jsonfeed-to-atom#readme" version="1.2.5">jsonfeed-to-atom</generator>
  <rights>© 2026 Bret Comnes</rights>
  <subtitle>A running log of announcements, projects and accomplishments.</subtitle>
  <entry>
    <id>https://bret.io/blog/2026/buy-my-graphics-card/#2026-03-27T17:00:20.676Z</id>
    <title>Buy my graphics card</title>
    <updated>2026-03-27T17:00:20.676Z</updated>
    <published>2026-03-27T17:00:20.676Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p>I’m selling my graphics card from my <a href="../../2025/you-can-just-build-a-steam-machine/">You can just build a Steam Machine</a> blog post.</p>
<p><a href="https://www.ebay.com/itm/298164483067">GIGABYTE Radeon RX 7700 XT GAMING OC 12GB GDDR6 Graphics Card</a></p>
<p>I ended up upgrading to a 9070 XT for better 4K performance on the large TV. I’ll post more details later, but in the meantime, you can buy the card on eBay!</p>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2026/buy-my-graphics-card/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2026/simple-tanstack-query-in-preact/#2026-02-07T20:35:20.001Z</id>
    <title>Simple TanStack Query with Preact</title>
    <updated>2026-02-07T20:35:20.001Z</updated>
    <published>2026-02-07T20:35:20.001Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p><strong>UPDATE</strong></p>
<p>About 3 days after writing this, TanStack <a href="https://github.com/TanStack/query/pull/9935">released a native Preact TanStack Query adapter</a>! Just use that and ignore the rest. Thanks to everyone involved in that.</p>
<ul>
<li>(<a href="https://www.npmjs.com/package/@tanstack/preact-query"><code>@tanstack/preact-query</code></a>)</li>
</ul>
<hr>
<p><img src="./img/preact-tanstack.jpg" alt=""></p>
<p>Here is a simple approach to getting <a href="https://tanstack.com/query/latest"><code>@tanstack/react-query</code></a> working in a <a href="https://preactjs.com/guide/v10/differences-to-react#features-exclusive-to-preactcompat">preact</a> project.
I’m certain I’m not the only person to arrive at this, but I also didn’t manage to find anyone suggesting this under the noise of your usual bundlerslop tutorials.</p>
<p>I’m intentionally trying to write this at more of a beginner’s level, so if anything is confusing, feel free to ask questions.</p>
<p><a href="https://www.npmjs.com/package/@tanstack/react-query"><code>@tanstack/react-query</code></a> imports <a href="https://www.npmjs.com/package/react"><code>react</code></a> internally and declares it as a peerDependency (meaning the two need to be installed together in your project).
In a preact project, <code>react</code> generally shouldn’t be installed, so it will either be missing, or auto-installed alongside <code>@tanstack/react-query</code> by newer versions of <code>npm</code> (we don’t want this!).</p>
<h2 id="package-alias-to-the-rescue" tabindex="-1">Package alias to the rescue</h2>
<p>So the simplest solution here is to declare <code>react</code> as a project <code>dependency</code>, next to <code>@tanstack/react-query</code>, and set its version to a <a href="https://docs.npmjs.com/cli/v11/using-npm/package-spec#aliases">“package alias”</a> pointing at <code>@preact/compat</code>:</p>
<pre><code class="hljs language-jsonc">// package.json
<span class="hljs-punctuation">{</span>
  <span class="hljs-attr">&quot;dependencies&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-punctuation">{</span>
    <span class="hljs-attr">&quot;@tanstack/react-query&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-string">&quot;^5.90.20&quot;</span><span class="hljs-punctuation">,</span>
    <span class="hljs-attr">&quot;preact&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-string">&quot;^10.27.0&quot;</span><span class="hljs-punctuation">,</span>
    // This swaps react for @preact/compat in node_modules!
    <span class="hljs-attr">&quot;react&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-string">&quot;npm:@preact/compat@^18.3.1&quot;</span>
  <span class="hljs-punctuation">}</span> 
<span class="hljs-punctuation">}</span>
</code></pre>
<p>This tells <code>npm</code>, <code>pnpm</code>, etc. to install <code>@preact/compat</code> inside the <code>react</code> directory in <code>node_modules</code>.
Anything resolving <code>react</code> out of <code>node_modules</code> will instead import <code>@preact/compat</code>. Simple!</p>
<p>The advantage of this approach is that ALL tools get the override, not just the frontend bundle.</p>
<p>If you do any kind of <a href="https://github.com/preactjs/preact-render-to-string"><code>preact-render-to-string</code></a> SSR, skeleton page generation, or lightweight component testing inside a Node.js process, things just work.
Code you run through a bundler, which also sits on top of <code>node_modules</code>, resolves the correct dependency without any additional bundler config.</p>
<p>You don’t need to alias <code>react-dom</code> unless something else is trying to import it. Typically, your DOM <code>render</code> function comes out of that package, but you are likely already using <code>preact</code> directly for this.</p>
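<p>If something in your tree does import <code>react-dom</code>, the same alias trick covers it; the standalone <code>@preact/compat</code> package is published so it can stand in for both. A sketch of what that would look like:</p>
<pre><code class="hljs language-jsonc">// package.json — only needed if a dependency imports react-dom directly
{
  "dependencies": {
    "preact": "^10.27.0",
    "react": "npm:@preact/compat@^18.3.1",
    "react-dom": "npm:@preact/compat@^18.3.1"
  }
}
</code></pre>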
<h2 id="tanstack-getting-started-(preact-edition)" tabindex="-1">TanStack Getting Started (Preact edition)</h2>
<p>With the alias in place, the TanStack getting-started example becomes:</p>
<pre><code class="hljs language-ts"><span class="hljs-comment">// client.ts</span>
<span class="hljs-keyword">import</span> {
  useQuery,
  useMutation,
  useQueryClient,
  <span class="hljs-title class_">QueryClient</span>,
  <span class="hljs-title class_">QueryClientProvider</span>,
} <span class="hljs-keyword">from</span> <span class="hljs-string">&#x27;@tanstack/react-query&#x27;</span>
<span class="hljs-keyword">import</span> { html } <span class="hljs-keyword">from</span> <span class="hljs-string">&#x27;htm/preact&#x27;</span>
<span class="hljs-keyword">import</span> { render } <span class="hljs-keyword">from</span> <span class="hljs-string">&#x27;preact&#x27;</span>
<span class="hljs-keyword">import</span> { getTodos, postTodo } <span class="hljs-keyword">from</span> <span class="hljs-string">&#x27;../my-api&#x27;</span>

<span class="hljs-comment">// Create a client</span>
<span class="hljs-keyword">const</span> queryClient = <span class="hljs-keyword">new</span> <span class="hljs-title class_">QueryClient</span>()

<span class="hljs-keyword">export</span> <span class="hljs-keyword">function</span> <span class="hljs-title function_">App</span>(<span class="hljs-params"></span>) {
  <span class="hljs-keyword">return</span> html`<span class="language-xml">
    &lt;</span><span class="hljs-subst">${QueryClientProvider}</span><span class="language-xml"> client=</span><span class="hljs-subst">${queryClient}</span><span class="language-xml">&gt;
      &lt;</span><span class="hljs-subst">${Todos}</span><span class="language-xml"> /&gt;
    &lt;//&gt;
  `</span>
}

<span class="hljs-keyword">function</span> <span class="hljs-title function_">Todos</span>(<span class="hljs-params"></span>) {
  <span class="hljs-comment">// Access the client</span>
  <span class="hljs-keyword">const</span> queryClient = <span class="hljs-title function_">useQueryClient</span>()

  <span class="hljs-comment">// Queries</span>
  <span class="hljs-keyword">const</span> query = <span class="hljs-title function_">useQuery</span>({ <span class="hljs-attr">queryKey</span>: [<span class="hljs-string">&#x27;todos&#x27;</span>], <span class="hljs-attr">queryFn</span>: getTodos })

  <span class="hljs-comment">// Mutations</span>
  <span class="hljs-keyword">const</span> mutation = <span class="hljs-title function_">useMutation</span>({
    <span class="hljs-attr">mutationFn</span>: postTodo,
    <span class="hljs-attr">onSuccess</span>: <span class="hljs-function">() =&gt;</span> {
      <span class="hljs-comment">// Invalidate and refetch</span>
      queryClient.<span class="hljs-title function_">invalidateQueries</span>({ <span class="hljs-attr">queryKey</span>: [<span class="hljs-string">&#x27;todos&#x27;</span>] })
    },
  })

  <span class="hljs-keyword">return</span> html`<span class="language-xml">
    <span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">ul</span>&gt;</span>
        </span><span class="hljs-subst">${query.data?.map((todo) =&gt; html`<span class="language-xml">
          <span class="hljs-tag">&lt;<span class="hljs-name">li</span> <span class="hljs-attr">key</span>=</span></span><span class="hljs-subst">${todo.id}</span><span class="language-xml"><span class="hljs-tag">&gt;</span></span><span class="hljs-subst">${todo.title}</span><span class="language-xml"><span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span>
        `</span>)}</span><span class="language-xml">
      <span class="hljs-tag">&lt;/<span class="hljs-name">ul</span>&gt;</span>

      <span class="hljs-tag">&lt;<span class="hljs-name">button</span>
        <span class="hljs-attr">onClick</span>=</span></span><span class="hljs-subst">${() =&gt; {
          mutation.mutate({
            id: <span class="hljs-built_in">Date</span>.now(),
            title: <span class="hljs-string">&#x27;Do Laundry&#x27;</span>,
          })
        }}</span><span class="language-xml"><span class="hljs-tag">
      &gt;</span>
        Add Todo
      <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
  `</span>
}

<span class="hljs-keyword">if</span> (<span class="hljs-keyword">typeof</span> <span class="hljs-variable language_">window</span> !== <span class="hljs-string">&#x27;undefined&#x27;</span>) {
  <span class="hljs-keyword">const</span> container = <span class="hljs-variable language_">document</span>.<span class="hljs-title function_">getElementById</span>(<span class="hljs-string">&#x27;root&#x27;</span>)
  <span class="hljs-keyword">if</span> (container) {
    <span class="hljs-title function_">render</span>(html`<span class="language-xml">&lt;</span><span class="hljs-subst">${App}</span><span class="language-xml"> /&gt;`</span>, container)
  }
}
</code></pre>
<p>If you wanted a separate Node.js entrypoint that ran <code>preact-render-to-string</code>, you could then do:</p>
<pre><code class="hljs language-ts"><span class="hljs-comment">// ssr.ts</span>
<span class="hljs-keyword">import</span> { html } <span class="hljs-keyword">from</span> <span class="hljs-string">&#x27;htm/preact&#x27;</span>
<span class="hljs-keyword">import</span> { <span class="hljs-title class_">App</span> } <span class="hljs-keyword">from</span> <span class="hljs-string">&#x27;./client.js&#x27;</span>
<span class="hljs-keyword">import</span> { render } <span class="hljs-keyword">from</span> <span class="hljs-string">&#x27;preact-render-to-string&#x27;</span>

<span class="hljs-keyword">const</span> out = <span class="hljs-title function_">render</span>(html`<span class="language-xml">&lt;</span><span class="hljs-subst">${App}</span><span class="language-xml"> /&gt;`</span>)
</code></pre>
<h2 id="where-this-might-not-work" tabindex="-1">Where this might not work</h2>
<p>If you import a wide array of React components from npm, and any of them erroneously declare a direct dependency on <code>react</code>, your package manager may install a second copy of <code>react</code>.
React hates having more than one copy of itself in your bundle, so if you find yourself in this situation,
you may need to introduce bundler config to solve it.
The more general package manager override directives can also work if you have transitive copies of <code>react</code> you want to hammer out; however, these are specific to the package manager you use (<a href="https://docs.npmjs.com/cli/v11/configuring-npm/package-json#overrides">npm</a>, <a href="https://pnpm.io/settings#overrides">pnpm</a>, etc.).</p>
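<p>As a sketch, an npm <code>overrides</code> block that forces every transitive <code>react</code> onto the compat alias might look like this (pnpm has an equivalent <code>overrides</code> setting with its own syntax):</p>
<pre><code class="hljs language-jsonc">// package.json (npm) — flatten all transitive react copies onto @preact/compat
{
  "overrides": {
    "react": "npm:@preact/compat@^18.3.1"
  }
}
</code></pre>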
<p>If you rely heavily on <a href="https://legacy.reactjs.org/docs/introducing-jsx.html"><code>jsx</code></a> but still hope to run your <code>.tsx</code>/<code>.jsx</code> directly through Node.js, you will also need to <a href="https://nodejs.org/api/module.html">find a solution</a> for that, or use something like <a href="https://github.com/developit/htm">htm</a> to avoid the problem altogether.</p>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2026/simple-tanstack-query-in-preact/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2025/you-can-just-build-a-steam-machine/#2025-05-08T02:49:10.637Z</id>
    <title>You can just build a Steam Machine</title>
    <updated>2025-05-08T02:49:10.637Z</updated>
    <published>2025-05-08T02:49:10.637Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p>I built a <a href="https://en.wikipedia.org/wiki/Steam_Machine_(computer)">“Steam Machine”</a> in 2024.
I’ve run it for 8 months, and I’m really happy with it!
I hope that after reading this, you’ll be encouraged to build one too!</p>
<figure class="borderless">
  <a
    class="imageSwipe"
    href="./img/steam-machine.webp"
    data-pswp-width="4032"
    data-pswp-height="3024"
    target="_blank"
  >
    <img
      loading="auto"
      src="./img/thumbs/steam-machine@1x.webp"
      srcset="./img/thumbs/steam-machine@2x.webp 2x"
      alt="Screenshot of steam machines"
    >
  </a>
  <figcaption>Why yes, that is a 65&quot;, HDR OLED Steam Deck in my living room.</figcaption>
</figure>
<p>I’m running the latest <a href="../its-time-to-install-steamos-3.7/">Steam OS 3.7</a> (the OS running on the <a href="https://en.wikipedia.org/wiki/Steam_Deck">Steam Deck</a>) on conventional AMD desktop hardware, running at equal or improved performance over Windows.</p>
<p>We’re here! You can just build a Steam Machine now, and get a console-like experience running PC games in your living room, on Linux, with great controllers, on your 4K OLED TV (and surround sound if you have it), running in parity with your Steam Deck.</p>
<h2 id="the-os-(steamos-3.7)" tabindex="-1">The OS (SteamOS 3.7)</h2>
<p>The OS to install is SteamOS 3.7.
It’s available in Valve’s “preview” update channel for the Steam Deck, but you can download the recovery image and try installing it on anything.</p>
<p>There isn’t much more to say about this, other than yes, the rumors that Valve is expanding hardware support are true. It works on a lot more devices now, including conventional AMD desktop hardware.</p>
<p>If you are interested in the details on installing not-quite-released software, read:</p>
<ul>
<li><a href="../its-time-to-install-steamos-3.7/">It’s time to install SteamOS 3.7</a></li>
</ul>
<p>Prior to updating to SteamOS 3.7, I ran <a href="https://github.com/SteamFork">SteamFork</a>, and before that, <a href="https://chimeraos.org/">Chimera OS</a>.
SteamFork (the project) has thrown in the towel, and Chimera OS remains a great option.
If SteamOS 3.7 isn’t quite working for your hardware yet, read about other similar projects that are helping bridge the compatibility gaps:</p>
<ul>
<li><a href="../bazzite-isnt-steamos">“Bazzite isn’t SteamOS (and that’s okay!)”</a></li>
</ul>
<h2 id="the-hardware" tabindex="-1">The Hardware</h2>
<p>I targeted a budget of $1500 for the core hardware. Final cost came to $1799.71.
Part lists go stale REALLY quickly, but if you want to build exactly what I have, here is the list.
I wrote up the build on <a href="https://pcpartpicker.com/b/h7QD4D">PCPartPicker</a> too, but hey—feel free to use my referral links if you found this useful:</p>
<ul>
<li>CPU: <a href="https://amzn.to/3RGxNPe">AMD Ryzen 7 7800X3D</a></li>
<li>Cooler: <a href="https://amzn.to/3GuMaDL">Noctua NH-L12S</a> (Run this CPU fan in this case unless you have a good reason not to)</li>
<li>Motherboard: <a href="https://amzn.to/3EJXueG">ASUS ROG Strix B650E-I</a></li>
<li>RAM: <a href="https://amzn.to/4lUsIR6">G.SKILL Flare X5 Series (AMD Expo) DDR5 32GB</a> (Use RAM matching tools)</li>
<li>NVMe: <a href="https://amzn.to/3GyccGa">WD_BLACK 2TB SN850X</a> (Avoid low-end Samsung Gen5s)</li>
<li>GPU: <a href="https://amzn.to/44Kq8XG">GIGABYTE Radeon RX 7700 XT</a> (Go better here if you increase on anything)</li>
<li>Case: <a href="https://amzn.to/42zzyUt">Fractal Design Ridge</a></li>
<li>PSU: <a href="https://amzn.to/3RFKsSl">CORSAIR SF750</a> (Avoid bulky, cheaper SFX-L PSUs in this case)</li>
<li>Case Fans: 4 × <a href="https://amzn.to/4iCUK0o">Noctua NF-A6x25</a></li>
<li>Top Fans: 3 × <a href="https://amzn.to/4jvN0i2">ARCTIC P8 Slim PWM</a></li>
<li>Fan Hub: <a href="https://amzn.to/4cT1A0R">Noctua NA-FH1, 8 Channel Fan Hub</a></li>
<li><a href="https://amzn.to/4lUtu0s">WiFi Antennas</a> (I don’t have Ethernet near my TV)</li>
<li><a href="https://amzn.to/3Hq8Pld">Left Angle IEC C14 to C13 Power Adapter</a> (Helpful to turn the power cable on the Ridge case)</li>
</ul>
<div class="figure-grid">
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/side-close.webp"
       data-pswp-width="3024"
       data-pswp-height="4032"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/side-close@1x.webp"
           srcset="./img/thumbs/side-close@2x.webp 2x"
           alt="side‑close">
    </a>
    <figcaption>The Fractal Ridge case is about the size of a deep PS5 and fits nicely behind TVs and on mantles.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/side-off.webp"
       data-pswp-width="3024"
       data-pswp-height="4032"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/side-off@1x.webp"
           srcset="./img/thumbs/side-off@2x.webp 2x"
           alt="side‑off">
    </a>
    <figcaption>Airflow around the case hasn't been an issue.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/side-on.webp"
       data-pswp-width="4032"
       data-pswp-height="3024"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/side-on@1x.webp"
           srcset="./img/thumbs/side-on@2x.webp 2x"
           alt="side‑on">
    </a>
    <figcaption>When running heat-producing games, I pull the TV off the wall to give more space between the panel and the heat exhaust.</figcaption>
  </figure>
</div>
<h3 id="the-controllers" tabindex="-1">The controllers</h3>
<p>Finding good controllers has been a challenge.
They need to be on par with the Steam Deck’s controls, but unfortunately, nothing on the market matches that.
Skipping a lot of nuance and details: the Steam Controller, the DualSense Edge, and an HTPC keyboard have covered all my needs.
Finding controllers that work well enough for PC games on the couch has been difficult, and I intend to write more about each one I’ve tried.</p>
<figure class="borderless">
  <a class="imageSwipe"
     href="./img/steam-deck-reference.webp"
     data-pswp-width="3520"
     data-pswp-height="2871"
     target="_blank">
    <img loading="auto"
         src="./img/thumbs/steam-deck-reference@1x.webp"
         srcset="./img/thumbs/steam-deck-reference@2x.webp 2x"
         alt="Photo of two Steam Decks, one facing up and one facing down.">
  </a>
  <figcaption>Steam Deck controls for reference. Why isn't there any PC controller on the market with these inputs?</figcaption>
</figure>
<p>It’s surprising, and hard to explain to people unfamiliar with the issue, just how important a development Gyro Aim is.
This video essay is required watching if you haven’t had hands-on time with gyro input:</p>
<figure class="borderless">
  <div class="video-container">
    <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/binPB4YbWmM?si=YZsQxKNbD9u3GwqV" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
  </div>
  <figcaption>People have been discussing the importance of motion control for over 6 years now!</figcaption>
</figure>
<p>The only controller that comes close to what is offered by the Steam Deck controls is the DualSense Edge (unfortunately!).
It’s a great controller, with Gyro, paddles, and a trackpad, and has high-end OEM quality, but it’s the most expensive controller on the market—and Bluetooth only (among other flaws).</p>
<p>Speaking of Bluetooth, I recommend getting an extended range dongle and ensuring your controller has line of sight to it.
Latency-sensitive controls like Gyro need a solid connection, and I found that without line of sight, signal drops off around 6 feet.
With line of sight, controllers work great—even at 10 feet.</p>
<p>The Steam Controller is nice to have.
Probably not worth paying scalper prices for it these days, but if you still have yours lying around, get that thing going again.
They work better than you remember and have a 2.4GHz dongle!</p>
<p>Any HTPC keyboard-mouse combo works.
You mainly use it for one-off tasks where the on-screen keyboard is too cumbersome, and it mostly lives in the closet.
I recommend the Logitech K400 Plus over the no-name Chinese Amazon ones, having tried both.</p>
<ul>
<li><a href="https://amzn.to/4lXTjgd">PlayStation DualSense Edge Wireless Controller</a></li>
<li><a href="https://amzn.to/42NIqWQ">PlayStation DualSense Charging Station</a></li>
<li><a href="https://amzn.to/4cYI6Ys">PlayStation DualSense Edge Stick Module</a> – Pick some up before they go dead-stock pricing.</li>
<li><a href="https://amzn.to/4333IOS">TP-Link USB Bluetooth Adapter for PC, Bluetooth 5.3 Long Range Receiver</a></li>
<li><a href="https://www.amazon.com/Logitech-Wireless-Keyboard-Touchpad-PC-connected/dp/B014EUQOGK">Logitech K400 Plus Wireless Touch TV Keyboard</a></li>
<li><a href="https://store.steampowered.com/app/353370/Steam_Controller/">Steam Controller</a> (RIP)</li>
</ul>
<div class="figure-grid">
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/inputs-front.webp"
       data-pswp-width="3024"
       data-pswp-height="4032"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/inputs-front@1x.webp"
           srcset="./img/thumbs/inputs-front@2x.webp 2x"
           alt="inputs‑front">
    </a>
    <figcaption>Steam Controller is good for mouse pointer games. DualSense Edge is good for FPS games. The Logitech keyboard is helpful for typing and noodling around in the console.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/controllers-back.webp"
       data-pswp-width="4032"
       data-pswp-height="3024"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/controllers-back@1x.webp"
           srcset="./img/thumbs/controllers-back@2x.webp 2x"
           alt="controllers‑back">
    </a>
    <figcaption>Controllers with Gyro and paddles are critical. Gyro gives you "mouse-like" input; paddles let you work sticks and buttons without letting off the analog sticks. It's this combo that allows for input that matches keyboard and mouse on your couch.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/dual-sense-steam-controller-cradle.webp"
       data-pswp-width="3024"
       data-pswp-height="4032"
       target="_blank">
      <img loading="auto" src="./img/thumbs/dual-sense-steam-controller-cradle@1x.webp" srcset="./img/thumbs/dual-sense-steam-controller-cradle@2x.webp 2x" alt="dual-sense-steam-controller-cradle">
    </a>
    <figcaption>The DualSense Edge runs out of battery after a few hours (thanks rumble), so keeping it charged is important. The Steam Controller happens to live nicely on the dock as well, though no charging.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/cradle-dongle.webp"
       data-pswp-width="3024"
       data-pswp-height="4032"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/cradle-dongle@1x.webp"
           srcset="./img/thumbs/cradle-dongle@2x.webp 2x"
           alt="cradle‑dongle">
    </a>
    <figcaption>Controllers with 2.4GHz dongles have better range and reliability than Bluetooth. Either way, line of sight is super important, so getting the dongles out from behind any obstructions is key. The Steam Controller included a dock for its dongle. Use it!</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/bt-close.webp"
       data-pswp-width="4032"
       data-pswp-height="3024"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/bt-close@1x.webp"
           srcset="./img/thumbs/bt-close@2x.webp 2x"
           alt="bt‑close">
    </a>
    <figcaption>Bluetooth has the worst range and is the slowest to connect. An extended Bluetooth antenna with line of sight has helped allow the DualSense Edge to work at 10 feet.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/bt-far.webp"
       data-pswp-width="4032"
       data-pswp-height="3024"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/bt-far@1x.webp"
           srcset="./img/thumbs/bt-far@2x.webp 2x"
           alt="bt‑far">
    </a>
    <figcaption>If your controller can "see" the receiver, everything works better. The perpendicular power cables probably aren't helping, though.</figcaption>
  </figure>
</div>
<h3 id="the-screen" tabindex="-1">The Screen</h3>
<figure class="borderless">
  <a class="imageSwipe" href="./img/home-day.webp" data-pswp-width="4032" data-pswp-height="3024" target="_blank">
    <img loading="auto" src="./img/thumbs/home-day@1x.webp" srcset="./img/thumbs/home-day@2x.webp 2x" alt="home-day">
  </a>
  <figcaption>Seeing games on large 4K HDR formats is something else.</figcaption>
</figure>
<p>Whatever TV you have will work.
Playing on a large 4K HDR OLED TV has been a real joy, and it’s easy to forget how many pixels are in these things.
Just don’t forget to put it into game mode.
I would also note—if you can, avoid Android TV, and definitely disconnect the smart features from any network connection.</p>
<h3 id="the-audio" tabindex="-1">The Audio</h3>
<p>I can’t run surround in my current living room, so in the meantime, I run AirPods Max—mainly because they have the best audio sharing feature on the Apple TV.
They work well with the Steam Machine though.</p>
<h2 id="build-notes" tabindex="-1">Build notes</h2>
<p>Building in the Fractal Ridge was super easy, but seeing how other builds tackled little issues in the small case was helpful.
Here are the builds I found most useful as a reference:</p>
<ul>
<li><a href="https://pcpartpicker.com/b/btTJ7P">Fractal Design Ridge full of Noctua fans feat. delidded 7800X3D and deshrouded RTX 4080</a></li>
<li><a href="https://pcpartpicker.com/b/fTcTwP">ITX laptop destroyer</a></li>
<li><a href="https://pcpartpicker.com/builds/#e=3907">All Fractal Ridge Completed Builds</a></li>
</ul>
<div class="figure-grid">
  <figure class="borderless">
    <a class="imageSwipe"
       href="./img/case-back-assembled.webp"
       data-pswp-width="3024"
       data-pswp-height="4032"
       target="_blank">
      <img loading="auto"
           src="./img/thumbs/case-back-assembled@1x.webp"
           srcset="./img/thumbs/case-back-assembled@2x.webp 2x"
           alt="case-back-assembled">
    </a>
    <figcaption>I ran the ITX power cable behind the motherboard, similar to other Ridge builds. Worked well!</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-back-io.webp" data-pswp-width="3024" data-pswp-height="4032" target="_blank">
      <img loading="auto" src="./img/thumbs/case-back-io@1x.webp" srcset="./img/thumbs/case-back-io@2x.webp 2x" alt="case-back-io">
    </a>
    <figcaption>Rear I/O with the stubby Wi-Fi antennas. I ended up getting larger antennas than these.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-bottom-assembled.webp" data-pswp-width="4032" data-pswp-height="3024" target="_blank">
      <img loading="auto" src="./img/thumbs/case-bottom-assembled@1x.webp" srcset="./img/thumbs/case-bottom-assembled@2x.webp 2x" alt="case-bottom-assembled">
    </a>
    <figcaption>With the stock GPU shroud, I could only fit 2 Noctua NF-A6x25 fans directly below the GPU chamber.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-bottom-audio-routing.webp" data-pswp-width="3024" data-pswp-height="4032" target="_blank">
      <img loading="auto" src="./img/thumbs/case-bottom-audio-routing@1x.webp" srcset="./img/thumbs/case-bottom-audio-routing@2x.webp 2x" alt="case-bottom-audio-routing">
    </a>
    <figcaption>The Noctua NH-L12S cooler orientation works great, and the audio cable routes along the side and bottom easily.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-bottom-center-routing.webp" data-pswp-width="3024" data-pswp-height="4032" target="_blank">
      <img loading="auto" src="./img/thumbs/case-bottom-center-routing@1x.webp" srcset="./img/thumbs/case-bottom-center-routing@2x.webp 2x" alt="case-bottom-center-routing">
    </a>
    <figcaption>Zip ties are helpful for routing the I/O and power bundle cables behind the PSU, keeping open space around the CPU cooler.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-bottom-fan-empty.webp" data-pswp-width="4032" data-pswp-height="3024" target="_blank">
      <img loading="auto" src="./img/thumbs/case-bottom-fan-empty@1x.webp" srcset="./img/thumbs/case-bottom-fan-empty@2x.webp 2x" alt="case-bottom-fan-empty">
    </a>
    <figcaption>By routing only the power cable below the CPU cooler, you have room for 2 additional Noctua NF-A6x25 fans to intake cool air below the CPU cooler intake.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-bottom-fan-shave.webp" data-pswp-width="4032" data-pswp-height="3024" target="_blank">
      <img loading="auto" src="./img/thumbs/case-bottom-fan-shave@1x.webp" srcset="./img/thumbs/case-bottom-fan-shave@2x.webp 2x" alt="case-bottom-fan-shave">
    </a>
    <figcaption>One of the fan tabs interfered with the power cable, so I used a Dremel to shave it down to reduce contact and pressure. Other people use 3D-printed extenders found on Etsy.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-bottom-psu-routing.webp" data-pswp-width="4032" data-pswp-height="3024" target="_blank">
      <img loading="auto" src="./img/thumbs/case-bottom-psu-routing@1x.webp" srcset="./img/thumbs/case-bottom-psu-routing@2x.webp 2x" alt="case-bottom-psu-routing">
    </a>
    <figcaption>Cable routing below the PSU. This is an odd place to exhaust the PSU—I considered reversing the PSU fan but decided against it for now.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-bottom-fans-assembled.webp" data-pswp-width="4032" data-pswp-height="3024" target="_blank">
      <img loading="auto" src="./img/thumbs/case-bottom-fans-assembled@1x.webp" srcset="./img/thumbs/case-bottom-fans-assembled@2x.webp 2x" alt="case-bottom-fans-assembled">
    </a>
    <figcaption>Bottom of the case with intake fans installed.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-front-assembled.webp" data-pswp-width="3024" data-pswp-height="4032" target="_blank">
      <img loading="auto" src="./img/thumbs/case-front-assembled@1x.webp" srcset="./img/thumbs/case-front-assembled@2x.webp 2x" alt="case-front-assembled">
    </a>
    <figcaption>The front side of the case with the GPU installed.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/case-top-fans.webp" data-pswp-width="3024" data-pswp-height="4032" target="_blank">
      <img loading="auto" src="./img/thumbs/case-top-fans@1x.webp" srcset="./img/thumbs/case-top-fans@2x.webp 2x" alt="case-top-fans">
    </a>
    <figcaption>The ARCTIC P8 Slim PWM fans fit above a full-size GPU and stock shroud, despite what the Ridge manual says.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/mobo-assembled-side-1.webp" data-pswp-width="4032" data-pswp-height="3024" target="_blank">
      <img loading="auto" src="./img/thumbs/mobo-assembled-side-1@1x.webp" srcset="./img/thumbs/mobo-assembled-side-1@2x.webp 2x" alt="mobo-assembled-side-1">
    </a>
    <figcaption>The assembled mini ITX motherboard and comically large CPU cooler.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/mobo-assembled-side-2.webp" data-pswp-width="4032" data-pswp-height="3024" target="_blank">
      <img loading="auto" src="./img/thumbs/mobo-assembled-side-2@1x.webp" srcset="./img/thumbs/mobo-assembled-side-2@2x.webp 2x" alt="mobo-assembled-side-2">
    </a>
    <figcaption>The top-down view below the CPU cooler.</figcaption>
  </figure>
  <figure class="borderless">
    <a class="imageSwipe" href="./img/mobo-assembled-top.webp" data-pswp-width="3024" data-pswp-height="4032" target="_blank">
      <img loading="auto" src="./img/thumbs/mobo-assembled-top@1x.webp" srcset="./img/thumbs/mobo-assembled-top@2x.webp 2x" alt="mobo-assembled-top">
    </a>
    <figcaption>The top view of the motherboard and cooler. The thing really is the size of the entire motherboard.</figcaption>
  </figure>
</div>
<h2 id="other-notes" tabindex="-1">Other notes</h2>
<p>If you are familiar with the Steam Deck, you will feel comfortable with a Steam Machine.
They are basically the same!</p>
<p>The following are helpful resources when trying to run games on it:</p>
<ul>
<li><a href="https://www.protondb.com/">ProtonDB</a> – Compatibility reports for games running on Linux in Proton. Any Deck Verified game will also run great.</li>
<li><a href="https://areweanticheatyet.com/">Are We Anti-Cheat Yet?</a> – If you must play multiplayer with cheating competitors who require kernel modules to stop them, you will run into some compatibility issues versus Windows. This site tracks those.</li>
<li><a href="https://www.gamingonlinux.com/">GamingOnLinux</a> – This has consistently been the best news site focusing on gaming on Linux.
<ul>
<li><a href="https://www.gamingonlinux.com/anticheat/">GoL AntiCheat Tracker</a> - GoL also has it’s own excellet data on Linux anti-cheat stats.</li>
</ul>
</li>
<li><a href="https://steamdb.info/">SteamDB</a> – General player and game price tracking on Steam.</li>
<li><a href="https://www.reddit.com/r/GyroGaming/">/r/GyroGaming/</a> – The GyroGaming subreddit can often be helpful when figuring out gyro on games with poor mouse and controller inputs.</li>
<li><a href="https://www.reddit.com/r/SteamDeck/">/r/SteamDeck/</a> - The SteamDeck subreddit is also a decent source of news fore SteamOS related info.</li>
</ul>
<p>If you end up building a Steam Machine or something similar, please share your results!
If you want to chat or ask more questions about the process, you can <a href="https://discord.gg/5KmBn5ttCa">join the former SteamFork Discord</a>, where there are still a bunch of users of SteamFork migrating to SteamOS and facing similar issues and questions.</p>
<h2 id="updates" tabindex="-1">Updates</h2>
<ul>
<li>Josh Nichols <a href="https://blog.joshnichols.com/post/ditch-the-console-build-a-steam-machine/">wrote up</a> his Steam Machine build with great photos and information. Check it out!</li>
</ul>
<h3 id="syndications" tabindex="-1">Syndications</h3>
<ul>
<li><a href="https://www.reddit.com/r/SteamDeck/comments/1khhamn/you_can_just_build_a_steam_machine_now/" rel="syndication" class="u-syndication">/r/SteamDeck</a></li>
<li><a href="https://x.com/bcomnes/status/1920686026185408584" rel="syndication" class="u-syndication">X</a></li>
<li><a href="https://fosstodon.org/@bcomnes/114475809790803818" rel="syndication" class="u-syndication">fosstodon</a></li>
<li><a href="https://bsky.app/profile/bret.io/post/3lopj5jivd22a" rel="syndication" class="u-syndication">https://bsky.app/profile/bret.io/post/3lopj5jivd22a</a></li>
</ul>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2025/you-can-just-build-a-steam-machine/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2025/its-time-to-install-steamos-3.7/#2025-05-08T02:48:57.527Z</id>
    <title>It's Time to Install SteamOS 3.7</title>
    <updated>2025-05-08T02:48:57.527Z</updated>
    <published>2025-05-08T02:48:57.527Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p>I recently <a href="../you-can-just-build-a-steam-machine/">installed SteamOS 3.7 on my commodity AMD HTPC</a>, and it’s working great.
Here’s my procedure and notes.</p>
<h2 id="installing-steamos-3.7" tabindex="-1">Installing SteamOS 3.7</h2>
<p>(This will go stale quickly, but as of this writing, it works.)</p>
<ul>
<li><strong>EDIT 3</strong>: A <a href="https://store.steampowered.com/steamos">New SteamOS Home Page</a> has been released echoing most of the information written here.</li>
<li><strong>EDIT 2</strong>: <a href="https://store.steampowered.com/news/app/1675200/view/529845510803031952">3.7 Has been released to the stable channel</a>. Running updates on the stable channel will get you to a working state.</li>
<li><strong>EDIT 1</strong>: As of writing this, <a href="https://store.steampowered.com/news/app/1675200/view/529845510803031952">3.7 has been promoted to <strong>BETA</strong></a>, so check the system version you get on the hop to Beta. If you get to 3.7, you are good to go!</li>
</ul>
<h3 id="prep-tips" tabindex="-1">Prep Tips</h3>
<ul>
<li>The recovery image installs SteamOS <s>3.5</s> 3.7 or later. Discrete GPUs just work now.</li>
<li>SteamOS 3.5 has limited hardware support. On desktop systems, enable integrated graphics and plug your monitor directly into the integrated HDMI or DisplayPort during installation.</li>
<li>Some discrete GPUs I tested failed to boot SteamOS 3.5. I don’t recommend trying them. Use iGPUs for the initial install, then switch back to a discrete GPU once you’re on 3.7.</li>
</ul>
<h3 id="procedure" tabindex="-1">Procedure</h3>
<ul>
<li>Download the Steam Deck Recovery Image from: <a href="https://help.steampowered.com/en/faqs/view/1b71-edf2-eb6d-2bb3">https://help.steampowered.com/en/faqs/view/1b71-edf2-eb6d-2bb3</a></li>
<li>Flash the recovery image to a USB drive.</li>
<li>Boot the USB drive on your “Steam Machine” and “re-flash” your “Deck”—i.e., whatever hardware manages to boot the recovery image.</li>
<li>It flashes the primary NVMe drive, <strong>deleting everything on it</strong>. Be careful!</li>
<li>After flashing, reboot, remove the USB drive, and sign in.</li>
<li>In system settings, run updates on the <strong>stable</strong> channel to update to SteamOS 3.7 or later. You can also test the Beta and Preview channels if you wish.</li>
<li>Congrats! You’re now running SteamOS 3.7.</li>
<li>Head back into your BIOS, disable integrated graphics, and test your discrete GPU. It will probably work now!</li>
<li>SteamOS 3.7 should now be running on your hardware of choice!</li>
</ul>
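The download-and-flash steps above can be sketched from a Linux shell. This is a hedged sketch, not Valve’s official procedure: the image filename and the USB device path are placeholder assumptions you must replace, and writing to the wrong device with <code>dd</code> will destroy its contents.

```shell
# Flash a (hypothetical) recovery image to a USB drive.
# IMAGE and USB_DEV are placeholders: verify your device with `lsblk` first!
IMAGE="steamdeck-recovery.img.bz2"  # hypothetical filename
USB_DEV="/dev/sdX"                  # REPLACE with your actual USB drive

if [ "$USB_DEV" = "/dev/sdX" ]; then
  # Refuse to run while the placeholder device is still set.
  echo "Set USB_DEV to your actual USB device before running."
else
  # Decompress and write the image, syncing writes so it is safe to unplug.
  bzcat "$IMAGE" | sudo dd of="$USB_DEV" bs=4M status=progress oflag=sync
fi
```

Once the write finishes, boot the machine from the USB drive and follow the re-image prompts as described above.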
<h2 id="why-steamos-now%3F" tabindex="-1">Why SteamOS Now?</h2>
<p>It’s been rumored for a while that SteamOS—the <a href="https://archlinux.org">Arch</a>-based Linux distro powering the Steam Deck—is moving toward becoming a general-purpose Linux OS for any device.</p>
<p><img src="./img/linus.webp" alt=""></p>
<p>I tried installing SteamOS on conventional hardware a few months ago without much success:</p>
<ul>
<li>It wouldn’t boot without integrated graphics.</li>
<li>There was lots of Steam Deck jank—rotated console output, ugly boot logs, etc.</li>
<li>Performance was bad, and games didn’t launch correctly.</li>
</ul>
<p>But recent <a href="https://www.gamingonlinux.com/2025/05/steamos-3-7-5-preview-improves-lenovo-legion-go-s-support-and-brings-more-bug-fixes/">announcements</a> have focused on handhelds like the <a href="https://www.bestbuy.com/site/lenovo-legion-go-s-8-120hz-gaming-handheld-amd-ryzen-z1-extreme-steamos-32gb-with-1tb-ssd-nebula/6619188.p?skuId=6619188">Lenovo Legion Go S</a>. Still, it’s clear that the <strong>preview</strong> channel for SteamOS now supports conventional hardware much better than before.</p>
<p>From testing in the <a href="https://github.com/SteamFork">SteamFork</a> community, it’s clear that running “vanilla” SteamOS is now viable for a lot of hardware—especially AMD desktop builds.
You should definitely give it a shot.</p>
<h2 id="what-if-i-can%E2%80%99t-boot-the-repair-image%3F" tabindex="-1">What If I Can’t Boot the Repair Image?</h2>
<p>If the repair image won’t boot, you could try flashing one of the other SteamOS images directly to your boot disk, or experiment with one of the newer, unreleased repair images.
If you do, please share your notes on what worked or didn’t!</p>
<ul>
<li><a href="https://steamdeck-images.steamos.cloud/steamdeck/?C=M&amp;O=D">All SteamOS ISO Images</a></li>
<li><a href="https://help.steampowered.com/en/faqs/view/1b71-edf2-eb6d-2bb3">Steam Repair Image (Current)</a></li>
<li><a href="https://steamdeck-images.steamos.cloud/steamdeck/20250320.1000/?C=M&amp;O=D">Unstable Repair Image - steamdeck-repair-20250320.1000-3.8.0.img.zip</a></li>
</ul>
<h2 id="come-chat" tabindex="-1">Come Chat</h2>
<p>If you’re running SteamOS on non–Steam Deck hardware, join the <a href="https://discord.gg/5KmBn5ttCa">SteamFork (RIP) Discord</a> to chat about your setup and share notes!</p>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2025/its-time-to-install-steamos-3.7/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2025/bazzite-isnt-steamos/#2025-05-08T02:48:43.724Z</id>
    <title>Bazzite isn't SteamOS (and that's ok!)</title>
    <updated>2025-05-08T02:48:43.724Z</updated>
    <published>2025-05-08T02:48:43.724Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p>I’m thrilled to see Linux becoming a more viable platform for PC gaming, and it’s awesome to see so many new Linux users joining the scene.
Unlike commercial operating systems that pay people to clarify and simplify product details, open source exposes users to maximum complexity, clarification, and particularities (IT’S GNU/LINUX!!).
With a new generation of users, it’s helpful to know when apples are apples and oranges are oranges.</p>
<p>SteamOS, being a Linux distro, is no different.
There are quite a few things being called “SteamOS” these days in YouTube tutorials, so I figured I could help clear things up.</p>
<pre class="mermaid">
graph TD
  SteamOSDebian["🎮 SteamOS (Debian)"]
  SteamOSArch["🎮 SteamOS (Arch)"]

  SteamFork["🛠️ SteamFork"]
  HoloISO["🌈 HoloISO"]
  ChimeraOS["🔥 ChimeraOS"]
  Bazzite["🔧 Bazzite"]

  SteamOSDebian
  SteamOSArch <-->|derived from| SteamFork
  SteamOSArch <-->|derived from| HoloISO

  ChimeraOS <-.->|pulls from| SteamOSArch
  Bazzite <-.->|pulls from| SteamOSArch
</pre>
<figure class="">
  <a href="https://www.youtube.com/watch?v=pVI_smLgTY0">
    <img loading="auto"
         src="./img/pew.webp"
         alt="thumbnail of PewDiePie's Linux video">
  </a>
  <figcaption>Is the PewDiePie-switched-to-Linux video not a clear enough sign we are entering a transitional age of PC gaming on Linux?</figcaption>
</figure>
<h2 id="valve%E2%80%99s-recovery-image-is-%E2%80%9Csteamos%E2%80%9D" tabindex="-1">Valve’s Recovery Image is “SteamOS”</h2>
<p>SteamOS has been around for a while—so much so that there are two distinct versions of it.</p>
<ul>
<li>Valve’s original Debian-based “SteamOS” that shipped with the <a href="https://en.wikipedia.org/wiki/Steam_Machine_(computer)">Alienware Steam Machines</a>.</li>
</ul>
<figure class="borderless">
  <a href="https://store.steampowered.com/steamos">
    <img loading="auto"
         src="./img/steam-os-old.webp"
         alt="The ancient Debian-based SteamOS">
  </a>
  <figcaption>SteamOS is here? No! This is the old, ancient SteamOS. Don't use this one!</figcaption>
</figure>
<ul>
<li>The Arch-based <a href="https://help.steampowered.com/en/faqs/view/1b71-edf2-eb6d-2bb3">“Steam Deck Recovery image”</a> that ships Gamescope and Valve’s latest 10-foot UI.</li>
</ul>
<figure class="borderless">
  <a href="https://help.steampowered.com/en/faqs/view/1b71-edf2-eb6d-2bb3">
    <img loading="auto"
         src="./img/steam-deck-recovery-new.webp"
         alt="The Steam Deck recovery image instructions page">
  </a>
  <figcaption>Not very intuitive, but this is the SteamOS you want!</figcaption>
</figure>
<p>The modern Deck Recovery Image isn’t quite ready for prime time, but it’s totally usable on a lot of hardware (in the preview 3.7 branch).</p>
<p><strong>If you aren’t running one of these, you aren’t “running SteamOS”!</strong></p>
<h2 id="steamos-derivatives" tabindex="-1">SteamOS Derivatives</h2>
<p>There have been two known SteamOS derivative distributions: <a href="https://github.com/HoloISO/releases"><strong>HoloISO</strong></a> and <a href="https://github.com/SteamFork"><strong>SteamFork</strong></a>.</p>
<p>A derivative distribution directly takes Valve’s SteamOS and other sources as a base and builds from there.
This guarantees they track SteamOS as closely as possible while layering on their own fixes or hardware compatibility support.</p>
<p><strong>Both HoloISO and SteamFork are now discontinued</strong> (though you can still run them):</p>
<ul>
<li>
<p>HoloISO was run out of a Russian Telegram channel and had questionable build/distribution practices.
It was hard to verify that it wasn’t distributing malware.
While I don’t think it ever did, the project wasn’t transparent or easy to contribute to, and the maintainer was a college student who would disappear regularly.</p>
</li>
<li>
<p>SteamFork was a US-based project that was fully open source on GitHub.
It had a reproducible toolchain, real build infrastructure, and was run much more professionally.
It produced a very high-quality SteamOS derivative that supported both desktop hardware and a wide variety of handhelds.</p>
</li>
</ul>
<p>In the end, I agree with the SteamFork maintainers: the Chinese handheld vendors should be the ones footing the bill for SteamOS support—not the community.
The decision to shut down, while disappointing, makes complete sense.
SteamOS 3.7 is basically here—let’s use it.</p>
<p>Credit to the SteamFork team (<a href="https://github.com/uejji">@uejji</a> and <a href="https://github.com/fewtarius">@fewtarius</a>) for their beautiful build pipeline and for sharing their discoveries around SteamOS 3.7.</p>
<figure class="borderless">
  <a href="https://github.com/SteamFork">
    <img loading="auto"
         src="./img/steamfork.webp"
         alt="Screenshot of SteamFork">
  </a>
  <figcaption>We hardly knew you, SteamFork. Thanks for all the work you put in!</figcaption>
</figure>
<p><strong>If you are running a SteamOS derivative, you’re running something <em>nearly</em> identical to SteamOS, differing only to the extent the patches change it.</strong></p>
<h2 id="gamescope-distros" tabindex="-1">Gamescope Distros</h2>
<p>The two other notable distros I see frequently labeled as “SteamOS” in the ecosystem are: <a href="https://chimeraos.org">ChimeraOS</a> and <a href="https://bazzite.gg">Bazzite</a>.</p>
<p><strong>ChimeraOS and Bazzite are not SteamOS.</strong></p>
<p>They’re both excellent choices if SteamOS doesn’t run on your hardware, or if you prefer their broader project goals.
They boot into <a href="https://github.com/ValveSoftware/gamescope"><strong>Gamescope</strong></a> and use <a href="https://www.protondb.com">Proton</a> for game compatibility.</p>
<blockquote>
<p>Gamescope: the micro-compositor formerly known as steamcompmgr</p>
</blockquote>
<p>Gamescope is the layer that the Steam 10-foot UI and games run inside on Linux.
It handles resolution scaling, performance overlays, FSR upscaling, and other key features made popular by the Steam Deck.</p>
<p><strong>We’ll call non-SteamOS-derivative distros that boot into Gamescope “Gamescope distros.”</strong></p>
<h3 id="chimeraos" tabindex="-1">ChimeraOS</h3>
<figure class="borderless">
  <a href="http://chimeraos.org">
    <img loading="auto"
         src="./img/chimera.webp"
         alt="Screenshot of ChimeraOS's website">
  </a>
  <figcaption>ChimeraOS: the best logo of them all.</figcaption>
</figure>
<p>Chimera is a great project that predates modern SteamOS.
It’s an immutable Linux distribution built to provide a console-like experience across many devices.</p>
<p>It’s Arch-based like SteamOS, but it uses GNOME for desktop mode instead of KDE.
It also runs its own update schedule, separate from Valve or the broader SteamOS ecosystem.
It ships its own 10-foot UI, emulation support, and web-based management tools.</p>
<p>If SteamOS 3.7 or SteamFork doesn’t work for your device—or if you prefer Chimera’s vision and community—this is a solid choice.</p>
<h3 id="bazzite" tabindex="-1">Bazzite</h3>
<figure class="borderless">
  <a href="https://bazzite.gg">
    <img loading="auto"
         src="./img/bazzite.webp"
         alt="Screenshot of Bazzite's website">
  </a>
  <figcaption>Bazzite has the most "enthusiastic" community, in my experience.</figcaption>
</figure>
<p>I haven’t used Bazzite personally, but like Chimera, it’s not a SteamOS derivative.
It boots into Gamescope and feels a lot like SteamOS.</p>
<p>Bazzite is based on Fedora and makes heavy use of process containers, which makes it even more distinct from SteamOS.</p>
<p>It’s very popular on Reddit, has strong community support, and offers good hardware compatibility.
However, because it’s Fedora-based, you’ll encounter many differences when tinkering under the hood.
If you like Arch Linux (or want to learn more about it), you may find Fedora-specific quirks frustrating.</p>
<p><strong>If you’re using a Gamescope distro, you are not running SteamOS.</strong>
You’re running a separate Linux distro with its own lineage that borrows components from SteamOS to work similarly.</p>
<h2 id="%E2%80%9Cthis-doesn%E2%80%99t-matter%E2%80%94the-outcome-is-the-same!%E2%80%9D" tabindex="-1">“This doesn’t matter—the outcome is the same!”</h2>
<p>Yes and no.
While the direction is similar, details matter in open source.
Different starting points create different outcomes, even if the UX looks the same.
I’m not here to say x is better than y—just that x is not y.
Saying you run x when you actually run y is misleading or mistaken.</p>
<p>Welcome to open source!</p>
<p>The point here is to highlight the differences and help you understand when those details might matter.
In many cases, they won’t.</p>
<h2 id="%E2%80%9Cwhich-one-are-you-saying-i-should-run%3F%E2%80%9D" tabindex="-1">“Which one are you saying I should run?”</h2>
<p>Try them all!
See what works best for your goals and your hardware.</p>
<h2 id="conclusion" tabindex="-1">Conclusion</h2>
<p>If you enjoyed this post, you might also like:</p>
<ul>
<li><a href="../you-can-just-build-a-steam-machine/">You can just build a Steam Machine</a> — A look at building a “Steam Machine” with SteamOS</li>
<li><a href="../its-time-to-install-steamos-3.7/">It’s time to install SteamOS 3.7</a> — My notes on installing SteamOS on non–Steam Deck hardware</li>
</ul>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2025/bazzite-isnt-steamos/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2025/i-love-monorepos/#2025-03-09T21:53:14.740Z</id>
    <title>I Love Monorepos—Except When They Are Annoying</title>
    <updated>2025-03-09T21:53:14.740Z</updated>
    <published>2025-03-09T21:53:14.740Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p>I love monorepos, but monorepos can be annoying, especially in open source.
They make sense in some cases, but they come with a lot of baggage and patterns I’ve noticed over the years—so I need to write about them.</p>
<p>I’m primarily talking about JS-based “monorepos,” a.k.a. workspaces when used in open source packages, but the whole space is confused enough that I might stray a bit.</p>
<h2 id="historical-context" tabindex="-1">Historical Context</h2>
<p>JS monorepos (or “workspaces”) emerged with tools like <a href="https://github.com/lerna/lerna"><code>lerna</code></a>, later influencing similar features in <a href="https://docs.npmjs.com/cli/v11/using-npm/workspaces"><code>npm</code></a>, <a href="https://classic.yarnpkg.com/en/docs/workspaces#search"><code>yarn</code></a> and <a href="https://pnpm.io/npmrc#workspace-settings"><code>pnpm</code></a>. At their core, they allow developers to:</p>
<ul>
<li>Develop and publish multiple <code>npm</code> packages from a single <code>git</code> repository</li>
<li>Streamline dependency management with automatic linking and consolidated lockfiles</li>
<li>Allow for varying direct dependency versions in a single repo</li>
</ul>
<p>This approach gained popularity largely as a response to:</p>
<ul>
<li>The frustrating fragility of <code>npm link</code></li>
<li>React’s “unique” constraints that caused errors when linked across packages</li>
<li>The exponential growth of tooling complexity costs (Babel, Webpack, CSS-in-JS, TS)</li>
<li>The promise of O(1) tooling changes instead of O(n²) updates across multiple repositories</li>
</ul>
<figure>
  <img loading="auto" src="./img/og.jpg" alt="A picture of the tower of Babel">
  <figcaption>Babel is probably the most appropriately named project in open source history.</figcaption>
</figure>
<h2 id="what-i-love-about-monorepos" tabindex="-1">What I Love About Monorepos</h2>
<p>Monorepos have utility in some circumstances.</p>
<p>Monorepos are ideal for scenarios like this: a project with a series of APIs, a website, and background worker processes, where a team of developers works on the codebase together.
Each process has its own set of unique and shared dependencies, plus a set of common queries and types shared between two or more services.
The primary trade-off, of course, is that any change to shared code has to be reflected in all dependents at the time of introduction.</p>
<p>Outside of this context, it’s mostly just misery for dependents and contributors.</p>
<h2 id="all-the-ways-monorepos-(and-adjacent-hypertooling)-are-annoying" tabindex="-1">All the Ways Monorepos (and Adjacent Hypertooling) Are Annoying</h2>
<p>Most of the issues stem from “hypertooling”, a generally bigger issue in all development ecosystems, but they manifest at scale in the monorepo arrangement, so it’s a useful vehicle to point out these issues.</p>
<h3 id="%E2%80%9Clet-me-just-fix-this-little-bug%E2%80%9D" tabindex="-1">“Let Me Just Fix This Little Bug”</h3>
<p>You’re using a dependency in your project.
That dependency has a bug, and you need to fix it.
Node.js was designed with the intention that you could just open up <code>node_modules</code> and edit the code to generate patches.</p>
<p>With packages sourced from monorepos, this is not the case!</p>
<p>You open up the code in <code>node_modules</code> now, and it’s some franken-compile-to-es-1-ts-rollupviteparcel-webpack-babeldegook-lerna-pnpm-berry-workzone-playplace that has also been pre-minified for some reason. Also the sourcemaps and ESM type exports are broken for some reason.</p>
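The edit-in-<code>node_modules</code> workflow described above still works when the published package is plain, readable JS. Here is a minimal sketch of producing an upstreamable patch with <code>diff</code>; the <code>foo</code> package and its file contents are hypothetical stand-ins.

```shell
# Simulate a readable installed package plus a pristine copy of it.
mkdir -p demo/pristine demo/node_modules/foo
echo 'module.exports = () => "bug"' > demo/pristine/index.js
cp demo/pristine/index.js demo/node_modules/foo/index.js

# Fix the bug directly in node_modules, the way Node.js intended.
sed -i.bak 's/"bug"/"fixed"/' demo/node_modules/foo/index.js

# Diff the pristine copy against the edited install to get a patch to upstream.
# (diff exits 1 when files differ, so don't treat that as a failure.)
diff -u demo/pristine/index.js demo/node_modules/foo/index.js > demo/fix.patch || true
```

When the shipped code is minified, bundled output, none of this works: there is no readable file to edit, and no patch that maps back to the source.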
<h3 id="packages-published-from-monorepos-have-more-bugs" tabindex="-1">Packages Published from Monorepos Have More Bugs</h3>
<p>This is completely anecdotal but also completely true.
Packages published from monorepos have more defects, and finding and fixing the defects is more challenging for dependents.
I believe this is due to two factors:</p>
<ul>
<li>The development environment (the monorepo) varies more from the deployment environment (<code>node_modules/foo</code>) than it does in a single-repo, single-package project organization.</li>
<li>Developers who prioritize the monorepo DX over the consumption DX go out of their way to avoid working in the deployment environment, and therefore fail to test things in realistic deployments.</li>
</ul>
<p>These two factors, plus the inherent complexity of all the tools required to make monorepos work, lead to encountering more defects in monorepo-published packages.</p>
<p>Also, trying to fix or upstream work to monorepo packages is measurably more miserable and painful.</p>
<p>This really comes down to thermodynamics—more entropy, more problems—and it’s true!</p>
<h3 id="finding-the-source-code-to-fix-is-a-lot-harder" tabindex="-1">Finding the Source Code to Fix Is a Lot Harder</h3>
<p>Noodling around on your machine-generated direct runtime dependencies in <code>node_modules</code> may still be possible, and you may even identify a quick fix you want to upstream, but now you’re tasked with actually finding the source code.
This leads to many additional challenges in monorepos!</p>
<h3 id="package-metadata-is-often-stripped-from-package.json" tabindex="-1">Package Metadata Is Often Stripped from <code>package.json</code></h3>
<p>Because you can’t simply publish packages from a monorepo without a mountain of scripts and tooling, monorepo-sourced packages often rewrite <code>package.json</code> (we have to differentiate which <code>package.json</code> we’re talking about in a monorepo!) in a way that accidentally (or intentionally, for devs who prefer to move a bespoke minification step into the <code>npm publish</code> lifecycle for no stated reason) <a href="https://docs.npmjs.com/cli/v6/configuring-npm/package-json#repository">strips useful and important metadata</a>.</p>
<h3 id="package-metadata-is-often-wrong-or-incomplete" tabindex="-1">Package Metadata Is Often Wrong or Incomplete</h3>
<p>Okay, so we’re lucky—this monorepo-published package has some metadata about the repo that created it.
But it only takes us to the repo homepage.
Now we have to find out if the package name matches the directory name used in the monorepo or how this thing is put together at all to hunt down the source code of the package.</p>
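One mitigation worth knowing about: npm’s documented <code>repository.directory</code> field lets monorepo publishers point at the package’s subdirectory instead of just the repo homepage. A sketch with hypothetical names:

```json
{
  "name": "@example/some-package",
  "repository": {
    "type": "git",
    "url": "https://github.com/example/monorepo.git",
    "directory": "packages/some-package"
  }
}
```

When publishers keep this field intact, tooling (and humans) can jump straight to the package’s source instead of hunting through the monorepo.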
<h3 id="they-probably-don%E2%80%99t-have-a-readme.md" tabindex="-1">They probably don’t have a README.md</h3>
<p>The package probably has a super sub-par README.md, or something that points to some random permutation of a (probably incomplete) docs website (that will go offline when the maintainer gets busy and forgets to renew the domain). If you are lucky, you might get a generated typedoc website (good), but with zero JSDoc description annotations, the part that describes things for humans (bad).</p>
<p>Obviously, nothing in monorepos requires this to be the case, but the tools seem to facilitate this outcome.</p>
<h3 id="each-monorepo-is-a-unique-permutation-of-opinion-and-entropy" tabindex="-1">Each Monorepo Is a Unique Permutation of Opinion and Entropy</h3>
<p>Because there are always weird variations on which tool is used and how, finding the package entry point becomes a chore.
It’s hard to do on GitHub, and you basically have to clone the package and grep around, comparing the source code to try and find where the contents of the package tarball match up.</p>
<h3 id="which-package-manager-are-they-using-for-the-monorepo%3F" tabindex="-1">Which Package Manager Are They Using for the Monorepo?</h3>
<p>Okay, so the package itself has no hard requirements on which package manager you use, but the monorepo only works with <code>pnpm</code>.
No, wait, it requires <code>yarn</code>.
Oh, wait, not Yarn 1—why is that still the default? It needs Berry.
Why does this repo have more than one lockfile?!?
Oh crap, what is <a href="https://nodejs.org/api/corepack.html"><code>corepack</code></a>, do I need that?</p>
<h3 id="now-you-have-to-install-more-tools" tabindex="-1">Now You Have to Install More Tools</h3>
<p><code>corepack</code>, <code>yarn</code>, <code>berry</code>, <code>pnpm</code>, <code>lerna</code>, <code>volta</code>, <code>turborepo</code>, <code>nx</code>, etc., etc., etc…</p>
<p>By the way, Node.js ships with <code>npm</code>. It literally could be that easy—this is all opt-in hypertooling.</p>
<h3 id="no-one-uses-the-standard-node.js-tooling-(npm-workspaces)" tabindex="-1">No One Uses the Standard Node.js Tooling (<code>npm</code> Workspaces)</h3>
<p><code>npm</code> ships workspaces.
Nothing uses them.
To their credit, <code>npm</code> workspaces leave a lot to be desired.
Tools that support <code>workspaces</code> will often only work with <code>lerna</code> 1 workspaces or something like that, not <code>npm</code> workspaces for some reason.
Sad situation.</p>
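For reference, the standard <code>npm</code> workspaces setup is just a field in the root <code>package.json</code>; the names here are hypothetical:

```json
{
  "name": "example-monorepo-root",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```

Running <code>npm install</code> at the root links each <code>packages/*</code> directory into <code>node_modules</code> and produces a single lockfile, with no extra tooling required.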
<h3 id="the-monorepo-install-step-will-probably-fail" tabindex="-1">The Monorepo Install Step Will Probably Fail</h3>
<p>Because you have to install dependencies for N packages instead of 1 in a monorepo, and the chances of a monorepo dev running Gentoo or nix or something elite and weird are much higher than normal, don’t expect this to work on your machine.
The external native dependencies are probably not documented anywhere, or are buried in a README in one of the packages!</p>
<p>Remember, at scale, rare events become common! More dependencies, more places to break.</p>
<h3 id="the-tests-aren%E2%80%99t-passing-locally" tabindex="-1">The Tests Aren’t Passing Locally</h3>
<p>We got through the install step.
We had to switch Node versions or install <code>pkgconfig</code> or something.
We go to land our patch, but before we do, we run <code>yarn test</code>.
The tests fail!
Not on the package we want to work on, but somewhere else.</p>
<p>Now we get to look into how to narrow the test harness and see if the relevant suite works or not, or just submit the patch and hope it works in CI.</p>
<h3 id="the-tests-aren%E2%80%99t-passing-in-ci" tabindex="-1">The Tests Aren’t Passing in CI</h3>
<p>We submit the PR upstream, and CI fails—again, for the same unrelated package that we saw locally.
I’m not here for that, and fixing it looks hairy.
The maintainer merges your changes anyway.
Oof. Let’s hope they have it under control despite appearances.</p>
<p>Isolated changes aren’t actually isolated at all in monorepos.
They end up requiring the whole suite to pass.
Depending on the nature of your contribution, it may come down to you to look into why something stopped working, unrelated to the task at hand.</p>
<h3 id="all-of-the-devtools-fall-over" tabindex="-1">All of the Devtools Fall Over</h3>
<p>The scale of monorepos will often far exceed the performance envelope that the devtools were targeting (small to medium-sized repos).
JS and TS require dev tooling.
JS requires a parsing linter to catch well-known but easy-to-miss language hazards.
TS requires tools to type-check and build.</p>
<p>These tools operate fine at a specific range of scale and get extremely slow and crappy beyond that scale.
Monorepos are an excellent pattern to follow if you want to exceed that scale quickly.</p>
<h3 id="needs-everything-rewritten-in-rust" tabindex="-1">Needs Everything Rewritten in Rust</h3>
<p>A big part of the effort to rewrite everything in Rust exists because the JS-based tooling isn’t fast enough for the size of the monorepos people throw it at.
But also, people are just sick of the mess they’ve been a part of and want to hop ecosystems.
Many monorepos are quick to adopt Rust-based tools, along with all of their fresh bugs and defects.</p>
<p>It’s a good time to remind people that tools like the Sass compiler used to be written in C++ to be fast.
We’ve been here before!</p>
<h3 id="they-need-vscode-plugins" tabindex="-1">They Need VSCode Plugins</h3>
<p>Many monorepos assume and encourage people to not only install devDependencies but also VSCode plugins to work effectively in them.
No, it’s not available in Vim or Sublime or any other editors.
What, you don’t use VSCode?!</p>
<h3 id="monorepo-tooling-falls-out-of-date-quickly" tabindex="-1">Monorepo Tooling Falls Out of Date Quickly</h3>
<p>Maybe this is getting better these days, but why do I keep running into Yarn 1 and Lerna everywhere still?</p>
<p>Because any singular tooling change in a monorepo has to cover the workflow for N packages, it forces you to address any changes in ALL packages when making tooling updates.
This often leads to it never happening.</p>
<p>Remember the argument that monorepos promised O(1) tooling changes?
Well, that one change can’t go in until all N packages are modified to work with it, and that cost repeats every time you make a tooling update.
This distinction is always overlooked.
Centralizing tooling means every change requires mass coordination.
If each package were in its own repo, you could selectively apply the tooling changes to the 2-3 you are actively working on and get around to the rest when it matters.</p>
<h3 id="they-ship-hoisting-bugs" tabindex="-1">They Ship Hoisting Bugs</h3>
<p>Hoisting bugs are more common in monorepos, and they get quietly baked into lockfiles, where dependents can’t reproduce them.
Why install any dependencies when you can assume your peers have them?!?</p>
<p><code>pnpm</code> forces you to fix these—great—but <code>pnpm</code> doesn’t ship with Node, so only a fraction of monorepos address this.</p>
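<p>A quick illustration of that forcing function (the module name <code>left-util</code> here is hypothetical): with pnpm’s strict <code>node_modules</code> layout, a package that was silently borrowing a hoisted dependency stops resolving it until you declare it:</p>
<pre><code class="hljs language-console"># Under pnpm, an undeclared ("phantom") dependency no longer resolves
node index.js
# Error: Cannot find module 'left-util'

# The fix is to declare it explicitly
pnpm add left-util
</code></pre>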
<h3 id="versioning-and-publishing-hazards" tabindex="-1">Versioning and Publishing Hazards</h3>
<p>Monorepos inadvertently create several versioning challenges that single-package repos typically don’t encounter:</p>
<ul>
<li>
<p><strong>Overactive Versioning</strong>: Tooling automatically bumps multiple packages simultaneously, leading to unnecessary version noise for downstream dependents.</p>
</li>
<li>
<p><strong>Hazardous Permutations</strong>: When packages are published in groups but updated selectively by dependents, untested version permutations emerge—especially problematic with peer dependencies.</p>
</li>
<li>
<p><strong>Partial Publishes</strong>: Complex automations sometimes only publish a portion of interdependent changes, creating temporarily broken package states.</p>
</li>
<li>
<p><strong>Cross-Module Side Effects</strong>: Changes in unrelated modules in monorepos can introduce defects in the modules you depend on, something far less likely with separate repositories.</p>
</li>
</ul>
<h3 id="overmodularized-internals" tabindex="-1">Overmodularized internals</h3>
<p>Overmodularizing (adding versioned module boundaries between code where just a separate file or export in the same module would do) is a hazard in general, but it seems to often be worse in monorepos.
This tends to be a mistake you see less experienced developers make, but monorepos deserve unique recognition here: by lowering the spin-up cost of modules, monorepos make this mistake easier and more common.</p>
<h3 id="probably-overusing-peerdependencies" tabindex="-1">Probably overusing peerDependencies</h3>
<p>I actually don’t understand this one, but modules published out of monorepos tend to heavily utilize peer dependencies where regular dependencies would actually be preferable.
I suspect it’s some frontend bundler need that has somehow leaked into the Node.js module graph, but I haven’t ever gotten an answer that makes sense on this one.</p>
<h3 id="monorepos-break-github-and-tooling" tabindex="-1">Monorepos Break GitHub and Tooling</h3>
<p>Because N projects run out of one repo, the entire GitHub resource model (One project = one repo) is made largely useless.
Issues, CI, and permissions now have to scale down to the folder level instead of hanging off the repo resource boundary.
This has incredible implementation costs for the entire tooling ecosystem as they attempt to accommodate large monorepos.</p>
<p>Most tools simply fail to work by default in the monorepo arrangement because your monorepo is unique and bespoke compared to all the others. Because of its sheer size, tools have to implement complex scaling solutions just to listen to webhooks off monorepos. It really sucks.</p>
<h3 id="%E2%80%9Cbut-google-does-it!%E2%80%9D" tabindex="-1">“But Google Does It!”</h3>
<p>This comparison fundamentally misunderstands Google’s approach.
Google doesn’t use JS-based workspaces or monorepos in their organization (at least in the example everyone’s reaching for)—they’ve built custom tooling with dedicated engineering teams specifically to make their monorepo approach viable at their scale.</p>
<p>Google has invested millions in proprietary build systems and infrastructure that most teams simply don’t have access to.
The contexts are so different that the comparison provides little practical value for most JavaScript projects.</p>
<p>Your startup or open-source project operates under completely different constraints and with different goals than Google’s engineering organization, on top of the fact Google isn’t really a great company to emulate these days.</p>
<h3 id="%E2%80%9Cbut-npm-link-sucks%E2%80%9D" tabindex="-1">“But <code>npm link</code> Sucks”</h3>
<p>“I don’t want to link 2+ repos together locally.”</p>
<p>This is a legitimate pain point—<code>npm link</code> becomes tedious and fragile beyond a single layer of linking. However, this limitation actually encourages better architectural decisions about module boundaries and dependencies.</p>
<p>The core issue isn’t that <code>npm link</code> is flawed; it’s that we’re often creating unnecessary dependencies between packages that could be designed with cleaner separation. When packages are truly independent enough to warrant separate publishing, they should rarely need simultaneous development.</p>
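<p>For reference, the two-step dance under discussion (the package name <code>my-lib</code> and the paths are hypothetical):</p>
<pre><code class="hljs language-console"># In the library you are hacking on, register a global symlink
cd ~/code/my-lib
npm link

# In the consuming project, point at that symlink
cd ~/code/my-app
npm link my-lib
</code></pre>
<p>One layer of this is tolerable; links nested beneath links is where it gets fragile.</p>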
<p>For general-purpose libraries especially, isolating code into separate repositories with well-defined boundaries often leads to better design decisions and more maintainable code over time. Reaching for monorepos to avoid these challenges can sometimes mask architectural problems rather than solve them.</p>
<p>That said, every project has unique requirements—if yours genuinely benefits from the tight coupling a monorepo enables, that’s a valid choice. The key is making that decision deliberately rather than defaulting to it out of convenience.</p>
<h3 id="%E2%80%9Cbut-small-modules-are-annoying%E2%80%9D" tabindex="-1">“But Small Modules Are Annoying”</h3>
<p>Monorepos and many/deep module graphs are pretty orthogonal, but I have heard this argument a few times.
The idea is that it’s okay to have many dependencies sourced from the same repo—this is better than having them sourced from many repos.
Okay, sure, as long as you can live with the above issues!</p>
<p>If all those repos are owned by the same person, I don’t really see the issue.</p>
<p>Generally though, small modules aren’t annoying because they live in a singularly scoped repo (they are annoying because they lack <a href="https://web.stanford.edu/~ouster/cgi-bin/aposd.php">API depth</a>).
Annoying modules are annoying.
Get rid of your annoying dependencies, and cross your fingers the replacement is less annoying.</p>
<h3 id="%E2%80%9Call-of-these-problems-apply-to-single-package-repos-too!%E2%80%9D" tabindex="-1">“All of These Problems Apply to Single-Package Repos Too!”</h3>
<p>A lot of the above problems are just hazards with the Node module system.
Yes, you can run into a lot of the same issues with single-package repos.
But in practice, you don’t.
Monorepos act as a multiplier on these hazards on top of their own set of issues.</p>
<h3 id="%E2%80%9C%5Binsert-new-runtime%5D-fixes-this!%E2%80%9D" tabindex="-1">“[Insert New Runtime] Fixes This!”</h3>
<p>Give any JS ecosystem incumbent some time in the spotlight, and you will be surprised at the “wild” ideas people will come up with to make people’s lives more complicated!</p>
<h3 id="%E2%80%9Ci%E2%80%99m-an-overworked%2C-underpaid-maintainer%2C-i-need-this%E2%80%9D" tabindex="-1">“I’m an overworked, underpaid maintainer, I need this”</h3>
<p>This is probably true. Do whatever you need. Just enumerating a few common hazards to avoid.</p>
<h3 id="%E2%80%9Cyou-or-someone-should-write-up-single-package-repo-hazards%E2%80%9D" tabindex="-1">“You or someone should write up single package repo hazards”</h3>
<p>Yeah Agreed.
Single module repo strategies are sadly very underdeveloped and misunderstood.</p>
<h2 id="conclusion" tabindex="-1">Conclusion</h2>
<p>Monorepos have legitimate uses in specific contexts—particularly when sharing code between multiple processes in a single project or coordinating work across closely related sub-projects and teams. In these situations, they can remove barriers to a developer workflow that would otherwise be necessary in open source.</p>
<p>But for open-source modules, the costs often outweigh the benefits and are actually creating a reputational hazard for an otherwise completely functional and scalable module system. Instead of defaulting to monorepos, consider these alternatives:</p>
<ul>
<li>
<p><strong>Focused Single-Package Repos</strong>: For libraries with a clear, cohesive purpose, maintaining separate repositories provides cleaner boundaries and more reliable publishing workflows.</p>
</li>
<li>
<p><strong>Minimal Dependencies</strong>: Rather than splitting functionality across numerous tiny packages that require a monorepo to manage, consider whether your design truly benefits from such granular separation.</p>
</li>
<li>
<p><strong>Strategic Module Boundaries</strong>: Create module boundaries only where they provide genuine benefits—at natural seams in your architecture rather than arbitrary divisions.
Frequent cross-boundary linking indicates unnecessary boundaries.</p>
</li>
</ul>
<p>The JavaScript ecosystem moves quickly, but we should be careful not to adopt complex solutions for problems that could be solved more elegantly with simpler approaches. Sometimes the answer isn’t more tooling or more packages—it’s thoughtful design and careful consideration of the downstream experience.</p>
<p>Monorepos aren’t inherently bad, but they’re also not a silver bullet. Understanding when they help and when they hinder is key to using them effectively.</p>
<h3 id="syndications" tabindex="-1">Syndications</h3>
<ul>
<li><a href="https://news.ycombinator.com/item?id=43314580" rel="syndication" class="u-syndication">Hacker News</a></li>
<li><a href="https://bsky.app/profile/bret.io/post/3ljy3huggns22" rel="syndication" class="
u-syndication">Bsky:bret.io</a></li>
<li><a href="https://bsky.app/profile/ecmascript.news/post/3ljzzbe3rks2w" rel="syndication" class="u-syndication">Bsky:ECMASCript.news</a></li>
<li><a href="https://x.com/bcomnes/status/1898863058015158590" rel="syndication" class="u-syndication">X:@bcomnes</a></li>
<li><a href="https://fosstodon.org/@bcomnes/114134823764166508" rel="syndication" class="u-syndication">Mastodon:@bcomnes</a></li>
<li><a href="https://fosstodon.org/@ecmascript_news@mastodon.online/114139170554774533" rel="syndication" class="u-syndication">Mastodon:@ecmascript_news</a></li>
<li><a href="https://www.reddit.com/r/node/comments/1j7jk05/i_love_monorepos_except_when_they_are_annoying/">reddit.com/r/node</a></li>
<li><a href="https://ecmascript.news/archive/es-next-news-2025-03-12.html">ecmascript.news 2025-03-12</a></li>
</ul>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2025/i-love-monorepos/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2024/dont-be-evil-is-an-excuse-for-evil/#2024-10-03T18:25:30.405Z</id>
    <title>"Don't Be Evil" Is An Excuse For Evil</title>
    <updated>2024-10-03T18:25:30.405Z</updated>
    <published>2024-10-03T18:25:30.405Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p>Google’s former and founding motto <a href="https://en.wikipedia.org/wiki/Don%27t_be_evil">“Don’t be evil”</a> sounds benevolent, but is it?</p>
<figure>
  <a href="https://web.archive.org/web/20180421105327/https://abc.xyz/investor/other/google-code-of-conduct.html">
    <img loading="auto" src="./img/google-dont-be-evil.jpg" alt="Book Cover">
  </a>
  <figcaption>The original Google motto.</figcaption>
</figure>
<p><a href="https://web.archive.org/web/20180421105327/https://abc.xyz/investor/other/google-code-of-conduct.html">“Don’t be evil”</a> paired with the companies premise: lets build products in a way that allows for immeasurable corporate access into people’s private lives for fun and profit, unlocking all the positive potential this data can provide.
We’ll only use it for good, the Evil™️ things will just be off limits.</p>
<p>Of course, the motto changed to “Do the right thing” in 2015, when it became very obvious it was impossible for Google not to indulge in the Evil with the potential for it sitting right there.
By loosening up the motto to allow for at least some Evil, so long as it’s the “right thing”, Google can at least be honest with the public that “Evil” is definitely on the table.</p>
<p>Maybe it’s time to demand “Can’t be evil”? Build in a way where the evil just isn’t possible.
Reject the possibility of evil in what software you choose to use.</p>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2024/dont-be-evil-is-an-excuse-for-evil/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2024/the-art-of-doing-science-and-engineering/#2024-02-13T18:24:49.707Z</id>
    <title>The Art of Doing Science and Engineering</title>
    <updated>2024-02-13T18:24:49.707Z</updated>
    <published>2024-02-13T18:24:49.707Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<figure>
  <a href="./img/cover.jpeg">
    <img loading="auto" src="./img/cover.jpeg" alt="Book Cover">
  </a>
  <figcaption>The art of doing science and engineering : Learning to Learn by Richard W. Hamming</figcaption>
</figure>
<p>Richard Hamming’s “The Art of Doing Science and Engineering” is a book capturing the lessons he taught in a course he gave at the U.S. Navy Postgraduate School in Monterey, CA.
He characterizes what he was trying to teach as a “style” of thinking in science and engineering.</p>
<p>Having a physics degree myself, and also finding myself periodically ruminating on the agony of professional software development and hoping to find some overlap between my professional field and Hamming’s life experience, I gave it a read.</p>
<p>The book is filled with nuggets of wisdom and illustrates a career of move-the-needle science and engineering at Bell Labs.
I didn’t personally find much value in many of the algebraic walk-through’s of various topics like information theory, but learning about how Hamming discovered error correcting codes definitely was interesting and worth a read.</p>
<p>The highlight of the book comes in the second half, where he includes interesting stories, analogies, and observations on nearly every page. Below are the highlights I pulled while reading.</p>
<h2 id="on-what-makes-good-design" tabindex="-1">On What Makes Good Design</h2>
<blockquote>
<p>That brings up another point, which is now well recognized in software for computers but which applies to hardware too. Things change so fast that part of the system design problem is that the system will be constantly upgraded in ways you do not now know in any detail! Flexibility must be part of the modern design of things and processes. Flexibility built into the design means not only will you be better able to handle the changes which will come after installation, but it also contributes to your own work as the small changes which inevitably arise both in the later stages of design and in the field installation of the system…</p>
<p>Thus rule two:</p>
<p><strong>Part of systems engineering design is to prepare for changes so they can be gracefully made and still not degrade the other parts.</strong></p>
<p>– p.367</p>
</blockquote>
<p>This quote is my favorite out of the entire book.
It feels like a constant fight in software engineering between the impulse to lock down runtime versions, specific dependency versions, and other environmental factors versus developing software in such a way that accommodates wide variance in all of these different component factors.
Both approaches claim reliability and flexibility; however, which approach actually tests for it?</p>
<p>In my experience, the tighter the runtime dependency specifications, the faster fragility spreads, and it’s satisfying to hear Hamming’s experience echo this observation. Sadly though, his observation that those writing software will universally understand this simply hasn’t held up.</p>
<blockquote>
<p>Good design protects you from the need for too many highly accurate components in the system. But such design principles are still, to this date, ill understood and need to be researched extensively. Not that good designers do not understand this intuitively, merely it is not easily incorporated into the design methods you were taught in school.</p>
<p>Good minds are still needed in spite of all the computing tools we have developed. The best mind will be the one who gets the principle into the design methods taught so it will be automatically available for lesser minds!</p>
<p>– p.268</p>
</blockquote>
<p>Here Hamming is describing H.S. Black’s feedback circuit’s tolerance for low accuracy components as what constitutes good design. I agree! Technology that works at any scale, made out of commodity parts with minimal runtime requirements tends to be what is most useful across the longest amount of time.</p>
<h2 id="on-committees" tabindex="-1">On Committees</h2>
<blockquote>
<p>Committee decisions, which tend to diffuse responsibility, are seldom the best in practice—most of the time they represent a compromise which has none of the virtues of any path and tends to end in mediocrity.</p>
<p>– p.274</p>
</blockquote>
<p>I appreciated his observations on committees, and their tendency to launder responsibility.
They serve a purpose, but it’s important to understand their nature.</p>
<h2 id="on-data-and-observation" tabindex="-1">On Data and Observation</h2>
<blockquote>
<p>The Hawthorne effect strongly suggests the proper teaching method will always be in a state of experimental change, and it hardly matters just what is done; all that matters is both the professor and the students believe in the change.</p>
<p>– p.288</p>
</blockquote>
<blockquote>
<p>It has been my experience, as well as the experience of many others who have looked, that data is generally much less accurate than it is advertised to be. This is not a trivial point—we depend on initial data for many decisions, as well as for the input data for simulations which result in decisions.</p>
<p>– p.345</p>
</blockquote>
<blockquote>
<p>Averages are meaningful for homogeneous groups (homogeneous with respect to the actions that may later be taken), but for diverse groups averages are often meaningless. As earlier remarked, the average adult has one breast and one testicle, but that does not represent the average person in our society.</p>
<p>– p.356</p>
</blockquote>
<blockquote>
<p>You may think the title means that if you measure accurately you will get an accurate measurement, and if not then not, but it refers to a much more subtle thing—the way you choose to measure things controls to a large extent what happens. I repeat the story Eddington told about the fishermen who went fishing with a net. They examined the size of the fish they caught and concluded there was a minimum size to the fish in the sea. The instrument you use clearly affects what you see.</p>
<p>– p.373</p>
</blockquote>
<p>Intuitively I think many people who attempt to measure anything understand that their approach reflects in the results to some degree.
I hadn’t heard of the <a href="https://en.wikipedia.org/wiki/Hawthorne_effect">Hawthorne effect</a> before, but intuitively it makes sense.</p>
<p>People with an idea on how to improve something implement their idea and it works, because they want it to work and allow the effects to be fully effective.
Someone else is prescribed this idea or brought into the fold where the idea is implemented and the benefits of the idea evaporate.</p>
<p>I’ve long suspected that in the context of professional software development, where highly unscrutinized benchmarks and soft data are the norm, people start with an opinion or theory and work back to data that supports it.
Could it just be that people need to believe that working in a certain way is necessary for them to work optimally? Could it be “data” is often just a work function used to outmaneuver competing ideas?</p>
<p>Anyway, just another thing to factor for when data is plopped in your lap.</p>
<h2 id="on-theory" tabindex="-1">On Theory</h2>
<blockquote>
<p>Moral: there need not be a unique form of a theory to account for a body of observations; instead, two rather different-looking theories can agree on all the predicted details. You cannot go from a body of data to a unique theory! I noted this in the last chapter.</p>
<p>–p.314</p>
</blockquote>
<blockquote>
<p>Heisenberg derived the uncertainty principle that conjugate variables, meaning Fourier transforms, obeyed a condition in which the product of the uncertainties of the two had to exceed a fixed number, involving Planck’s constant. I earlier commented, Chapter 17, this is a theorem in Fourier transforms-any linear theory must have a corresponding uncertainty principle, but among physicists it is still widely regarded as a physical effect from nature rather than a mathematical effect of the model.</p>
<p>–p.316</p>
</blockquote>
<p>I appreciate Hamming suggesting that some of our understanding of physical reality could be a byproduct of the model being used to describe it.
It’s not exactly examined closely in undergraduate or graduate quantum mechanics, and I find it interesting Hamming, who’s clearly highly intuitive with modeling, also raises this question.</p>
<h2 id="predictions" tabindex="-1">Predictions</h2>
<blockquote>
<p>Let me now turn to predictions of the immediate future. It is fairly clear that in time “drop lines” from the street to the house (they may actually be buried, but will probably still be called “drop lines”) will be fiber optics. Once a fiber-optic wire is installed, then potentially you have available almost all the information you could possibly want, including TV and radio, and possibly newspaper articles selected according to your interest profile (you pay the printing bill which occurs in your own house). There would be no need for separate information channels most of the time. At your end of the fiber there are one or more digital filters. Which channel you want, the phone, radio, or TV, can be selected by you much as you do now, and the channel is determined by the numbers put into the digital filter-thus the same filter can be multipurpose, if you wish. You will need one filter for each channel you wish to use at the same time (though it is possible a single time-sharing filter would be available) and each filter would be of the same standard design. Alternately, the filters may come with the particular equipment you buy.</p>
<p>– p.284-285</p>
</blockquote>
<p>Here Hamming is predicting the internet. He got very close, and it’s interesting to think that these signals would all just be piped to your house in a bundle, and you’d pay for a filter to unlock access to the ones you want. Hey, cable TV worked that way for a long time!</p>
<h2 id="on-leadership" tabindex="-1">On Leadership</h2>
<blockquote>
<p>But a lot of evidence on what enabled people to make big contributions points to the conclusion that a famous prof was a terrible lecturer and the students had to work hard to learn it for themselves! I again suggest a rule:</p>
<p><strong>What you learn from others you can use to follow;</strong></p>
<p><strong>What you learn for yourself you can use to lead.</strong></p>
<p>– p.292</p>
</blockquote>
<p>Learn by doing, not by following.</p>
<blockquote>
<p><strong>What you did to become successful is likely to be counterproductive when applied at a later date.</strong></p>
<p>– p.342</p>
</blockquote>
<p>It’s easy to blame changing trends in software development for the disgustingly short half-life of knowledge regarding development patterns and tools, but I think it’s probably just the nature of knowledge based work.
Operating by yourself may be effective and work well, but it’s not a recipe for success at any given moment in time.</p>
<blockquote>
<p>A man was examining the construction of a cathedral. He asked a stonemason what he was doing chipping the stones, and the mason replied, “I am making stones.” He asked a stone carver what he was doing; “I am carving a gargoyle.” And so it went; each person said in detail what they were doing. Finally he came to an old woman who was sweeping the ground. She said, “I am helping build a cathedral.”
If, on the average campus, you asked a sample of professors what they were going to do in the next class hour, you would hear they were going to “teach partial fractions,” “show how to find the moments of a normal distribution,” “explain Young’s modulus and how to measure it,” etc. I doubt you would often hear a professor say, “I am going to educate the students and prepare them for their future careers.”
This myopic view is the chief characteristic of a bureaucrat. To rise to the top you should have the larger view—at least when you get there.</p>
<p>– p.360</p>
</blockquote>
<p>Software bureaucrats aplenty. Really easy to fall into this role.</p>
<blockquote>
<p>I must come to the topic of “selling” new ideas. You must master three things to do this (Chapter 5):</p>
<ol>
<li>Giving formal presentations,</li>
<li>Producing written reports, and</li>
<li>Mastering the art of informal presentations as they happen to occur.</li>
</ol>
<p>All three are essential—you must learn to sell your ideas, not by propaganda, but by force of clear presentation. I am sorry to have to point this out; many scientists and others think good ideas will win out automatically and need not be carefully presented. They are wrong;</p>
<p>– p.396</p>
</blockquote>
<p>One thing I regret over the last 10 years of my career is not writing down more insights I have learned through experience.
Ideas simply don’t transmit if they aren’t written down or put into some consumable format like video or audio.
Nearly every annoying tool or developer trend you are forced to use is in play because it communicated the idea through blogs, videos and conference talks.
And those who watched echoed these messages.</p>
<h2 id="on-experts" tabindex="-1">On Experts</h2>
<blockquote>
<p><strong>An expert is one who knows everything about nothing; a generalist knows nothing about everything.</strong></p>
<p>In an argument between a specialist and a generalist, the expert usually wins by simply (1) using unintelligible jargon, and (2) citing their specialist results, which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts both are necessary and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry though their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.</p>
<p>– p.333</p>
</blockquote>
<p>Understand when you are a generalist and when you are a specialist.</p>
<blockquote>
<p>Experts, in looking at something new, always bring their expertise with them, as well as their particular way of looking at things. Whatever does not fit into their frame of reference is dismissed, not seen, or forced to fit into their beliefs. Thus really new ideas seldom arise from the experts in the field. You cannot blame them too much, since it is more economical to try the old, successful ways before trying to find new ways of looking and thinking.</p>
<p><strong>If an expert says something can be done he is probably correct, but if he says it is impossible then consider getting another opinion.</strong></p>
<p>– p.336</p>
</blockquote>
<p>Anyone wading into a technical field will encounter experts at every turn.
They have valuable information, but they are also going to give you dated, myopic advice (gatekeeping?).
I like Hamming’s framing here and it reflects my experience when weighing expert opinion.</p>
<blockquote>
<p>In some respects the expert is the curse of our society, with their assurance they know everything, and without the decent humility to consider they might be wrong. Where the question looms so important, I suggested to you long ago to use in an argument, “What would you accept as evidence you are wrong?” Ask yourself regularly, “Why do I believe whatever I do?” Especially in the areas where you are so sure you know, the area of the paradigms of your field.</p>
<p>– p.340</p>
</blockquote>
<p>I love this exercise. It will also drive you crazy. Tread carefully.</p>
<blockquote>
<p>Systems engineering is indeed a fascinating profession, but one which is hard to practice. There is a great need for real systems engineers, as well as perhaps a greater need to get rid of those who merely talk a good story but cannot play the game effectively.</p>
<p>– p.372</p>
</blockquote>
<p>Controversial, harsh, but true.</p>
<h2 id="the-binding" tabindex="-1">The Binding</h2>
<p>The last thing I want to recognize is the beautiful cloth resin binding and quality printing of the book. Bravo Stripe Press for still producing beautiful artifacts at affordable pricing in the age of print on demand.</p>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2024/the-art-of-doing-science-and-engineering/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2024/async-neocities-bin/#2024-01-15T23:55:33.582Z</id>
    <title>async-neocities has a bin</title>
    <updated>2024-01-15T23:55:33.582Z</updated>
    <published>2024-01-15T23:55:33.582Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p><a href="https://github.com/bcomnes/async-neocities"><code>async-neocities</code></a> <a href="https://github.com/bcomnes/async-neocities/releases/tag/v3.0.0">v3.0.0</a> is now available and introduces a CLI.</p>
<pre><code class="hljs language-console">Usage: async-neocities [options]

    Example: async-neocities --src public

    --help, -h            print help text
    --src, -s             The directory to deploy to neocities (default: &quot;public&quot;)
    --cleanup, -c         Destructively clean up orphaned files on neocities
    --protect, -p         String to minimatch files which will never be cleaned up
    --status              Print auth status of current working directory
    --print-key           Print api-key status of current working directory
    --clear-key           Remove the currently associated API key
    --force-auth          Force re-authorization of current working directory

async-neocities (v3.0.0)
</code></pre>
<p>When you run it, you will see something similar to this:</p>
<pre><code class="hljs language-console"><span class="hljs-meta prompt_">&gt; </span><span class="language-bash">async-neocities --src public</span>

Found siteName in config: bret
API Key found for bret
Starting inspecting stage...
Finished inspecting stage.
Starting diffing stage...
Finished diffing stage.
Skipping applying stage.
Deployed to Neocities in 743ms:
    Uploaded 0 files
    Orphaned 0 files
    Skipped 244 files
    0 protected files
</code></pre>
<p><a href="https://github.com/bcomnes/async-neocities"><code>async-neocities</code></a> was previously available as a GitHub Action called <a href="https://github.com/marketplace/actions/deploy-to-neocities">deploy-to-neocities</a>. That Action API remains available; however, the CLI enables a local-first workflow that the Action could not offer.</p>
<h2 id="local-first-deploys" tabindex="-1">Local First Deploys</h2>
<p>Now that <code>async-neocities</code> is available as a CLI, you can easily configure it as an <code>npm</code> script and run it locally when you want to push changes to <a href="https://neocities.org">neocities</a> without relying on GitHub Actions.
It also works great in Actions, with the added benefit that deploys work exactly the same way in both local and remote environments.</p>
<p>Here is a quick example of that:</p>
<ul>
<li>Install <code>async-neocities@^3.0.0</code> to your project’s <code>package.json</code>.</li>
<li>Set up a <code>package.json</code> deploy script:<pre><code class="hljs language-json"> <span class="hljs-attr">&quot;scripts&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-punctuation">{</span>
    <span class="hljs-attr">&quot;build&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-string">&quot;npm run clean &amp;&amp; run-p build:*&quot;</span><span class="hljs-punctuation">,</span>
    <span class="hljs-attr">&quot;build:tb&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-string">&quot;top-bun&quot;</span><span class="hljs-punctuation">,</span>
    <span class="hljs-attr">&quot;clean&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-string">&quot;rm -rf public &amp;&amp; mkdir -p public&quot;</span><span class="hljs-punctuation">,</span>
    <span class="hljs-attr">&quot;deploy&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-string">&quot;run-s build deploy:*&quot;</span><span class="hljs-punctuation">,</span>
    <span class="hljs-attr">&quot;deploy:async-neocities&quot;</span><span class="hljs-punctuation">:</span> <span class="hljs-string">&quot;async-neocities --src public --cleanup&quot;</span>
  <span class="hljs-punctuation">}</span><span class="hljs-punctuation">,</span>
</code></pre>
</li>
<li>Run a deploy once locally to set up the <code>deploy-to-neocities.json</code> config file. Example config contents:<pre><code class="hljs language-json"><span class="hljs-punctuation">{</span><span class="hljs-attr">&quot;siteName&quot;</span><span class="hljs-punctuation">:</span><span class="hljs-string">&quot;bret&quot;</span><span class="hljs-punctuation">}</span>
</code></pre>
</li>
<li>Run deploys locally with <code>npm run deploy</code>.</li>
<li>Configure your CI to run <code>npm run deploy</code> and set the token secret.<pre><code class="hljs language-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">neocities</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">master</span>

<span class="hljs-attr">env:</span>
  <span class="hljs-attr">node-version:</span> <span class="hljs-number">21</span>
  <span class="hljs-attr">FORCE_COLOR:</span> <span class="hljs-number">2</span>

<span class="hljs-attr">concurrency:</span> <span class="hljs-comment"># prevent concurrent deploys doing strange things</span>
  <span class="hljs-attr">group:</span> <span class="hljs-string">deploy-to-neocities</span>
  <span class="hljs-attr">cancel-in-progress:</span> <span class="hljs-literal">true</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">deploy:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Create</span> <span class="hljs-string">LFS</span> <span class="hljs-string">file</span> <span class="hljs-string">list</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">git</span> <span class="hljs-string">lfs</span> <span class="hljs-string">ls-files</span> <span class="hljs-string">-l</span> <span class="hljs-string">|</span> <span class="hljs-string">cut</span> <span class="hljs-string">-d&#x27;</span> <span class="hljs-string">&#x27; -f1 | sort &gt; .lfs-assets-id
    - name: Restore LFS cache
      uses: actions/cache@v3
      id: lfs-cache
      with:
        path: .git/lfs
        key: ${{ runner.os }}-lfs-${{ hashFiles(&#x27;</span><span class="hljs-string">.lfs-assets-id&#x27;)</span> <span class="hljs-string">}}-v1</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Git</span> <span class="hljs-string">LFS</span> <span class="hljs-string">Pull</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">git</span> <span class="hljs-string">lfs</span> <span class="hljs-string">pull</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Use</span> <span class="hljs-string">Node.js</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-node@v4</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">node-version:</span> <span class="hljs-string">${{env.node-version}}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">i</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">deploy</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-attr">NEOCITIES_API_TOKEN:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.NEOCITIES_API_TOKEN</span> <span class="hljs-string">}}</span>
</code></pre>
</li>
</ul>
<p>The <code>async-neocities</code> CLI re-uses the same ENV name as <code>deploy-to-neocities</code> action so migrating to the CLI requires no additional changes to the Actions environment secrets.</p>
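<p>Since the variable name is shared, an existing repository secret carries over to the CLI unchanged. A minimal sketch of a one-off local deploy, using a placeholder token value and the <code>deploy</code> script defined above:</p>
<pre><code class="language-console"># The CLI reads the same environment variable the Action used,
# so no secrets need renaming. Replace the placeholder with a real token.
NEOCITIES_API_TOKEN="&lt;your-neocities-api-token&gt;" npm run deploy
</code></pre>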
<h2 id="clis-vs-actions" tabindex="-1">CLIs vs Actions</h2>
<p>This prompts the question: when is a CLI most appropriate, and when is an Action? Let’s compare the two:</p>
<h3 id="clis" tabindex="-1">CLIs</h3>
<ul>
<li>Pro: Local deploys</li>
<li>Pro: Easily re-usable in CI as well</li>
<li>Con: Requires Node.js, though this is a non-issue for projects already using it</li>
</ul>
<h3 id="actions" tabindex="-1">Actions</h3>
<ul>
<li>Pro: Works great with non-Node ecosystems without requiring a <code>package.json</code> or <code>node_modules</code></li>
<li>Con: Only runs in CI environments</li>
</ul>
<h2 id="conclusion" tabindex="-1">Conclusion</h2>
<p>In addition to the CLI, <code>async-neocities</code> migrates to full Node.js <code>esm</code> and internally enables <code>ts-in-js</code>, though the types were far too dynamic to export full type support in the time I had available.</p>
<p>With respect to an implementation plan going forward for CLIs vs Actions, I’ve summarized my thoughts below:</p>
<ul>
<li>Implement core functionality as a re-usable library.</li>
<li>Exposing a CLI turns that library into an interactive tool that provides a local-first workflow and is equally useful in CI.</li>
<li>Exposing the library in an Action further opens it up to other language ecosystems, which would otherwise ignore it due to the ergonomic overhead of a foreign ecosystem.</li>
<li>The Action is simpler to implement than the CLI, but the CLI offers a superior experience within the library’s own language ecosystem.</li>
</ul>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2024/async-neocities-bin/"/>
  </entry>
  <entry>
    <id>https://bret.io/blog/2023/reorganized/#2023-12-02T20:49:41.713Z</id>
    <title>Reorganized</title>
    <updated>2023-12-02T20:49:41.713Z</updated>
    <published>2023-12-02T20:49:41.713Z</published>
    <author>
      <name>Bret Comnes</name>
      <uri>https://bret.io</uri>
    </author>
    <content type="html"><![CDATA[<p>Behold, a mildly redesigned and reorganized landing page:</p>
<p><img src="./img/screenshot.png" alt="screenshot of the new website"></p>
<p>It’s still not great, but it should make it easier to keep it up to date going forward.</p>
<p>It has 4 sections:</p>
<ul>
<li><a href="/#">Featured Projects</a>: important projects of note.</li>
<li><a href="/#recent-posts">Recent Posts</a>: now that this site has proper blog support, I can highlight recent posts on the landing page.</li>
<li><a href="/#open-source">Open Source</a>: interesting and notable projects that have found some use and that I still maintain. This section now includes a bunch of open source work from the past year that I’ve never had time to write about.</li>
<li><a href="/#past-projects">Past projects</a>: projects that are no longer active, but are still interesting enough to share.</li>
</ul>
<p>I removed a bunch of older inactive projects and links and stashed them on a <a href="/projects/previous-projects/">previous projects page</a>.</p>
<p>Additionally, the edit button in the page footer now takes you to the correct page in GitHub for editing, so if you ever see a typo, feel free to send in a fix!</p>
<p>Finally, the <a href="/about/">about</a> page includes a live dump of the dependencies that were used to build the website.</p>
]]></content>
    <link rel="alternate" href="https://bret.io/blog/2023/reorganized/"/>
  </entry>
</feed>