<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Software As She&apos;s Developed</title>
    <description>Technology, Programming, Startups, Trendspotting by Michael Mahemoff</description>
    <link>http://softwareas.com/</link>
    <atom:link href="http://softwareas.com/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Sat, 21 Dec 2024 09:36:50 +0000</pubDate>
    <lastBuildDate>Sat, 21 Dec 2024 09:36:50 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
      <item>
        <title>2025 Tech Predictions</title>
        <description>&lt;p&gt;In no particular order, here are the top trends and product directions I’ll be watching out for in 2025.&lt;/p&gt;

&lt;h1 id=&quot;web-automation-and-ai-browsers&quot;&gt;Web automation and AI browsers&lt;/h1&gt;

&lt;p&gt;Is this the year AI agents break out? I won’t go that far, but the piping is underway and the web will be a key part of it.&lt;/p&gt;

&lt;p&gt;Using ChatGPT today still feels like talking to someone who wakes up to answer a question, then goes into hibernation until you ask it something else. This stateless flow was a great way to kick off the AI hype: easy to use, scalable to host, and relatively safe insofar as a single text response can only do so much damage. However, it’s left people wanting something far more autonomous, which is why software agents quickly followed ChatGPT, e.g. BabyGPT and AutoGPT. Those were cool in concept, but tools like this need to interact with the internet to do useful things, and that’s where they often came up short due to anti-bot mechanisms. It’s now a couple of years later, and industry leaders are working on ways to grant agents internet access, safely and with consent of both users and service providers.&lt;/p&gt;

&lt;p&gt;Web browsers look like they’ll be a big part of this, since they share common patterns across services and are easier to augment than native apps, as browser extensions have long exemplified. There have already been various efforts to make browsers AI-capable, such as Copilot being bolted onto Bing. The startup behind Arc has gone further, organising tabs and downloads with AI and performing searches by going out to the web and building a summary result.&lt;/p&gt;

&lt;p&gt;OpenAI has already baked search into ChatGPT and is rumoured to be contemplating a browser. It would make sense in this context. While I’m sure they would love to build out an entire hardware and app ecosystem longer term, a browser doesn’t require experimenting with entirely new form factors or convincing developers to port their apps across to a new OS. Thanks to the open nature of the web and the open-source foundation of Chromium, startups such as Brave and Arc &lt;em&gt;have&lt;/em&gt; been able to gain traction in the browser space.&lt;/p&gt;

&lt;p&gt;Google is also pushing the frontier in this space. They recently announced they’re testing a &lt;a href=&quot;https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#project-mariner&quot;&gt;browser extension that supports Gemini-powered automation&lt;/a&gt;. Watch this space … or ask your AI agent to watch it for you.&lt;/p&gt;

&lt;h1 id=&quot;synthetic-podcasts&quot;&gt;Synthetic podcasts&lt;/h1&gt;

&lt;p&gt;Text-to-speech is not yet perfect, but it is getting good enough for long-form consumption. Combined with the capacity of (a) LLMs to generate content and (b) AI image generators to create artwork, it’s becoming a powerful enabler for fully automated podcast production.&lt;/p&gt;

&lt;p&gt;Soon after ChatGPT launched, there were already fully automated podcasts in production, such as &lt;a href=&quot;https://theresanaiforthat.com/ai/hacker-news-recap/&quot;&gt;Hacker News Recap&lt;/a&gt;. I subscribed for a while and found it quite handy for a catchup; my main issue was that the language could get quite repetitive. That’s the “ChatGPT voice”, with its idiosyncratic expressions, that professors could use to detect plagiarism. However, this effect will dissipate as LLMs improve and prompts are better engineered to inject more randomness and distinct personas.&lt;/p&gt;

&lt;p&gt;While HN Recap is a single-voice narration, Google’s NotebookLM generated excitement this year when people discovered its ability to produce podcasts on any topic using two synthetic hosts, covered even by &lt;a href=&quot;https://www.youtube.com/watch?v=jx2imp33glc&quot;&gt;CNBC&lt;/a&gt;. There’s not yet flexibility on the voices, but startups as well as Google are certain to be working on a whole set of options for anyone to spin up a podcast.&lt;/p&gt;

&lt;p&gt;The immediate implication will be a lot of junk filling podcast catalogues, which I fear will lead to a breakdown of the long-lasting open nature of podcasting. Within that bucket of AI content, I believe some shows will turn out to be interesting enough, provided producers inject a little human guidance and editing into the mix. The software can handle much of the grunt work of gathering content, setting up a plan, recording, and editing, while humans touch up each of these. In a few years, the current production model, with humans micromanaging everything, will feel positively quaint. There will certainly be demand for human talent doing “real recording”, but even this audio or video will be editable with the touch of a keyboard or the utterance of a command.&lt;/p&gt;

&lt;h1 id=&quot;video-podcasts&quot;&gt;Video podcasts&lt;/h1&gt;

&lt;p&gt;Video is becoming a key part of the mix for modern podcasts. While video has always been possible in the 20-odd years since podcasts were invented, it was far from practical or desirable for publishers to bother with. Today video is easier to monetize, particularly because of the ubiquity of YouTube on smart TVs and mobile devices, and the bandwidth available for billions of people to access video at low cost. Where hosting costs used to be a major burden for publishers, YouTube makes it free and then adds a revenue share on top. Spotify is also working aggressively to handle video podcasts, and with hosting costs now much cheaper than in the 2000s, community platforms like Substack are able to host video on behalf of premium subscribers.&lt;/p&gt;

&lt;p&gt;While video production has also become easier, it’s still a big investment and not something every creator can do well. Part of the appeal of podcasts is how open they are to anyone who can speak passionately about a topic. However, audiences only have 24 hours a day and their attention is divided between all players. Within any niche, there’s a winner-takes-all dynamic taking place as podcast budgets continue to grow. As technology makes video as readily available as audio, we can expect to see video becoming almost a requirement for aspiring podcasters. When you consider how easy it’s becoming to AI-generate a half-decent audio podcast (see previous section), there’s even more reason to differentiate with a video format.&lt;/p&gt;

&lt;p&gt;Many users will no doubt continue to listen to podcasts rather than actively watch them. However, there’s still going to be increasing pressure to have the video available to those users as part of the complete package.&lt;/p&gt;

&lt;h1 id=&quot;neatfakes-authorised-replica-media&quot;&gt;Neatfakes: Authorised replica media&lt;/h1&gt;

&lt;p&gt;Deepfakes have gotten some bad press because — quite rightly — no-one wants to be impersonated without their consent and there are serious security concerns. However, the underlying technology of digital twin replication has genuine applications when it’s used in an authorised context.&lt;/p&gt;

&lt;p&gt;One use of authorised digital media is to have your “digital twin” act as yourself on phone calls. To be clear, I’m not sold on that idea and not referring to it here. It’s deceptive to use such a voice if the other side is not clear they’re talking to a machine. And if they &lt;em&gt;are&lt;/em&gt; clear they’re talking to a digital version of someone, what are we achieving here exactly? Why couldn’t it be a different voice?&lt;/p&gt;

&lt;p&gt;Where this tech is more practical is in media production. Dubbing movies and TV shows can now be done via AI. AI speech preserves the voice of the original actor, while AI video smooths over their face movements as this speech takes place. This will dramatically lower the cost of translation while improving quality. In the future, we can expect any given production to be available in dozens of languages.&lt;/p&gt;

&lt;p&gt;The production ease and speed mean we can expect to see translations in contexts where we have not seen them before. Lex Fridman’s &lt;a href=&quot;https://lexfridman.com/javier-milei/&quot;&gt;recent interview of President Milei&lt;/a&gt; showcased how translation software can be used for an interview, as he was able to release it in both full-English and full-Spanish, while preserving each speaker’s voice and expressions. Expect to see more use of this technology in podcasts and mainstream news media.&lt;/p&gt;

&lt;p&gt;Another more controversial use is to “revive” the deceased in video form. People can’t give consent after they die (although their digital twin can). They can, however, establish rights agreements and wills with that technology in mind, and I would imagine this will start to become standard practice in future even for non-celebrities. Netflix recently released
&lt;a href=&quot;https://theankler.com/p/netflix-churchill-at-war-documentary-imagine-ai&quot;&gt;a Churchill documentary&lt;/a&gt; along these lines, with the following preamble making it clear the text has been authorised to the extent that’s possible:&lt;/p&gt;

&lt;p&gt;“With permission from the Churchill Estate, this series utilizes voice enhancement to present many of those written and reported words aloud. For the first time.”&lt;/p&gt;

&lt;h1 id=&quot;prediction-markets&quot;&gt;Prediction markets&lt;/h1&gt;

&lt;p&gt;I’ll make a prediction that prediction markets will rise in consciousness among the pundit and journalist class. This includes dedicated prediction markets such as PolyMarket, Kalshi, and Manifold, as well as conventional betting sites that are expanding their scope beyond sports to cover politics and current affairs.&lt;/p&gt;

&lt;p&gt;When it comes to politics, there’s a rich history of betting on US presidential elections dating back to the 1800s, with trading volumes at times &lt;a href=&quot;https://www.jstor.org/stable/3216894?utm_source=chatgpt.com&quot;&gt;exceeding stock and bond volumes&lt;/a&gt;. Before the polling industry became more sophisticated, price action on candidates was seen as a useful gauge during election campaigns. However, the activity was eventually swept up in anti-gambling legislation and a century later, activity is still very restricted in the US and many other jurisdictions.&lt;/p&gt;

&lt;p&gt;However, the notion persists that such markets can be a valuable indicator on politics, technology, or just about any other topic. The economy already benefits from cues provided by prices of stocks, bonds, currencies, and other instruments, allowing businesses and governments to tap into the wisdom of the crowd and plan accordingly. Thus, the thinking goes, why not also use the greed motive to inform businesses and governments about the odds of elections, product launches, or Nobel Prize winners? Compared to polls, prediction markets are a more direct way to estimate probabilities at any given time, and they are not susceptible to selection biases or respondents who are too shy to disclose their thoughts truthfully.&lt;/p&gt;

&lt;p&gt;This has been a start-and-stop industry for decades. There are not just concerns about gambling (which a cynic might associate with heavy funding from gambling lobbyists), but also worries about insider information and manipulation of results. In the extreme case of manipulation, detractors have pointed out the risk of &lt;a href=&quot;https://en.wikipedia.org/wiki/Assassination_market&quot;&gt;“assassination markets”&lt;/a&gt;, where tragic events are bet on and then brought into reality by an actor with a stake in the outcome.&lt;/p&gt;

&lt;p&gt;Before 2024, most conversations about prediction markets were limited to very niche communities, e.g. political data wonks like Nate Silver, “rationalist” writers, some economists. The recent US election saw conversations break out into broader media. As one example, the founder of PolyMarket appeared on All-In’s election night livestream, where he pointed out in real-time how much traditional media was lagging the market’s forecasts state by state. On social media, there was widespread speculation around one “Théo”, a French trader &lt;a href=&quot;https://www.wsj.com/finance/trump-whale-scored-85-million-windfall-on-election-7c2cd906&quot;&gt;who bet over $70 million on Trump&lt;/a&gt;. By way of contrast, it was frequently mentioned after the Brexit result that hedge fund traders had made fortunes shorting the pound, but no-one ever mentioned prediction markets in all this. The volumes would have been too minuscule for anyone to care.&lt;/p&gt;

&lt;p&gt;Why now? I believe the timing was due to the growth of those prediction market platforms in the past few years. That begs the question: why did those platforms grow all of a sudden, when anyone could have made a smartphone app to do this in 2010? I think the primary answer is crypto. Decentralised currency made it easy for some platforms to skirt regulations and allow trading with real money.&lt;/p&gt;

&lt;p&gt;While there will be ever-growing opposition to this trend, there’s a powerful viral element in speculation like this that should continue to see interest rise. Through social media and podcasts, the public is becoming aware that these markets can forecast events in ways that polls cannot. And it doesn’t hurt that they’re hearing about big winners like Théo.&lt;/p&gt;

&lt;h1 id=&quot;defence-tech&quot;&gt;Defence-Tech&lt;/h1&gt;

&lt;p&gt;Defence-tech is becoming popular with investors and entrepreneurs from within the mainstream tech industry. Workers and investors have often avoided defence due to moral objections, as well as a general perception that the industry is too dull and slow-moving compared to the familiar Just Ship It mentality.&lt;/p&gt;

&lt;p&gt;The objections reached a peak a few years ago, when (a) the threat of superintelligent AI seemed at least decades away, (b) tech employment was plentiful enough for sufficient portions of the industry to feel empowered to &lt;a href=&quot;https://www.business-humanrights.org/en/latest-news/the-business-of-war-google-employees-protest-work-for-the-pentagon/&quot;&gt;lobby their bosses against that line of work&lt;/a&gt;, and (c) there were fewer hot wars happening to bring about a sense of urgency.&lt;/p&gt;

&lt;p&gt;Two companies in particular have been working to combat the notion that defence-tech work has to be slow and boring: Palantir and Anduril. Both take a product-oriented approach to their work, going against the tradition of years-long waterfall projects built to spec and billed hourly, thus creating an incentive for contractors to drag their heels and creep the scope. Both have also benefitted from the recent AI boom as they were already anchored in that world before it was cool.&lt;/p&gt;

&lt;p&gt;Palantir was founded in 2003 and has built a suite of data analysis products for customers spanning from secretive intelligence and defence organisations to high-profile enterprises and government departments. By sharing common components between all of these customers, Palantir can offer defence highly competitive products and timelines for systems that would otherwise have been custom-built, even if a degree of customisation is required.&lt;/p&gt;

&lt;p&gt;Founded in 2017, Anduril represents an even newer and more radical approach than Palantir (where one of its co-founders had previously worked). The goal is to build vertically-integrated defence hardware in the style of a tech startup, taking an approach that is agile and embraces a software-first model that would not be out of place in makers of physical consumer products such as Apple, Tesla, or Peloton. Going a step further, Anduril has also developed an operating system, Lattice, with the intention of leading an entire ecosystem of software-focused companies that can plug in and communicate with each other. Remarkably for a project of this nature, the company recently &lt;a href=&quot;https://www.anduril.com/article/the-contours-of-war-are-changing-we-must-prepare-for-interoperability-at-the-edge/&quot;&gt;made public the SDK and docs&lt;/a&gt; just like any startup with an API would do.&lt;/p&gt;

&lt;h1 id=&quot;severance&quot;&gt;Severance&lt;/h1&gt;

&lt;p&gt;To round out the list, here’s a fun prediction and frankly a recommendation too. Severance season 2 is due in January. The first season rates alongside Black Mirror in the pantheon of sci-fi productions of the past decade. I recently re-watched it in anticipation of S2 and found it just as engaging the second time round. Seriously, go binge if you haven’t already. I’m expecting this will be a series that accumulates a fan base as time goes on, in the mould of Breaking Bad or White Lotus. The unique premise and the cast of characters in this universe lend themselves to an intriguing set of story arcs ahead.&lt;/p&gt;
</description>
        <pubDate>Sat, 21 Dec 2024 00:00:00 +0000</pubDate>
        <link>http://softwareas.com/2025-tech-trends</link>
        <guid isPermaLink="true">http://softwareas.com/2025-tech-trends</guid>
        
        
      </item>
    
      <item>
        <title>2024 Tech Year in Review: An Opinionated Guide to the Events that will have the Biggest Long-Term Impacts</title>
        <description>&lt;h1 id=&quot;podcasts-ascend-in-cultural-influence&quot;&gt;Podcasts ascend in cultural influence&lt;/h1&gt;

&lt;p&gt;Four-year cycles - whether elections, Olympics, or World Cup - have long been a revealing way to observe changes in technology. (Fun fact: four years represents more than a tripling in tech capability, if you take Moore’s Law as a generality, i.e. &lt;em&gt;log(4)/log(1.5) ~= 3.4&lt;/em&gt;.) In this election cycle, podcasts were a hot topic, with some dubbing it &lt;a href=&quot;https://www.poynter.org/reporting-editing/2024/podcast-role-presidential-election-results-trump-rogan/&quot;&gt;“The Podcast Election”&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Both candidates went on &lt;a href=&quot;https://time.com/7099104/presidential-podcast-media-tour-donald-trump-kamala-harris/?utm_source=chatgpt.com&quot;&gt;presidential podcast tours&lt;/a&gt; alongside traditional rallies. Trump, in particular, appeared on &lt;a href=&quot;https://www.forbes.com/sites/stephenpastis/2024/10/29/here-are-the-biggest-moments-from-trumps-bro-podcast-tour-ahead-of-joe-rogan-appearance/&quot;&gt;at least fourteen podcasts&lt;/a&gt;, including some of the most popular podcasters across all genres. His VP, JD Vance, appeared on even more podcasts than Trump.&lt;/p&gt;

&lt;p&gt;Just as telling was the pressure and intrigue around a Joe Rogan appearance. The Joe Rogan Experience is by no means a conventional political show, but it is easily the most popular podcast in terms of audience and cultural influence. After much drama, Trump appeared weeks before the election, quickly followed by Vance. That led to even more speculation around a Harris appearance that never happened, which some blamed for losing her the election (or at least took as an indicator that the winning candidate in future would need to be the kind of person who is comfortable appearing in this kind of space).&lt;/p&gt;

&lt;p&gt;The election also showed the limits of politicians generating their own podcasts. Their external appearances were more decisive, because those podcasts have spent years building up loyal followings. Anecdotally, politicians around the world have taken note and are building up their relationships with podcasters.&lt;/p&gt;

&lt;p&gt;In other podcasting news, Lex Fridman demonstrated the possibilities of AI translation when he interviewed Argentina’s president. Fridman recorded the interview with him speaking English and President Milei speaking Spanish; his podcast was then able to publish all-English and all-Spanish versions, both in their own voices.&lt;/p&gt;

&lt;p&gt;In coming years, we can expect to see all forms of media available in any language. This is an example of innovation that (a) boosts quality, since it’s possible to keep original voices and match face movements in video, while (b) smashing costs at the same time, since it’s completely automated. The implications are dramatic, as anyone with talent no longer needs to speak a major language in order to be discovered by the global community. This will lead to a further flattening of the world and a winner-takes-all dynamic across media. It may also help with language preservation, as it takes away some of the major motivations for people to abandon smaller languages in favour of a dominant second language.&lt;/p&gt;

&lt;p&gt;On the theme of podcasts and influence, there was also a new podcast designed to promote an activist investor campaign, which is a first as far as I know. The prominent activist investor Elliott launched the StrongerSouthwest podcast as part of its campaign against Southwest. It’s already gone, though, so I guess it was either a misfire or an overnight success?&lt;/p&gt;

&lt;p&gt;On a personal note, I predicted “2008 will be the Podcast Election” in ye olde &lt;a href=&quot;http://podca.st&quot;&gt;podcast FAQ&lt;/a&gt;. That probably says more about my techno-optimistic bias than anything meaningful about podcasts or politics.&lt;/p&gt;

&lt;h1 id=&quot;nuclear-energy-glows-up&quot;&gt;Nuclear energy glows up&lt;/h1&gt;

&lt;p&gt;Nuclear fission energy is enjoying a renaissance on multiple fronts.&lt;/p&gt;

&lt;p&gt;Institutions concerned with climate change have, to some degree, warmed to nuclear energy. The European Union, for example, finally included nuclear as part of its sustainable activities taxonomy (albeit with “transitional” status for now). Anecdotally, there appears to be a generation gap among environmentalists, with older members sticking to the long-held party lines and newer members seeing the compromise as valid. While renewable energy tech has improved immeasurably, there’s still a big gap between tomorrow’s potential and today’s immediate needs.&lt;/p&gt;

&lt;p&gt;Sanctions and sentiment have also caused many nations to become more open-minded about energy solutions, given that oil is entangled in the sanctions against Russia and drives the economies of other countries with frozen relations.&lt;/p&gt;

&lt;p&gt;Then there’s AI and the ever-growing needs of cloud providers. This was the marked change in 2024: Microsoft, Amazon, Google, and Meta all have big plans to fuel their data centres this way, which will validate the technology in the eyes of governments and businesses. These companies have driven entire revolutions when it comes to software, hardware, and network infrastructure, meaning we’re likely to see their brightest engineers breathe new life into nuclear, an industry that has stagnated in the west for decades.&lt;/p&gt;

&lt;h1 id=&quot;ai-speech-captures-attention&quot;&gt;AI speech captures attention&lt;/h1&gt;

&lt;p&gt;The biggest AI story of 2024 was equal parts drama and substance, namely The Her Incident. In May, OpenAI announced ChatGPT’s upgraded voice assistant with &lt;a href=&quot;https://www.youtube.com/watch?v=c2DFg53Zhvw&quot;&gt;an impressive set of demos&lt;/a&gt;. While ChatGPT already had a voice assistant for premium users, most people hadn’t noticed, because (a) they’re not paying (the source of many falsehoods about ChatGPT’s capabilities), and (b) it was not very good, particularly because it had a habit of jumping in with a pile of verbiage after the user made the slightest pause, making it the world’s most superintelligent interrupter. Even founder/CEO Sam Altman rubbished it in his &lt;a href=&quot;https://www.youtube.com/watch?v=nSM0xd8xHUM&quot;&gt;All-In interview&lt;/a&gt;, which, as it turned out, took place just a few days before the voice upgrade was announced.&lt;/p&gt;

&lt;p&gt;I’ll get the drama out of the way. The voice in the demos sounded suspiciously like Scarlett Johansson, the star of Her, the 2013 movie that envisioned the near-future reality of a lonely dude forming a romantic relationship with his intelligent voice assistant. It turned out she had been approached by OpenAI to license her voice for training, and had declined. She subsequently expressed dismay and threatened a lawsuit, and OpenAI pulled the Johansson-like voice option (“Sky”).&lt;/p&gt;

&lt;p&gt;As for the voice feature itself: remarkable. It’s getting tantalisingly close to speaking with a real human assistant. It’s fast. It gets straight to the point instead of waffling on like a long-form text-based chat session. It’s not perfect, but you can easily interrupt it and redirect if you feel it’s missed the point or started to ramble.&lt;/p&gt;

&lt;p&gt;While it might take a little while longer to match the conversation skills of an average human, one can’t lose sight of the fact that it already holds great advantages over any human: vast built-in knowledge on any subject, the ability to look things up online and make sense of them in a fraction of a second, and constant availability. Furthermore, many people just find voice easier than reading and typing; there’s a reason text-based social media like Twitter and Reddit never captured the masses like Instagram or TikTok. Even those who find comfort in text can still multitask using their voice, or use it while walking around or doing chores, just as people have long listened to music and podcasts.&lt;/p&gt;

&lt;p&gt;For these reasons, I believe we’re close to the tipping point envisioned in Her. The core functionality is already there; it’s just a matter of hardware integration to make communication seamless and always-available instead of having to pull out your phone and launch into a specific screen within a specific app.&lt;/p&gt;

&lt;h1 id=&quot;machines-learns-to-reason&quot;&gt;Machines learn to reason&lt;/h1&gt;

&lt;p&gt;Voice was far from the only OpenAI upgrade this year. OpenAI also launched o1, their first model to be explicitly based on &lt;a href=&quot;https://proceedings.neurips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html&quot;&gt;Chain Of Thought reasoning&lt;/a&gt; (CoT). Anthropic and Google likewise enhanced the reasoning capabilities of their models.&lt;/p&gt;

&lt;p&gt;To understand CoT, it’s important to be aware that a basic LLM works by producing one “token” (~ word) at a time, guided by what has been composed up to that point. This is append-only, which is to say there is no backtracking or explicit planning capability. It’s remarkable how well this has worked in practice, but it still leaves opportunity for advanced strategies that would be closer to the way humans reason.&lt;/p&gt;
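
&lt;p&gt;As an illustrative sketch only (a real model scores candidate tokens with a neural network, not a lookup table), the append-only loop can be shown in miniature:&lt;/p&gt;

```python
# Toy sketch of append-only generation: one token at a time, guided only by
# the context so far, with no backtracking. A lookup table stands in for the
# neural network that a real LLM uses to score candidate next tokens.
next_token = {"the": "cat", "cat": "sat", "sat": "down"}

def generate(prompt, max_new_tokens=5):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        candidate = next_token.get(tokens[-1])
        if candidate is None:
            break  # the only choices are append or stop
        tokens.append(candidate)
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```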

&lt;p&gt;In the simplest case, prompt engineering can be used, like appending an instruction such as “think about this problem, outline the steps involved first, and address it step by step”. This can encourage the LLM to break its response into sub-tasks and to make each sub-task more aware of what subsequent sub-tasks will need from it. Nevertheless, this is still a sequential activity.&lt;/p&gt;
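
&lt;p&gt;In code, this kind of prompt engineering is just string construction before any model call; the helper name and wording below are hypothetical:&lt;/p&gt;

```python
# Hypothetical helper: append a step-by-step instruction to a user question
# before it is sent off to an LLM. No real API is called here.
def with_step_by_step(question):
    instruction = (
        "Think about this problem, outline the steps involved first, "
        "and address it step by step."
    )
    return question + "\n\n" + instruction

prompt = with_step_by_step("What is the cheapest way to ship 40 boxes?")
print(prompt)
```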

&lt;p&gt;Chain-of-Thought goes a step further by &lt;em&gt;baking reasoning directly into the model&lt;/em&gt;. With o1, OpenAI refined its models by training them on examples of reasoning, where a complex task is broken into sub-tasks. Thus the LLM becomes inherently more capable at breaking &lt;em&gt;any&lt;/em&gt; complex task into sub-tasks.&lt;/p&gt;

&lt;p&gt;A side benefit of this “divide-and-conquer” logic is a degree of transparency, where users can get some insight into the LLM’s inner workings. Indeed, ChatGPT updated its user interface for o1 to explicitly show steps as they’re being worked on before any response is produced. During this preparation phase, it conceptually has the ability to cycle through steps, going back and forth, before it finally begins outputting.&lt;/p&gt;

&lt;p&gt;In some cases, the LLM might “think” for 30 seconds or more, while for simpler tasks, it can immediately output an answer just like earlier models did. A future enhancement that will speed things up is parallel processing, where the LLM determines in advance that certain sub-tasks can be computed independently of each other, then runs them in parallel. Since a single human cannot multitask in this way, this is another example of where LLMs will be able to surpass human capability.&lt;/p&gt;
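
&lt;p&gt;The scheduling idea can be sketched with ordinary concurrency primitives; solve() below is a placeholder standing in for whatever model call would handle one sub-task:&lt;/p&gt;

```python
# Hedged sketch of parallel sub-task execution: once a planner has decided
# that certain sub-tasks are independent, they can be dispatched concurrently
# rather than one after another.
from concurrent.futures import ThreadPoolExecutor

def solve(subtask):
    # Placeholder for a model call that handles one sub-task.
    return "answer to " + subtask

independent_subtasks = ["summarise sources", "check the arithmetic", "draft a reply"]

with ThreadPoolExecutor(max_workers=3) as pool:
    # map runs tasks concurrently but preserves input order in the results
    results = list(pool.map(solve, independent_subtasks))

print(results)
```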

&lt;h1 id=&quot;consumer-robotics-becoming-a-reality&quot;&gt;Consumer robotics becoming a reality&lt;/h1&gt;

&lt;p&gt;After centuries of mythology and science-fiction - and decades of actual research - artificial humanoids are now becoming a reality. LLMs have been the most obvious beneficiary of AI research recently, but robotics stands to benefit from many of the same underlying technologies at both a software and a hardware level. In the same way &lt;a href=&quot;https://blogs.nvidia.com/blog/first-gpu-gaming-ai/&quot;&gt;LLMs benefitted from years of spend from video game enthusiasts&lt;/a&gt;, the nascent field of humanoid robotics is benefitting indirectly from spend on LLMs, thanks to the value LLMs are already delivering to their customers today.&lt;/p&gt;

&lt;p&gt;China has long been investing heavily to automate manufacturing, thereby hoping to keep its lead even as the industry is disrupted by robotics, which could make it viable for some countries to promote a resurgence of local factories. Humanoid robotics is likely to form part of China’s strategy in this effort, and Chinese car maker BYD is also operating Walker robots on its assembly line.&lt;/p&gt;

&lt;p&gt;In the US, robotics startup Figure AI raised at a $2.6 billion valuation and announced Figure 02 in August. Figure robots &lt;a href=&quot;https://www.bmwgroup-werke.com/spartanburg/en/news/2024/successful-test-of-humanoid-robots.html&quot;&gt;worked on BMW’s assembly line&lt;/a&gt; as part of a trial this year.&lt;/p&gt;

&lt;p&gt;These industrial applications - where the robot’s environment is controlled and limited to repetitive tasks - should help to build confidence, revenue, and capability to expand into domestic applications over the next decade.&lt;/p&gt;

&lt;p&gt;Tesla is one company aiming to provide home robots and appears to be making good progress on its Optimus line. No-one would ever bet on Musk’s timelines, but for what it’s worth, the company claims it will be using 1,000 robots internally next year. Tesla is a recent entrant and may not yet have caught up, but it has among the deepest pockets, and Musk has a track record of taking ideas from zero to one. He also now finds himself with significant political power that will be helpful in attracting funding and navigating regulation.&lt;/p&gt;

&lt;h1 id=&quot;alphafold-enriches-the-foundation-of-bio-engineering&quot;&gt;AlphaFold enriches the foundation of bio-engineering&lt;/h1&gt;

&lt;p&gt;In 2024, DeepMind’s Demis Hassabis and John Jumper were awarded the Nobel Prize in Chemistry for their contribution to solving the “folding problem”. This is a decades-old effort to predict the shape a protein molecule will take, based solely on the sequence of amino acids that constitute its raw ingredients. Being able to make such predictions is far more than an intellectual exercise, as it unlocks the ability to develop therapies through software models, which is much faster and easier to automate than conventional lab-based techniques.&lt;/p&gt;

&lt;p&gt;As with LLMs, AlphaFold achieves this by training on verified pairings of inputs and outputs. In this case, the inputs are amino acids and the outputs are 3-dimensional shapes. As was mentioned with respect to robotics, this is another area where the technology is converging across domains, leading to economies of scale that make all of those domains more productive.&lt;/p&gt;

&lt;p&gt;While the Nobel was awarded for earlier work, DeepMind achieved another milestone this year in launching AlphaFold 3. This is the first major version to expand its scope beyond protein molecules, into predicting the shape of &lt;a href=&quot;https://www.sciencemediacentre.org/expert-reaction-to-the-latest-version-of-deepminds-alphafold&quot;&gt;DNA and other small molecules&lt;/a&gt;. It is also more “automated” insofar as it relies less on hand-coded knowledge of physics and chemistry, i.e., it is more capable of figuring these things out on its own just by pattern-matching. This mirrors the broader trend in AI away from a pile of “hard-coded if-then statements” towards “it just works” if you throw enough data and compute at the problem.&lt;/p&gt;

&lt;p&gt;There’s increasing interest in AI drug discovery, with one drug &lt;a href=&quot;https://insilico.com/blog/first_phase2&quot;&gt;already in Phase 2 testing&lt;/a&gt;, and one of the enabling technologies is software that models how the body will respond to such interventions. Hence the relevance of AlphaFold. Its applications also go beyond therapies to industrial design &lt;a href=&quot;https://www.forbes.com/sites/amyfeldman/2024/05/03/this-startups-ai-designs-enzymes-that-can-eat-plastic-waste/&quot;&gt;such as new forms of plastics&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;social-media-fragments&quot;&gt;Social media fragments&lt;/h1&gt;

&lt;p&gt;Social media continued to fragment in 2024. The latest wave of X departures came after the US election, in which its owner-CEO completed his transition from self-described political neutrality to full-throated support for one candidate. In my experience, the algorithm has become unbearable under Musk’s management, not necessarily because of any political bias, but because the content is heavily off-topic from my interests and because I’ve lost any meaningful connection to my contacts. Add potential bias to the mix and it’s easy to see why people continue to abandon the platform.&lt;/p&gt;

&lt;p&gt;Surprisingly, it was BlueSky that picked up most of the departing users rather than Meta’s Threads. Ever since 2022 - when Musk announced his takeover - there has been a never-ending cycle of communities moving from one place to another and back again. I wouldn’t confidently predict BlueSky will still be the big winner in 2025, but it would certainly be pleasing to see it reach critical mass, since there’s a world of potential in its open standards and in its ability for anyone to create a feed algorithm.&lt;/p&gt;

&lt;p&gt;The problem for all these platforms is fragmentation. Just as social media apps can rise quickly when they have a large user base, so can they collapse quickly as their user base dwindles. For this reason, it’s not surprising that apps are focused on curating the best content from across the whole network instead of worrying about the follower graph. It makes them more attractive to new users who don’t have any contacts present or who have lost all their contacts to another platform. Nevertheless, it also removes one of the most powerful aspects of these apps in the first place. As Mark Zuckerberg mentions in just about every interview, his company is all about social connections, which makes it ironic to see an app like Threads focus so much of users’ attention on the celebrities of Instagram instead of the people they follow.&lt;/p&gt;

&lt;p&gt;The biggest beneficiary may be a discussion app that never pretended to be about social connections. Reddit has always been about topics first, posts and comments second, user reputations a mere afterthought. After disappearing third-party apps in 2023, the site went public in 2024, making it the year’s highest-profile tech IPO. The stock has performed exceptionally even taking into account the bull market backdrop, with a price gain of over 200%.&lt;/p&gt;

&lt;p&gt;Reddit operates in a similar space to X and its clones, all of them geeky and fundamentally text-based platforms designed for discussions, debates, and rants rather than the shallow entertainment scrollfests of Facebook, Instagram, and TikTok. In a media landscape increasingly full of bots talking to other bots, Reddit’s topic-centric moderation model puts it in a good place, both as a user experience and as a source of training data to sell on to AI platforms.&lt;/p&gt;

&lt;h1 id=&quot;smart-glasses-focus-the-road-ahead-for-extended-reality&quot;&gt;Smart Glasses Focus the Road Ahead for Extended Reality&lt;/h1&gt;

&lt;p&gt;AI may have taken the limelight away from headsets in recent years, but the Company Now Known As Meta continued to dive into the metaverse with a range of announcements concerning Extended Reality (XR).&lt;/p&gt;

&lt;p&gt;In this space, augmented reality (AR) has taken priority over virtual reality (VR). It seems that manufacturers have concluded VR is “cool” for games and demos, but it’s too isolating, too binary, insofar as it requires the user to don the headset and enter a separate mode of reality. You’re either in VR or you’re not. That’s similar to the experience of sitting down to use a desktop PC. In contrast, AR is more like a smartphone, and smartphones overtook PCs because you can take them anywhere and use them for tasks that are as short or as long as you wish. They’re integrated into your life rather than splitting your time into IRL and Second Life.&lt;/p&gt;

&lt;p&gt;On Oculus and the Apple Vision Pro (another new product of 2024), the version of Augmented Reality on offer is more accurately called Mixed Reality (MR). Cameras mounted on the headset give you the illusion of interacting with the real world while you’re in fact entirely locked inside the headset. From personal experience, both devices are getting impressively close to reality, but they still suffer from the headset form factor. You’re either in it or you’re not, and few people will dare to step out in public with a $3000 machine strapped to their head.&lt;/p&gt;

&lt;p&gt;All of this makes Meta’s Orion Glasses the standout XR event of 2024. These glasses are very close to a conventional pair of specs, but deliver true augmented reality via waveguides. These work by projecting light directly onto your retina via the glass, rather than “lighting up pixels” on the glass the way a conventional monitor or VR headset would, and more or less how Google Glass worked. Waveguides promise to deliver a higher-quality display while making the real world look more like the real world.&lt;/p&gt;

&lt;p&gt;Orion is still at the early prototype stage, but reports from the few external reviewers are optimistic, albeit on a tight set of use cases. The Verge reported a game of Pong worked “surprisingly smoothly, and I noticed little to no lag in the game”. For an external Messenger call, “I could see and hear him well in the 2D window floating in front of me”. Meanwhile, Meta is pushing forward with its Ray-Ban partnership - smart glasses with camera, AI, and audio - which will gradually help it build up expertise in the many other necessary technologies outside of the key waveguide functionality. They appear to have avoided the stigma of Glass, and as I mentioned in my &lt;a href=&quot;https://softwareas.com/grading-my-2024-tech-predictions&quot;&gt;grading of 2024 predictions&lt;/a&gt;, they are going out of their way to force this issue early on by releasing a variant that is transparent and therefore shows the camera proudly to anyone who cares. Public acceptance is vital to taking such a device mainstream and eventually replacing the smartphone.&lt;/p&gt;

&lt;h1 id=&quot;silicon-valley---washington&quot;&gt;Silicon Valley 🤝  Washington&lt;/h1&gt;

&lt;p&gt;I mostly keep this blog politics-free, but I can’t avoid observing the relevance of the US election to the technology sector. The next president’s inner circle now includes some of the most prominent venture capitalists and entrepreneurs. The next vice-president is a former venture capitalist who ascended to politics with the help of a prominent investor and is steeped in Silicon Valley culture. All of this especially matters because the president himself has shown little interest in technology, meaning his VP and advisors will largely be running the agenda.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=EoG5EammWI4&quot;&gt;Dominic Cummings recently remarked&lt;/a&gt; that Silicon Valley for a long time enjoyed keeping its distance, not having to worry about a political class in Washington that was largely ignorant of technology. He noted that this was no longer feasible once technology became both a major influence on political power via social media and a matter of national security as with LLMs. He also argued the EU has stifled innovation through its privacy (GDPR) and AI regulation (AI Act), something he notes has been echoed even by EU-friendly actors such as President Macron.&lt;/p&gt;

&lt;p&gt;The tech folks in the White House are likely to block and repeal most efforts towards regulation, wanting to avoid the fate of the EU and stay competitive with China. While Vance has expressed some admiration for Lina Khan’s clamping down on monopolistic practices, I don’t think this will extend to the FTC’s stance on acquisitions, since there’s a view that VCs and founders are being denied an exit. Thus it seems likely the next four years will see a number of high-profile companies - private unicorns and some public small/mid caps - being acquired.&lt;/p&gt;

&lt;p&gt;There’s also concern about lagging military technology, alongside admiration for the more modern example set by DefenseTech startups such as Anduril, which takes a fast-moving, software-first approach and bills against specs rather than hourly labour, making its work arguably more in line with the needs of its clients in the defense industry. There are likely to be increased initiatives aiming to foster this industry.&lt;/p&gt;
</description>
        <pubDate>Thu, 12 Dec 2024 00:00:00 +0000</pubDate>
        <link>http://softwareas.com/2024-tech-year-in-review</link>
        <guid isPermaLink="true">http://softwareas.com/2024-tech-year-in-review</guid>
        
        
      </item>
    
      <item>
        <title>Housing Git projects inside an Obsidian Vault</title>
        <description>&lt;p&gt;Here is a new usage pattern I came up with recently. An Obsidian vault with folders like this, one per project:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;blog/ # jekyll blog hosted on github-pages
game/ # web game project
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Now what’s significant about the folders is they are each separate Git projects. So the structure in more detail looks like this:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;blog/
+--.git/
+--_drafts/
+--posts/
+--README.md
+--(etc)
game/
+--.git/
+--physics/
+--assets/
+--README.md
+--(etc)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;So each folder has its own regular .git/ sub-folder as well as the usual project files.&lt;/p&gt;
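&lt;p&gt;Setting this up is just a matter of initialising Git inside each project folder. Here’s a minimal sketch, with a hypothetical vault folder named &lt;code&gt;vault&lt;/code&gt; mirroring the layout above:&lt;/p&gt;

```shell
# Create a vault-like folder; names mirror the example layout above
# and are purely hypothetical.
mkdir -p vault/blog/_drafts vault/game/assets

# Each project folder becomes its own independent Git repository.
git init -q vault/blog
git init -q vault/game

# Commits in one repo never touch the other; Obsidian just sees folders.
echo "# Blog" > vault/blog/README.md
git -C vault/blog add README.md
git -C vault/blog -c user.name=Me -c user.email=me@example.com \
  commit -qm "Initial commit"
```

&lt;p&gt;From Obsidian’s point of view these are plain folders; Git only comes into play when you invoke it explicitly.&lt;/p&gt;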

&lt;p&gt;You could back this Obsidian vault with Obsidian Sync or any other cloud service. You could also keep it standalone if you want to. That’s all beside the point.&lt;/p&gt;

&lt;p&gt;Why would you do this? The pattern is useful if you want to edit files in Obsidian, with all of its wonderful UI and sync capabilities, but also need to back projects with Git. I find Obsidian especially powerful on mobile. Even a programming project needs documentation that can be composed on a phone when necessary, e.g. you might wish to add some spontaneous ideas to ROADMAP.md on a whim. There are several scenarios where it’s nice to back such a project with Git instead of - or in addition to - a cloud solution such as Obsidian Sync or iCloud. For example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;You want to trigger a worker to process your project upon each commit, e.g. to run automated tests, send an email, whatever. In my case, I rely on GitHub Actions to publish Jekyll blog updates to GitHub Pages. You could rig something up to do this every time a file changes, but it would be overkill … in some cases, you really want to make a deliberate, manual commit. Furthermore, you get much more fine-grained control, e.g. you can run the worker only if a commit occurs on a particular branch.&lt;/li&gt;
  &lt;li&gt;You want to track changes over time in a more structured way than any time the file changes, i.e. create an audit history with manual commits, comments, branches, tags. This gives you a more powerful audit trail and helps others - and future you - understand how certain decisions were made.&lt;/li&gt;
  &lt;li&gt;You are collaborating with other people via Git. They may not have even heard of Obsidian. Everyone gets to choose their own editor, but all are connected by a common Git project in the cloud.&lt;/li&gt;
&lt;/ul&gt;
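&lt;p&gt;As a local illustration of the first point - running a worker on each commit - a Git hook fires automatically at commit time. This is only a sketch with hypothetical names, and the logging action is a stand-in for real work such as tests, an email, or a deploy step:&lt;/p&gt;

```shell
# Hypothetical demo repo.
mkdir -p proj && git -C proj init -q

# post-commit runs after every commit, from the repo's top-level folder.
cat > proj/.git/hooks/post-commit <<'EOF'
#!/bin/sh
# Placeholder action: append to a log. Swap in tests, email, deploy, etc.
echo "built $(git rev-parse --short HEAD)" >> build.log
EOF
chmod +x proj/.git/hooks/post-commit

# Any commit now triggers the hook.
echo "hello" > proj/file.txt
git -C proj add file.txt
git -C proj -c user.name=Me -c user.email=me@example.com \
  commit -qm "Add file"
```

&lt;p&gt;Hosted workers like GitHub Actions follow the same commit-triggered idea, with the bonus that the worker runs in the cloud rather than on whatever device made the commit.&lt;/p&gt;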

&lt;p&gt;Why host several Git projects in the same vault instead of just one? Well, you could do just one, but I prefer to host several since Obsidian Sync plans strictly limit the number of vaults on offer. It’s also potentially useful to co-mingle projects in the same vault if they are related to each other.&lt;/p&gt;

&lt;p&gt;I previously tried using only Git as a cloud store for my Jekyll blog, but I found it tedious as I had to keep committing, pushing, and pulling to keep devices in sync as I worked on a specific post. This is especially painful on iPhone, where there’s not really a practical plugin to do this within Obsidian, so you have to keep resorting to a third-party Git app such as Working Copy (which is nevertheless a great tool, for what it’s worth). With the solution above, I can easily switch between devices and everything is “just synced” and stored safely in the cloud. Then, when I’m ready to actually publish an article, I do an official Git commit and push from any device.&lt;/p&gt;

&lt;p&gt;One downside is the risk of sync conflicts. It could be a problem if a Git command is issued while Obsidian Sync is running. However, I think this is a low risk, and a good precaution might be to wait a few seconds after saving before issuing any Git push command. I’m still testing this pattern though, so I’ll watch out for any issues.&lt;/p&gt;
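&lt;p&gt;To make that precaution concrete, here’s a sketch of the “wait, then publish” step. The repo and remote are simulated locally, all names are hypothetical, and the delay value is my own guess rather than a tested recommendation:&lt;/p&gt;

```shell
# Simulate a remote and a local working copy.
git init -q --bare remote.git
git init -q work
git -C work remote add origin ../remote.git
echo "draft" > work/post.md

sleep 2   # give Obsidian Sync a moment to settle before committing

# The "official" publish: commit and push from whichever device is handy.
git -C work add -A
git -C work -c user.name=Me -c user.email=me@example.com \
  commit -qm "Publish post"
git -C work branch -M main
git -C work push -q origin main
```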
</description>
        <pubDate>Thu, 05 Dec 2024 00:00:00 +0000</pubDate>
        <link>http://softwareas.com/housing-git-projects-inside-an-obsidian-vault</link>
        <guid isPermaLink="true">http://softwareas.com/housing-git-projects-inside-an-obsidian-vault</guid>
        
        
      </item>
    
      <item>
        <title>Grading My 2024 Tech Predictions</title>
        <description>&lt;p&gt;As I prepare this year’s review and predictions posts, here’s a grading of my tech predictions from a year ago.&lt;/p&gt;
&lt;h1 id=&quot;search-begins-its-long-decline-grade-a&quot;&gt;Search begins its long decline (Grade: A)&lt;/h1&gt;
&lt;p&gt;I think this prediction was mostly accurate. Firstly, there is now broad awareness that the entire web is filled with junk, with the phenomenon conveniently given its own label. Below are Google Trends results for “Dead Internet Theory” (yes, I get the irony of using Google data). Awareness rose sharply, and while we can’t really be sure how many bots are out there, anecdotes abound from both social media and the broader web at large.
&lt;img src=&quot;https://i.imgur.com/34ZmgOK.png&quot; alt=&quot;&quot; /&gt;
Google and the dead Internet theory: https://www.wsj.com/tech/googling-is-for-old-people-thats-a-problem-for-google-5188a6ed
As for search itself, Perplexity has attracted continuing interest from users and raised no fewer than four successive rounds in 2024, reportedly taking it to a $9B Series E. OpenAI launched SearchGPT and is steadily orienting the core ChatGPT product towards search. Here’s a screenshot of a session I began recently on regular ChatGPT - look familiar?
&lt;img src=&quot;https://i.imgur.com/mybnnb4.png&quot; alt=&quot;&quot; /&gt;
ChatGPT had incredible early adoption, but many of those people generated a funny poem and went back to Google. It’s hard to see students and knowledge workers not returning to a ChatGPT that has the same familiar interface as search, but is vastly more capable and far less prone to hallucination. It will be a slow adoption to be sure, as habits are hard to break, which is why I predicted 2024 to be just the start of this trend.
A counterpoint is that Google continues to see search revenue increase, but this is just as likely to be part of the general improvement in the economy, which advertising is highly sensitive to (Meta’s ad business has also thrived). Even Google has had to bite the bullet and introduce its own AI results, which only furthers the decline of traditional search.&lt;/p&gt;
&lt;h1 id=&quot;industrial-ai-fud-campaigns-grade-c&quot;&gt;Industrial AI FUD Campaigns (Grade: C)&lt;/h1&gt;
&lt;p&gt;Anti-AI campaigns didn’t feature as much as I expected on the back of the 2023 Hollywood writers strike. The most notable set of events were the &lt;a href=&quot;https://en.wikipedia.org/wiki/PauseAI&quot;&gt;Pause AI&lt;/a&gt; protests in multiple locations, but there wasn’t much of a coordinated effort from industry groups. Nor was AI even a consideration in the US election.
This may be in part because of a fairly robust economy or a lag in adoption, but regardless AI stayed out of any major crosshairs in 2024.&lt;/p&gt;
&lt;h1 id=&quot;ipos-are-back-grade-b&quot;&gt;IPOs are back (Grade: B)&lt;/h1&gt;
&lt;p&gt;There were &lt;a href=&quot;https://stockanalysis.com/ipos/2023/&quot;&gt;154 US IPOs&lt;/a&gt; in 2023, and 2024 is set for approximately 212 IPOs, an increase of 38%. It wasn’t a resounding bubble year, with the highest-profile IPO being Reddit while the elephants in the room sat it out. Still, it was a marked increase and a turning of the tide that looks set to continue. &lt;em&gt;The 2024 extrapolation is based on 204 to date; 7 more occurred after today’s date in 2023, so using the same ratio, we get a further 8 this year.&lt;/em&gt;&lt;/p&gt;
&lt;h1 id=&quot;calls-for-tiktok-spinoff-grade-b&quot;&gt;Calls for TikTok Spinoff (Grade: B)&lt;/h1&gt;
&lt;p&gt;Congress indeed passed the &lt;a href=&quot;https://en.wikipedia.org/wiki/Protecting_Americans_from_Foreign_Adversary_Controlled_Applications_Act&quot;&gt;Protecting Americans from Foreign Adversary Controlled Applications Act&lt;/a&gt; in April, which President Biden then signed. The bill explicitly called out TikTok and would effectively require ByteDance to divest it. However, ByteDance/TikTok responded by bringing a first-amendment case against the US Government. So a spinoff didn’t eventuate, not in 2024 anyway.&lt;/p&gt;
&lt;h1 id=&quot;video-bots-grade-c&quot;&gt;Video bots (Grade: C)&lt;/h1&gt;
&lt;p&gt;As I mentioned in this prediction, this one is kind of hard to verify. What does appear to have strong evidence is there’s been a major trend towards AI influencers using a cocktail of text, images, and videos … disclosed or otherwise, deepfake or purely synthetic. On the other hand, I don’t see a proliferation of YouTube videos, like news or documentaries, presented by synthetic visual humans (many use AI voices though). Nor have I noticed any major trend in deepfake video scams (also limited to audio for the most part, apparently). So this is a mixed bag at best.&lt;/p&gt;
&lt;h1 id=&quot;normalisation-of-smart-glasses-grade-a&quot;&gt;Normalisation of smart glasses (Grade: A)&lt;/h1&gt;
&lt;p&gt;Meta glasses are becoming fairly normal now. You don’t see anything like the wrath Google incurred back in the day with Glass. A good demonstration of the positive sentiment is the boldness with which Meta announced &lt;a href=&quot;https://www.zdnet.com/article/i-tested-metas-transparent-ray-ban-smart-glasses-and-theyre-a-near-perfect-accessory-for-me/&quot;&gt;a clear model of their Ray-Ban glasses&lt;/a&gt;. They’re going out of their way to poke the bear, making the camera blatantly obvious, and no-one’s complaining. The tech media response to Meta’s XR announcements was overwhelmingly positive, and there’s a genuine sense of anticipation for their Orion project, which has taken some of the wind out of Apple’s Vision Pro by promising to deliver AR through actual glasses instead of a headset.&lt;/p&gt;
</description>
        <pubDate>Tue, 03 Dec 2024 00:00:00 +0000</pubDate>
        <link>http://softwareas.com/grading-my-2024-tech-predictions</link>
        <guid isPermaLink="true">http://softwareas.com/grading-my-2024-tech-predictions</guid>
        
        
      </item>
    
      <item>
        <title>2024 Tech Predictions</title>
        <description>&lt;h1 id=&quot;search-begins-its-long-decline&quot;&gt;Search begins its long decline&lt;/h1&gt;
&lt;p&gt;AI chat is, quite simply, a superior experience to search in many cases. You get an answer right away instead of trawling through a dozen spammy search results, each time threading the needle through ads, newsletter prompts, and cookie warnings.
In 2024, the gap will widen as LLMs keep improving while search quality deteriorates in the wake of an abundance of AI-generated content. This is a vicious cycle: the quality of high-ranking search destination sites will continue to decrease as their traffic declines.
At this stage, most people are still using search and few people are paying for premium LLM access. Therefore, the trend will be slow at first, but I expect it will start to become an agreed long-term trend among tech followers and investors in 2024.&lt;/p&gt;
&lt;h1 id=&quot;industrial-ai-fud-campaigns&quot;&gt;Industrial AI FUD Campaigns&lt;/h1&gt;
&lt;p&gt;2024 is the year when many professionals will be exposed to generative AI as it will begin to be shipped as a production feature in mainstream tools such as Microsoft Office, Google Workspace, Adobe Creative Cloud, as well as all the AI-native startups. Professionals will begin to notice clients getting advice from LLMs, such as doctors hearing from patients who shift from “Doctor Google” to “Doctor GPT”.
This all starts to feel like quite a threat to entrenched interests.
We’ve seen pushback from the designer community since 2022, often with bizarre arguments about what constitutes art mixed in with naive opinions about copyright law. We then saw the 2023 Hollywood strike, the first major campaign to be entangled with AI. The spring is coiled for protected professions such as doctors, lawyers, and teachers to ramp up their opposition.
Expect to hear horror stories about people who took the wrong medicine, lost their home, failed their exam. Warnings about hallucinations and misalignment. There are elements of truth here, but you can rely on publicity campaigns to dwell on the negatives and ignore the dramatic improvements that can be made.
As with other revolutionary technologies, FUD campaigns can add temporary friction for an unlucky subset of founders and investors, but they won’t put any meaningful brakes on AI progress.&lt;/p&gt;
&lt;h1 id=&quot;ipos-are-back&quot;&gt;IPOs are back&lt;/h1&gt;
&lt;p&gt;Companies get to choose the timing of their IPOs and, after a dearth of IPOs, 2024 is shaping up as a good choice.
Since the bear market began two years ago, there’s been a lot of money sitting on the sidelines due to nervous investors who would rather benefit from bonds, where they can yield north of 5% with negligible risk. There’s now an expectation, based on Chairman Powell’s commentary and the fact it’s an election year, that rates will gradually be lowered again, and if that proves to be the case, said funds will be poured back into the markets. (They already have ploughed in on expectation, leading to all-time highs, but a bull market, fuelled by AI promise and growth companies maturing, has potential to rise a lot further. Especially if geopolitical conflicts cool.)
A strong bull market is the reason there were so many tech IPOs in the late ’90s and the early pandemic ’20s. Companies who IPO’d at those irrational levels built themselves a solid war chest to survive the recent bear market.
There are mature companies like Stripe and SpaceX, who chose not to take advantage of that climate, as well as companies like Reddit who have leaned down and ramped up profits at a time when they can be more ruthless towards their users and employees. Their metrics will look good for IPO and investors will be hungry for it.&lt;/p&gt;
&lt;h1 id=&quot;calls-for-tiktok-spinoff&quot;&gt;Calls for TikTok Spinoff&lt;/h1&gt;
&lt;p&gt;TikTok’s algorithms have been controversial and there have been calls to ban the app in the US and other western countries. However, TikTok also has many adoring users, some of whom vote, and investors with a strong interest in keeping the giant asset alive and well. It also appears to be well-positioned to take advantage of AI innovation, with its algo-first approach lending itself to virtual characters, AI-enhanced posts, and an overall trajectory towards a total “audience of one” feed.
Instead of banning TikTok, or trying hopelessly to sever its operations geographically, a better solution is simply to spin off a “western” (essentially ex-China) version. I’m not really sure if calls for this will happen this year, as there’s no major catalyst, so it’s a dark horse bet. If there’s a case for it happening, it will be the aforementioned hype for tech IPOs combined with concerns over the US election and the many other elections happening around the world.
&lt;h1 id=&quot;video-bots&quot;&gt;Video bots&lt;/h1&gt;
&lt;p&gt;It’s already possible to generate realistic synthetic video with basic tooling. A social media bot can work 24/7 and A/B test content that is as compelling as possible. I believe we’ll see a huge rise in these kinds of accounts.
This one is hard to verify short term because it’s much more effective for bots to masquerade as a real person; while we’ve seen “AI newsreaders” and so on for decades, they never really take off. At least until now, people gravitate towards accounts that present as real people. Therefore, I think the prediction would need to be verified indirectly, such as a marked rise in crypto scams arising from such accounts or an increase in agencies offering to build virtual person brands, ideally with case studies.&lt;/p&gt;
&lt;h1 id=&quot;normalisation-of-smart-glasses&quot;&gt;Normalisation of smart glasses&lt;/h1&gt;
&lt;p&gt;A decade on from Google Glass, it seems smart glasses are going to be acceptable now. There’s a degree of inevitability that may dampen anyone’s resistance. The Glass lesson has been learned by Meta, who’s partnered with Ray-Ban to produce glasses that look pretty much like normal ones. The cyborg dream has been reined in for now, with audio the only sense being augmented. Audio AR turns out to be pretty great when there’s LLM capability combined with always-available vision.
Most people won’t even realise someone is wearing smart glasses - out of sight, out of mind, and there will be little pushback. The video they produce could only be dreamed of in the early 2010s, enough benefit to offset whatever stigma remains for many potential users.&lt;/p&gt;
</description>
        <pubDate>Thu, 25 Jan 2024 00:00:00 +0000</pubDate>
        <link>http://softwareas.com/2024-tech-predictions</link>
        <guid isPermaLink="true">http://softwareas.com/2024-tech-predictions</guid>
        
        
      </item>
    
      <item>
        <title>2023: Year in review</title>
        <description>&lt;p&gt;Happy New Year, dear readers. These were the tech stories that interested me most in 2023, along with my commentary.&lt;/p&gt;
&lt;h1 id=&quot;society-scrambled-to-deal-with-the-unexpected-arrival-of-alien-intelligence&quot;&gt;Society scrambled to deal with the unexpected arrival of alien intelligence&lt;/h1&gt;

&lt;p&gt;For all their sophistication, large language models are still poorly understood, with researchers having to rely on trial-and-error and human assessments to make any sort of meaningful prediction about what a given model is capable of. This is one reason why ChatGPT caught everyone by surprise with its public release in late 2022, despite the underlying GPT-3 model having been public for some months. The immense implications only became clear when a hundred million people put it through its paces.&lt;/p&gt;

&lt;p&gt;By the start of 2023, it was clear the world would be changing in multiple dimensions. School essays were one of the obvious applications. GPT-4 already receives high grades in written university-level exams and is capable of writing at any level, even introducing deliberate mistakes if instructed. It’s a fantasy to suggest any system will ever be able to reliably detect AI writing. Therefore, schools are now in the position of rethinking everything about how projects are assigned and evaluated. It’s easy to go to the other extreme and say “we should embrace the tools as students will have to work with them for the rest of their lives”, but if you hadn’t even considered this technology a year ago, how could you even begin to imagine what today’s students will be working with a decade from now? It certainly won’t be anything as simple as GPT-4.&lt;/p&gt;

&lt;p&gt;As with schools, so go blogging and journalism. Several publications, such as &lt;a href=&quot;https://www.pbs.org/newshour/economy/sports-illustrated-found-publishing-ai-generated-stories-photos-and-authors&quot;&gt;Sports Illustrated&lt;/a&gt;, were sprung using AI journalists, while others like &lt;a href=&quot;https://www.axios.com/2023/04/13/insiders-newsroom-will-start-experimenting-with-ai&quot;&gt;Insider&lt;/a&gt; were more honest about this new direction. Meanwhile, OpenAI struck a deal with &lt;a href=&quot;https://www.wired.com/story/openai-axel-springer-news-licensing-deal-whats-in-it-for-writers/&quot;&gt;Axel-Springer&lt;/a&gt; (who owns Insider among others) to integrate their content. Concern about fake articles may prove to be futile since it’s not even obvious how much people will want to consume articles at all. If OpenAI can funnel high-integrity news into its assistants, the assistant can write or speak about the content in a manner that’s completely customized to the user.&lt;/p&gt;

&lt;p&gt;It’s not just LLMs, but the related AI technologies that are able to generate out-of-this-world quality images in a matter of seconds. These are having profound impacts on the way people work and a key cause of the Hollywood strikes that occurred in 2023, a precursor of tension that’s about to come across the whole economy. Lawyers will be busy, no matter how many AI tools there are to scan cases and make their arguments.&lt;/p&gt;
&lt;h1 id=&quot;tech-world-bifurcated-into-accel-versus-decel&quot;&gt;Tech world bifurcated into accel versus decel&lt;/h1&gt;

&lt;p&gt;Doomsday stories are nothing new to humanity and there’s a long backlog of concerns over AI taking out humanity, but it has suddenly become a major concern among practitioners. The sudden emergence of technology that can - for all intents and purposes - pass the Turing test has raised concerns about superintelligence arriving before long. Not only will the usual laws of exponential growth apply, but now there is a theoretically unlimited quantity of human-like intelligences supporting the effort. Given all the frailties of human cognitive capability and society’s structures, the prospect of taming such AI has been likened to a medieval army battling against a modern drone army.&lt;/p&gt;

&lt;p&gt;Adding to AI’s capability - but also to the concern - are the great strides made in open-source AI over the past year. Meta officially released its Llama model as open-source, free for experimentation by all but its largest competitors. Many hobbyists and startups figured out ways to get the best bang for their buck running on local machines or self-hosted servers, and ways to train on readily available data or even learn from interacting with existing commercial models.&lt;/p&gt;

&lt;p&gt;Open-source makes the technology more widespread and also allows for the possibility of removing the AI’s values, which are, in theory, aligned with the interests of humanity. Furthermore, it can be run on one’s own machine, removing the ability of platforms and authorities to monitor for concerning usage. Advocates will counter that centralised control of superintelligent power will lead to tyranny - for which many precedents exist. Even more likely, they argue, we’ll lose control altogether. It would be the equivalent of an ant creating humanity to make its life easier. Being the creator doesn’t confer unlimited control rights.&lt;/p&gt;

&lt;p&gt;All of this has heightened focus from politicians. Many in government lament ignoring the need to regulate social media in the 2010s and want to make amends with AI regulation in the 2020s. In a few short months, government awareness went from &lt;a href=&quot;https://www.independent.co.uk/news/world/americas/us-politics/karine-jeanpierre-peter-doocy-ai-b2312168.html&quot;&gt;the president’s spokesperson ridiculing AI fears&lt;/a&gt; to &lt;a href=&quot;https://oversight.house.gov/release/hearing-wrap-up-federal-government-use-of-artificial-intelligence-poses-promise-peril/&quot;&gt;congressional hearings&lt;/a&gt; to &lt;a href=&quot;https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/&quot;&gt;an executive order on safe AI&lt;/a&gt;.&lt;/p&gt;
&lt;h1 id=&quot;web-closed-after-33-years-of-public-service&quot;&gt;Web closed after 33 years of public service&lt;/h1&gt;

&lt;p&gt;The web began as an open platform - open-source browsers exchanging public content over open protocols. As search grew to prominence, more attention was paid to the robots that would read content programmatically, leading to a universe of metadata. This philosophy peaked in the Web 2.0 era, when companies took a further step and exposed their services as HTTP-powered APIs, enabling them to be accessed programmatically.&lt;/p&gt;

&lt;p&gt;Around 2010, the tide began to turn. Newer companies like Facebook and Uber were more apprehensive, releasing limited APIs if at all, and even shuttering them. Those who wanted access to services were now forced to scrape content, simulating a human using a browser. Websites responded with CAPTCHA. It became a cat and mouse game.&lt;/p&gt;

&lt;p&gt;Several things changed in 2023 that made sites lock down even harder. First, interest rates continued to rise, making the economy even more unpleasant for the tech industry than the previous year (unless you append AI to your name). Companies like Reddit and Twitter had allowed third-party apps to siphon away much of the value they created, and wanted it to stop. With Reddit eyeing an IPO and Twitter under new austerity policies, they simply turned off the drip at short notice and obsoleted dozens of apps, to the disappointment of many millions of users.&lt;/p&gt;

&lt;p&gt;Second, proprietary data became the new oil, as it can be used to train large language models. With programmers characteristically being early adopters, it was StackOverflow that immediately felt the effects of ChatGPT, Copilot and similar tools that could solve programming questions more effectively. Traffic to the 15-year-old site &lt;a href=&quot;https://www.reddit.com/r/ChatGPT/comments/15ju114/chatgpt_is_putting_stack_overflow_out_of_business/&quot;&gt;dropped by 50% in the eight months after ChatGPT launched&lt;/a&gt;. As well as banning AI-generated answers, the company blocked ChatGPT from scraping. Preventing AI bots was also the official reason given by the aforementioned Reddit and Twitter.&lt;/p&gt;

&lt;p&gt;Indeed, the whole model of search and browsing has begun its long decline. Why click through dozens of blue links, wading past each site’s unique set of cookie disclaimers and popup-flyover-slideunder advertising, when you can just talk to your AI assistant about it? This not only means the AI assistant will need training - see the previous point - but in many cases, it will also need to make real-time queries of those websites, given that the training data lags by months and may be incomplete. Again, this means AI is eroding the traditional open model of the web.&lt;/p&gt;

&lt;p&gt;Publishers will want to be compensated if users are no longer directly monetizable, so they will continue to put up barriers to browsing. And since CAPTCHA can no longer be expected to work in an era where machines are rapidly outsmarting humans, the only viable solution will be authenticating on every site.&lt;/p&gt;

&lt;p&gt;A silver lining here is that this probably leads to a golden age for API monetization, so we may come full circle, with sites charging for their data. Users simply aren’t going to visit sites like they used to, and they aren’t going to use every company’s single-serving bot. The primary use case will be aggregators that pull together thousands of sources into a coherent experience mediated by AI. That’s why OpenAI has already partnered with a news publisher. Conventional browser traffic, along with search, will go away.&lt;/p&gt;

&lt;p&gt;A final downside for the web is that search engines may become overwhelmed with AI-generated content. Google has been dealing with this problem for many years, but that’s no great comfort, since Google search has - anecdotally, at least - gotten worse for many years. If the proliferation of AI content degrades search even further, it will create a positive feedback loop for AI assistants: trained on higher-quality data, because they can pay directly for it, they may replace search altogether.&lt;/p&gt;
&lt;h1 id=&quot;enhanced-reality-overtook-virtual-reality-for-now&quot;&gt;Enhanced reality overtook virtual reality, for now&lt;/h1&gt;

&lt;p&gt;Apple’s Vision Pro was finally unveiled, in demo form at least, mimicking Apple’s uncharacteristically early announcement of the original iPhone. That’s a wise move on Apple’s part, since it will need an abnormally high amount of developer input, and manufacturing challenges mean the device won’t see Apple-scale adoption for several years, no matter how much demand is present.&lt;/p&gt;

&lt;p&gt;Vision Pro is fundamentally a device for single-player use at home. Similar to the iPad bootstrapping itself by supporting iPhone apps, Apple can leverage the existing universe of iOS and macOS apps to launch Vision Pro with thousands of use cases supported. Judging from early reviews, the display is so ridiculously good that it genuinely achieves the dream of a portable multi-screen setup, albeit a noticeably heavy one (and perhaps with a soft bias from those journalists lucky enough to try it off the bat). No doubt that’s textbook skeuomorphic design; it’s only a stepping stone towards applications designed directly for the medium. However, it’s a powerful enough v1 for Apple’s affluent market base to buy in and create a giant sucking force of demand to attract the best developers in coming years.&lt;/p&gt;

&lt;p&gt;(I had &lt;a href=&quot;https://twitter.com/mahemoff/status/1600063831178608640&quot;&gt;predicted&lt;/a&gt; the device would mostly be about in-home use, which turned out to be correct, but the keynote also included footage of airplane and hotel usage, which shows Apple wanting to emphasise the portability benefit.)&lt;/p&gt;

&lt;p&gt;Meta refreshed its &lt;a href=&quot;https://about.fb.com/news/2023/09/new-ray-ban-meta-smart-glasses/&quot;&gt;Ray-Ban smart glasses&lt;/a&gt; and they appear to be gaining traction. The obvious comparison, given their camera mode, is to Google Glass a decade prior, but society is more ready for cranium-mounted streaming nowadays, and companies are more careful about how they design and message these devices. The glasses highlight the importance of audio for augmented reality. The technology is not yet practical for augmented vision as you move around the world, but recent AI developments endow audio interactions with superpowers. Being able to look at something and talk to your assistant about it, or just having Her-like conversations as you move through the world, is imminent.&lt;/p&gt;

&lt;p&gt;Apple and Meta’s products both signal a shift from virtual reality to augmented reality. Or what we might call “enhanced reality” (ER) if we’re being pedantic, since Vision Pro uses cameras to show your surroundings. So far, VR has mostly been a niche gaming market. Even in a post-pandemic world of remote work, there’s been little interest in virtual meetings or other forms of productivity usage on Oculus et al.&lt;/p&gt;

&lt;p&gt;It’s likely Apple and Meta will converge on their vision, recapitulating Google Glass’s vision of always-on glasses that overlay imagery onto the wearer’s field of vision, as well as conducting conversations like a squad of guardian angels riding on the wearer’s shoulders. The direction for the next few years is therefore away from VR, while also working towards VR for the long term. These devices will be VR-ready (Vision Pro already is) and will pave the way for a gradual rise in VR applications. Killer apps will surely emerge along the way, especially social ones that create a viral loop of adoption, and eventually VR will achieve critical mass along with ER.&lt;/p&gt;
&lt;h1 id=&quot;lk-99-and-nuclear-fusion-reminded-us-to-solve-hard-problems&quot;&gt;LK-99 and nuclear fusion reminded us to solve hard problems&lt;/h1&gt;

&lt;p&gt;The LK-99 story lit up the internet as quickly as it died down, its half-life proving to be as short as [the Barbenheimer phenomenon]. One minute we were promised superconductive properties that would short-circuit the economy by decades, the next minute it was a nothingburger that failed to replicate across many frenzied attempts.&lt;/p&gt;

&lt;p&gt;That said, it was a timely reminder, as the growth-sector economy begins to rebound, that tech is about more than pumping out the Nth addictive game, payment processor, or SaaS clone. There are hard problems to be solved in fields such as energy, climate, computing hardware, space, and robotics.&lt;/p&gt;

&lt;p&gt;All of the above made considerable progress in 2023, especially energy. Nuclear fusion researchers were able to generate [a net energy increase]. While that’s far from being achieved in production, several startups are actively pursuing the goal of commercial fusion by the 2030s, and the US government contributed a record amount to R&amp;amp;D: $1.4 billion.&lt;/p&gt;

&lt;p&gt;The saddening geopolitical conflicts of 2022-2023 put more immediate pressure on energy production. A (dim) silver lining of these crises has been a more sensible attitude towards nuclear fission, which has long been bedeviled by FUD campaigns. Adding to the argument against it has been the immense scale of reactor projects - leading to boondoggles - but this has been alleviated by the emergence of modular reactors that can deliver value faster, lower the risks, and help societies build up long-term skills to sustain the industry.&lt;/p&gt;
&lt;h1 id=&quot;social-media-fragmented-then-merged&quot;&gt;Social media fragmented, then merged&lt;/h1&gt;

&lt;p&gt;Elon Musk continued to play the main character in 2023, with his public persona now firmly tethered to the platform he reluctantly acquired in late 2022. His tenure at X, known as Twitter until he rebranded it this year, has been marked by intermittent waves of exodus, each triggered by a specific event such as an outage or an outrage.&lt;/p&gt;

&lt;p&gt;I felt the departure of geopolitical strategist Peter Zeihan &lt;a href=&quot;https://www.youtube.com/watch?v=0kwtVbM5o-o&quot;&gt;epitomized the situation well&lt;/a&gt;, simply because he’s not known as a bleeding-heart liberal, Musk hater, or open-source enthusiast, those being some of the main demographics who left the platform early on. Zeihan fundamentally had practical concerns with the utility he was getting from the app. The algorithm broke. Instead of showing content relevant to his interests, it was now overwhelmed with toxic messages and crypto shilling from the anonymous accounts who had paid $7.99 a month to amplify their reach. Furthermore, despite being an authority figure with a huge following, he was now unable to reach out to other authority figures for information. They, too, had presumably left the platform or been bombarded by noise.&lt;/p&gt;

&lt;p&gt;In the early days of X 2.0, there was no clear place to go. Mastodon was the main choice, but it’s a terrible user experience with a community that’s largely hostile to algorithmic recommendations. Expecting regular users to curate their own feed and find enough value from daily updates is a recipe for disaster. Just ask Google Plus how micromanaging circles went. Bluesky then gained some interest and was much more Twitter-like, offering an algorithm and hiding its distributed nature behind a simple onboarding experience. However, it stayed in closed beta far too long and didn’t rise to critical mass. Then came Threads.&lt;/p&gt;

&lt;p&gt;Meta launched Threads in mid-2023. Building on Instagram probably helped them get it up and running quickly, but more importantly it removed much of the friction for new users. They cleverly allowed users to follow Instagram contacts before those contacts had even signed up for Threads. They relied heavily on the algorithm to show relevant results, solving the cold-start problem more elegantly than the other X alternatives, and the design was clean, intuitive and - for now - ad-free. Of course, all of this came from the unfair advantage of being one of the biggest companies in the world, but it worked regardless. And while the FTC continued to come down hard on acquisitions in 2023, it has said nothing about companies subsidising new product development to outrun the competition, so Meta is acting well within the rules here.&lt;/p&gt;

&lt;p&gt;Threads quickly grew to tens of millions of users, far eclipsing the alternatives combined, and has become the de-facto X alternative. Still, triumphant claims of X’s demise are greatly exaggerated: X remains the place where real-time conversations happen and still hosts the active movers and shakers in many verticals, as the November &lt;a href=&quot;https://www.latimes.com/opinion/story/2023-11-29/openai-sam-altman-firing-chatgpt-artificial-intelligence&quot;&gt;OpenAI drama&lt;/a&gt; demonstrated, with dozens of employees weighing in on X and crickets from them on Threads. However, Threads has firmly established itself as the obvious alternative and continues to see an inflow of users every time X or Musk misfires.&lt;/p&gt;
&lt;h1 id=&quot;predictions-incoming&quot;&gt;Predictions incoming&lt;/h1&gt;

&lt;p&gt;So that’s a roundup of tech stories that interested me in 2023. As you’ve seen from my recent &lt;a href=&quot;https://softwareas.com/jekyll-github-codespaces&quot;&gt;Jekyll - CodeSpaces - Github Pages post&lt;/a&gt;, I’ve been working to make blogging faster and more productive around here, and I now have an Obsidian publishing rig set up as part of that effort. Stay tuned for a 2024 predictions post shortly. Wishing you and yours a happy new year and a magic 2024.&lt;/p&gt;
</description>
        <pubDate>Tue, 02 Jan 2024 00:00:00 +0000</pubDate>
        <link>http://softwareas.com/2023-year-in-review</link>
        <guid isPermaLink="true">http://softwareas.com/2023-year-in-review</guid>
        
        
      </item>
    
      <item>
        <title>Jekyll on Github Codespaces</title>
        <description>&lt;p&gt;I recently needed to write a blog post. Being a developer, this naturally amounted to spendinig half a day tinkering with the blog infrastructure. In lieu of the post I never wrote, here is a little overview of the blog infrastructure, which is a Jekyll instance, published to Github pages, running on Github Codespaces.&lt;/p&gt;

&lt;p&gt;As a quick background, I migrated the blog from WordPress a few years ago as I wanted a simpler solution with pre-built pages. No more dealing with MySQL, malware, or comment spam. Github Pages has built-in Jekyll support, so it’s pretty easy to just compose posts in markdown and have it automatically compile them. It’s run well like this for a few years. Indeed, you don’t really need any development environment at all - you could just create files in Github’s web interface - or on your own PC connected to the Github repo - and Github will automatically publish them. This is all orchestrated using the Pages tab in the repo’s settings - there’s no need for custom workers.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://i.imgur.com/QZ0sp4w.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;So why have a dev environment for your Jekyll blog? Mainly so you can preview changes as you write blog posts or update the structure. I’m using a few Jekyll plugins and want to tweak them at times. Moreover, I want to preview blog posts before publication.&lt;/p&gt;

&lt;p&gt;As I’m mainly using a Chromebook for dev nowadays, I wanted this environment in Codespaces. Furthermore, I wanted to automate the Codespace setup, so it can always be re-created (Github will delete it automatically by default after thirty days).&lt;/p&gt;

&lt;h1 id=&quot;prerequisite-jekyll-blog-repo-preferably-using-relative-paths&quot;&gt;Prerequisite: Jekyll blog repo, preferably using relative paths&lt;/h1&gt;

&lt;p&gt;In this post, I’ll assume you already know Jekyll and have a Jekyll repo established. The only thing I’ll mention about this is that I recommend your templates use relative paths, i.e. don’t prepend &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;baseuri&lt;/code&gt; unless you absolutely have to. This is because Github Pages still uses an older version of Jekyll (v3.9), which insists on prepending the URI in development even if you configure against it. That breaks port-forwarding, which is necessary for following links in the browser when serving via Codespaces.&lt;/p&gt;
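&lt;p&gt;For illustration, here’s the same template link written both ways (the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/about/&lt;/code&gt; page is a hypothetical example):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    &amp;lt;!-- Relative path: works under Codespaces port-forwarding --&amp;gt;
    &amp;lt;a href=&quot;/about/&quot;&amp;gt;About&amp;lt;/a&amp;gt;

    &amp;lt;!-- Prepending the base URI: breaks links in the Codespaces preview --&amp;gt;
    &amp;lt;a href=&quot;{{ site.baseurl }}/about/&quot;&amp;gt;About&amp;lt;/a&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;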

&lt;h1 id=&quot;codespace-setup&quot;&gt;Codespace setup&lt;/h1&gt;

&lt;p&gt;To create a Codespace, you go into &lt;a href=&quot;https://github.com/mahemoff/softwareas&quot;&gt;the Jekyll repo in your browser&lt;/a&gt; and hit “+” to launch the new Codespace instance. (You can also do this stuff on the command line with the &lt;a href=&quot;https://github.com/cli/cli&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh&lt;/code&gt; cli tool&lt;/a&gt; if you wish.)&lt;/p&gt;
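&lt;p&gt;For reference, a rough sketch of the equivalent &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh&lt;/code&gt; commands (flags can vary by version, so check &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh codespace --help&lt;/code&gt;):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    # Create a Codespace for this repo
    gh codespace create --repo mahemoff/softwareas

    # List your Codespaces, then open a shell in one
    gh codespace list
    gh codespace ssh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;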

&lt;p&gt;Now this will make a new Codespace using the default image. You would then need to manually install Ruby, Jekyll, bundle it, etc. So it’s not automated. We want to automate that.&lt;/p&gt;

&lt;h1 id=&quot;automating-the-codespace-setup&quot;&gt;Automating the Codespace setup&lt;/h1&gt;

&lt;p&gt;There are two key files to be aware of:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;devcontainer.json&lt;/li&gt;
  &lt;li&gt;Dockerfile&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Put these both in a top-level folder called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.devcontainer&lt;/code&gt; in the Jekyll repo. Codespaces will look in this magic location for instructions on how to spin up the Codespace instance, whenever you click on it from the repo. Specifically, it will look at devcontainer.json, which in turn should point to the Dockerfile.&lt;/p&gt;

&lt;h1 id=&quot;devcontainer&quot;&gt;devcontainer&lt;/h1&gt;

&lt;p&gt;Here is how &lt;a href=&quot;https://github.com/mahemoff/softwareas/blob/41f4f718cf140d03116f1c895f0aeb187f065f03/.devcontainer/devcontainer.json&quot;&gt;devcontainer.json&lt;/a&gt; looks:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    {
    &quot;name&quot;: &quot;Jekyll Blog Development&quot;,
    &quot;build&quot;: {
        &quot;dockerfile&quot;: &quot;Dockerfile&quot;
    },
    &quot;customizations&quot;: {
        &quot;vscode&quot;: {
        &quot;settings&quot;: {
            &quot;terminal.integrated.shell.linux&quot;: &quot;/bin/bash&quot;
        }
        },
        &quot;extensions&quot;: [
        &quot;streetsidesoftware.code-spell-checker&quot;
        ]
    },
    &quot;postCreateCommand&quot;: &quot;sh /root/post-boot.sh&quot;,
    &quot;forwardPorts&quot;: [4000]
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;As mentioned, it points to the Dockerfile, which the Codespace will dutifully build for you. It also has some settings for the editing environment. It runs a script called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;post-boot.sh&lt;/code&gt; after the machine has spun up (I’ll get to that in a minute) and it forwards browser requests to port 4000, the default Jekyll port.&lt;/p&gt;

&lt;h1 id=&quot;dockerfile&quot;&gt;Dockerfile&lt;/h1&gt;

&lt;p&gt;Here is how the Dockerfile looks:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    # Extend from the base image
    FROM bretfisher/jekyll-serve:stable-20231215-2119a31

    # Set bash as the default shell
    SHELL [&quot;/bin/bash&quot;, &quot;-c&quot;]

    RUN echo &quot;Building from SoftwareAs Dockerfile&quot;

    # Install Vim and Tig
    RUN apt-get update &amp;amp;&amp;amp; apt-get install -y vim tig

    RUN echo &apos;[[ -f ~/.bash_profile ]] &amp;amp;&amp;amp; source ~/.bash_profile&apos; &amp;gt; /root/.bashrc

    RUN git clone https://github.com/mahemoff/dotfiles.git ~/dotfiles &amp;amp;&amp;amp; \
        bash ~/dotfiles/make.sh

    COPY post-boot.sh /root/post-boot.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first line says it derives from another Docker config by &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bretfisher&lt;/code&gt;, which is set up for Jekyll development (it mainly ensures an appropriate Ruby version). In fact, I could have just used that one and not had a Dockerfile at all, but I wanted to do some customisation. The rest of the Dockerfile sets up my environment how I like it, with the vim and tig apps and my custom dotfiles. It also copies the post-boot script to the /root folder (the Jekyll repo itself appears elsewhere, in /workspaces/softwareas).&lt;/p&gt;

&lt;h1 id=&quot;post-boot-script&quot;&gt;Post-Boot Script&lt;/h1&gt;

&lt;p&gt;The post-boot script runs bundle itself, ensuring Jekyll is ready to go. You could also do this in the Dockerfile or devcontainer.json, but it’s convenient to have a separate script. It makes it easier to experiment with it on the command-line.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    # Runs after container has been spun up
    # Adapted from https://github.com/devcontainers/images/blob/main/src/jekyll/.devcontainer/post-create.sh

    cd /workspaces/softwareas

    # Install the version of Bundler.
    if [ -f Gemfile.lock ] &amp;amp;&amp;amp; grep &quot;BUNDLED WITH&quot; Gemfile.lock &amp;gt; /dev/null; then
        cat Gemfile.lock | tail -n 2 | grep -C2 &quot;BUNDLED WITH&quot; | tail -n 1 | xargs gem install bundler -v
    fi

    # If there&apos;s a Gemfile, then run `bundle install`
    # It&apos;s assumed that the Gemfile will install Jekyll too
    if [ -f Gemfile ]; then
        bundle install
    fi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now you can go to the Codespace’s terminal tab, even if it’s new, and simply run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bundle exec jekyll serve --incremental&lt;/code&gt;. The “ports” tab will include a link you can hit in your browser, which will point your browser to the Codespace and preview the blog for you.&lt;/p&gt;

&lt;h1 id=&quot;debugging-and-troubleshooting&quot;&gt;Debugging and Troubleshooting&lt;/h1&gt;

&lt;p&gt;When you change the devcontainer and Dockerfile and want to test them, you can open the command palette (ctrl-shift-p or cmd-shift-p) and filter for “rebuild”. When something goes wrong, the command palette can also be filtered for “creation log”, which is usually helpful in seeing what part of the devcontainer or Dockerfile failed. If you keep on creating new Codespaces from the repo page, I made a &lt;a href=&quot;http://softwareas.com/jekyll-github-codespaces&quot;&gt;script&lt;/a&gt; to quickly blast all but the most recent one.&lt;/p&gt;
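&lt;p&gt;A cleanup like that is essentially a loop over &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh codespace list&lt;/code&gt;. Here’s a sketch of the idea (assuming &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh&lt;/code&gt; is authenticated and your version supports the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--json&lt;/code&gt;/&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--jq&lt;/code&gt; flags shown):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    #!/bin/bash
    # Keep the most recently used Codespace, delete the rest
    keep=$(gh codespace list --json name,lastUsedAt \
      --jq &apos;sort_by(.lastUsedAt) | last | .name&apos;)

    for cs in $(gh codespace list --json name --jq &apos;.[].name&apos;); do
      if [ &quot;$cs&quot; != &quot;$keep&quot; ]; then
        gh codespace delete --codespace &quot;$cs&quot; --force
      fi
    done
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;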

&lt;p&gt;And that’s it. Having previewed this post in the Codespace, now it’s time for me to push this to the repo so Github will automatically publish it.&lt;/p&gt;

&lt;h1 id=&quot;appendix-fancy-a-prebuild&quot;&gt;Appendix: Fancy a prebuild?&lt;/h1&gt;

&lt;p&gt;The above setup is based on a Dockerfile, which means the server will have to be built whenever a new codespace is created. That’s totally fine for my purposes, since it’s not an elaborate container, it’s only me using it, and I rarely need to create a new codespace anyway. In other circumstances, it might be worth taking the time to &lt;a href=&quot;https://docs.github.com/en/codespaces/prebuilding-your-codespaces/about-github-codespaces-prebuilds&quot;&gt;prebuild the image&lt;/a&gt;, host it somewhere like Dockerhub, and then automate that process to allow for future changes.&lt;/p&gt;
</description>
        <pubDate>Tue, 19 Dec 2023 00:00:00 +0000</pubDate>
        <link>http://softwareas.com/jekyll-github-codespaces</link>
        <guid isPermaLink="true">http://softwareas.com/jekyll-github-codespaces</guid>
        
        
      </item>
    
      <item>
        <title>Chrome OS Flex: Old laptop, new life</title>
        <description>&lt;p&gt;I’ve been getting back into Chromebooks recently. They’ve always been fabulously lightweight, low on admin headaches, and fast per dollar spent, but there are extra benefits nowadays.&lt;/p&gt;

&lt;h2 id=&quot;chromebooks-power-packed-with-google-play-and-linux&quot;&gt;Chromebooks: Power-packed with Google Play and Linux&lt;/h2&gt;

&lt;p&gt;Now that they have Google Play built in, they work fabulously offline for all the platforms that never got the offline web memo. Particularly joyous is mile-high Netflix, a superior alternative to having your in-flight entertainment interrupted by every cabin announcement (“Ladies and gentlemen, we’re experiencing turbulence, fasten the seatbelt you already fastened! Let me repeat this message in three languages!”)&lt;/p&gt;

&lt;p&gt;They also have built-in Linux, which is great for some light coding work. As I favour the lightweight nature of Chromebooks, my hardware is never good enough to do anything serious with it, but it has a lot of potential.&lt;/p&gt;

&lt;h2 id=&quot;security-matters-why-chromebooks-are-a-smart-choice&quot;&gt;Security Matters: Why Chromebooks Are a Smart Choice&lt;/h2&gt;

&lt;p&gt;For work purposes, cybersecurity threats may be the most compelling benefit. The web model is built around partitioning domains from each other, a model that has subsequently proven beneficial with smartphones too. ChromeOS benefits from this baked-in capability and adds numerous other security benefits, such as automatically encrypting partitions.&lt;/p&gt;

&lt;p&gt;Security keeps becoming more important every year and I feel it’s on the verge of considerable chaos in a post-GPT world of deepfakes and viruses of genius intelligence. Security considerations are primarily why I recommend Chromebook as a no-brainer choice to most non-technical friends and family. Even those who need productivity apps will generally be satisfied by the modern web, with GSuite and Office 365 covering most use cases.&lt;/p&gt;

&lt;p&gt;(Note that the aforementioned benefits are in some conflict with each other. For max security, you’ll have to forego Play and Linux, though they should theoretically be sufficiently sandboxed.)&lt;/p&gt;

&lt;h2 id=&quot;chrome-os-flex-breathe-new-life-into-your-crusty-old-laptop&quot;&gt;Chrome OS Flex: Breathe new life into your crusty old laptop&lt;/h2&gt;

&lt;p&gt;All that takes me to &lt;a href=&quot;https://chromeenterprise.google/os/chromeosflex/&quot;&gt;ChromeOS Flex&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While Chrome hardware is relatively cheap, many of us have old laptops that are no longer in use. Google recently introduced ChromeOS Flex as an official way to repurpose them into Chromebooks and I can report my first experience went well.&lt;/p&gt;

&lt;p&gt;It’s no different to installing a modern Linux distribution. You typically have two challenges: getting the OS onto a USB drive and booting the USB drive. The USB drive was initially a challenge. I followed the instructions to use Chrome’s official extension for this (&lt;a href=&quot;https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm&quot;&gt;Chromebook Recovery Utility&lt;/a&gt;), but kept getting errors at the end of the process (after waiting some time for the 1GB payload - I wish the extension cached it!). I then tried using the same extension on a Macbook and it worked right away. So, ironically, flashing the drive with ChromeOS was actually easier on the non-Chromebook device.&lt;/p&gt;

&lt;p&gt;Booting the USB was easy enough. A quick Google told me to hold down F2 on Acers while booting. It booted in no time, asked me a few questions, and BOOM ChromeOS was installed in under a minute. What made this so fast, compared to a typical Linux install, was there’s no dual boot option and no questions about storage allocation at all. It simply wipes the entire hard drive and installs ChromeOS into whatever partitioning scheme it sees fit. Linux distros could learn from this.&lt;/p&gt;

&lt;p&gt;It rebooted (the 60-second countdown to remove the USB drive, with no obvious way to bypass it, was about the longest delay in the process) and everything worked fine. I entered Wifi details and Google account creds, then it was Chrome as always.&lt;/p&gt;

&lt;p&gt;Google notes the new device won’t be quite as secure as a bona fide Chromebook, but let’s face it, it’s infinitely more secure - not to mention useful - than a crusty old Windows PC gathering dust. One limitation is that you can’t set up Google Play or Linux, which restricts functionality but enhances security, making this an excellent choice for frugal professionals or students. I imagine it can serve well as a second fixed-location device to keep at home, while the primary device is used at work or for travel.&lt;/p&gt;

&lt;h2 id=&quot;game-changer&quot;&gt;Game changer&lt;/h2&gt;

&lt;p&gt;As for wider implications, ChromeOS Flex is in some ways what desktop Linux had hoped to be. Obviously that’s not true for hardcore Linux users, but it’s a powerful option when it comes to extending the lifetime of old laptops. Linux has never been a great solution for muggles who just want to continue working with their existing software. The browser, though, is something everyone is intimately familiar with, so it’s like they’re using a subset of the existing device instead of a completely new environment. And many of those people were only using the browser in the first place, so it’s actually a step up in ease of use.&lt;/p&gt;

&lt;p&gt;For schools and organisations with large workforces doing average workloads, Flex is a way to improve the device pipeline. New Windows devices can be used first by power users, then “handed down” to workers with less demanding tasks, and finally converted to ChromeOS devices for workers who can get everything done in the browser. Over time, it’s likely organisations will realise they don’t actually need much else and get Chromebooks in the first place.&lt;/p&gt;

&lt;p&gt;Like to give it a whirl? &lt;a href=&quot;https://chromeenterprise.google/os/chromeosflex/&quot;&gt;Here are Google’s instructions for installing ChromeOS Flex&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://i.imgur.com/WmC8S1Nl.jpg&quot; /&gt;&lt;/p&gt;
</description>
        <pubDate>Sat, 22 Apr 2023 00:00:00 +0000</pubDate>
        <link>http://softwareas.com/chromeos-flex-old-laptop-new-life</link>
        <guid isPermaLink="true">http://softwareas.com/chromeos-flex-old-laptop-new-life</guid>
        
        
      </item>
    
      <item>
        <title>The Missing Man Page</title>
        <description>&lt;p&gt;If only man pages were as good as the first Google result, &lt;a href=&quot;https://twitter.com/mahemoff/status/1117355866225893377&quot;&gt;I idly tweeted&lt;/a&gt; while trying to coerce curl into post a form [1]. The ensuing conversation led me to think about exactly what is missing in man pages.&lt;/p&gt;

&lt;p&gt;When I say “the first Google result”, I’m &lt;em&gt;not&lt;/em&gt; talking about StackOverflow in this case. We can all agree Stack has mystic levels of ability to answer the question your man page can’t, or not in under 15 minutes anyway. I’m thinking more about reference material; what exactly is wrong with man pages? And why do the newer alternatives, such as &lt;a href=&quot;https://tldr.sh/&quot;&gt;tldr pages&lt;/a&gt;, fall short of replacing them?&lt;/p&gt;

&lt;p&gt;My primary thesis is that man pages are an artifact of a long-bygone era, when the only electronic docs available were those that shipped with the tool itself. There was no web and no convenient way to collaborate at scale.&lt;/p&gt;

&lt;p&gt;With man pages tethered to the tool itself, the docs are maintained by the tool maintainer, who is not always best equipped to write documentation. This is probably better nowadays for tools hosted on GitHub/GitLab etc., where others can come in and improve the docs, though it was suggested on Twitter that some maintainers may block valuable contributions. There’s also the well-known lack of funding for generic tools, which means that even if project maintainers wanted someone to create better docs, there’s no money for it.&lt;/p&gt;

&lt;p&gt;Compare that to the incentive to create a great web reference, where a company like W3Schools can rake in $millions as the top Google result for just about any search. It’s a competitive marketplace.&lt;/p&gt;

&lt;p&gt;In fact, this is a problem with all official docs. Even tools which do have viable commercial models still have poor online documentation in many cases. Take a look at MySQL’s documentation on creating a new user. User management is one of the main points of friction in the system, and this is their official documentation for it. You land from Google and you’re greeted by this giant complicated regex, wrapped in a jungle of navigation links! Is it any wonder DigitalOcean pays a writer to &lt;a href=&quot;https://www.digitalocean.com/community/tutorials/how-to-create-a-new-user-and-grant-permissions-in-mysql&quot;&gt;make a sane third-party reference guide&lt;/a&gt; in an SEO play that could end up outranking the official doc?&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://softwareas.com/wp-content/uploads/2019/04/Selection_755.png&quot;&gt;&lt;img src=&quot;https://i.imgur.com/2NYQkrO.png&quot; alt=&quot;&quot; width=&quot;300&quot; height=&quot;193&quot; class=&quot;alignnone size-medium wp-image-2297&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This illustrates that the people creating the tool aren’t necessarily the people best placed to document it. That doesn’t have to be the case, of course. These days, companies understand the value of developer experience and some, like Stripe, produce &lt;a href=&quot;https://stripe.com/docs&quot;&gt;outstanding docs&lt;/a&gt;. However, that doesn’t translate to your run-of-the-mill Unix staple.&lt;/p&gt;

&lt;p&gt;This is where tools like tldr and cheat can make a difference. They are separate from the tool itself, so anyone can contribute. However, their explicit goal is to be concise, which means they are not an apples-to-apples alternative, because man pages are designed to be exhaustive.&lt;/p&gt;

&lt;p&gt;My biggest gripe with the format of man pages is that they try to act too much like textbooks, which is entirely inappropriate when you are trying to get some quick info to solve a problem. Again, it’s an artifact of a time before the web, when the most convenient way to ship around an essay about the tool was to ship it with the tool. Here, for example, is what I see with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;man find&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://softwareas.com/wp-content/uploads/2019/04/Selection_756.png&quot;&gt;&lt;img src=&quot;https://i.imgur.com/doav1Pe.png&quot; alt=&quot;&quot; width=&quot;269&quot; height=&quot;300&quot; class=&quot;alignnone size-medium wp-image-2298&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’d never see this level of caveats and hand-waving in a Digital Ocean sponsored post on how to find files on your file system. At the least, it would be buried at the bottom of the doc, with the important stuff, like a simple annotated example first.&lt;/p&gt;
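&lt;p&gt;For instance, an annotated opener in that spirit might look something like this (the thresholds are invented for illustration):&lt;/p&gt;

```shell
# Files (-type f) under the current directory, modified within the last
# 7 days (-mtime -7) and larger than 1 MiB (-size +1M).
find . -type f -mtime -7 -size +1M
```

&lt;p&gt;Three flags, each explained where it’s used - then drill into the caveats afterwards for whoever needs them.&lt;/p&gt;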

&lt;p&gt;So what I’d like to see is something exhaustive, but with options grouped logically, not alphabetically, and each one of them illustrated by examples. Man pages don’t do this and nor do tldr or cheat.&lt;/p&gt;

&lt;p&gt;Making better documentation is a lot of hard work, which is why StackExchange abandoned its &lt;a href=&quot;https://meta.stackoverflow.com/questions/354217/sunsetting-documentation&quot;&gt;Documentation experiment&lt;/a&gt;. So I don’t see the problem being solved anytime soon, and it’s why I’ll continue using a web search for now.&lt;/p&gt;

&lt;h3 id=&quot;footnotes&quot;&gt;Footnotes&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;I’d normally be using the more intuitive &lt;a href=&quot;https://httpie.org/&quot;&gt;HTTPie&lt;/a&gt; for this, but sometimes you need to share it with people who are used to curl.&lt;/li&gt;
&lt;/ol&gt;
</description>
        <pubDate>Mon, 15 Apr 2019 09:00:23 +0000</pubDate>
        <link>http://softwareas.com/the-missing-man-page/</link>
        <guid isPermaLink="true">http://softwareas.com/the-missing-man-page/</guid>
        
        
        <category>SoftwareDev</category>
        
      </item>
    
      <item>
        <title>Black Mirror Bandersnatch Spoilers</title>
        <description>&lt;p&gt;Here’s just a few random thoughts on &lt;a href=&quot;https://www.netflix.com/title/80988062&quot;&gt;Black Mirror Bandersnatch&lt;/a&gt;, which came out yesterday. I’d have posted these as a few lazy tweets, but didn’t want to post spoilers there.&lt;/p&gt;

&lt;p&gt;Haven’t watched it yet? Congratulations, you have a life. But go watch it anyway. Binge the first four seasons beforehand if you haven’t seen them yet. Also worth it.&lt;/p&gt;

&lt;p&gt;Thoughts with some very mild spoilers ahead:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;&quot;Choose Your Own Adventure&quot; previously moved from book form to interactive game form, as depicted in Bandersnatch. Netflix&apos;s experiment takes it another step to streaming form, which is interesting because there&apos;s no way to look at the code and figure out every path. It also means they could change the path dynamically, e.g. introduce an easter egg for just one day.&lt;/li&gt;
    &lt;li&gt;But what about narrative, in a world where viewers decide the story? This is a popular trope in interactive storytelling: the tension between the producer guiding the story and the viewer feeling free to make decisions. The entire story of Bandersnatch addresses that tension better than anyone could express in a linear sentence. Free will in this medium is an illusion.&lt;/li&gt;
    &lt;li&gt;Netflix&apos;s long-term goal likely involves VR and personalization, i.e. personalized videos similar to the recent bubble of books where $child_name is the star. It&apos;s not hard to see how gaming and movies eventually converge. A degree of interaction is an important step towards this future.&lt;/li&gt;
    &lt;li&gt;In the short term, Netflix likely sees this as a popular move for some kids&apos; content. I&apos;ve noticed Netflix also has Jeopardy in its catalogue, and it&apos;s a no-brainer for the platform to be used for interactive TV game shows, as has long been done with cable remotes, but better.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A few technical observations:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;The number input (for the safe) shows this is not just going to be a simple 2-way choice mechanism. I&apos;m guessing they have set up a protocol to support input of any Unicode string.&lt;/li&gt;
    &lt;li&gt;&lt;del&gt;I noticed the episode doesn&apos;t work on my Shield running recent (probably latest?) Android TV OS&lt;/del&gt; (Update: it works now, a day later; it must have required an app update.) Netflix has a lot of clients, and it will be a big effort to implement - and evolve - this protocol ubiquitously. They&apos;ll also have to think about different input constraints, e.g. typing on TV might support voice input.&lt;/li&gt;
    &lt;li&gt;I also noticed the episode isn&apos;t available to download, and at one point I saw it buffering after making a decision (on a bad network). The app probably pre-emptively downloads a minute or two of both decision paths so it can continue seamlessly after each decision. It would also probably hang on to the unchosen path in case it&apos;s needed later. Overall, there are some very interesting computer science and network topology problems for anyone wanting to optimise a system like this.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href=&quot;https://www.netflix.com/title/80988062&quot;&gt;&lt;img class=&quot;alignnone size-medium wp-image-2285&quot; src=&quot;https://i.imgur.com/4r2tpMV.jpg&quot; alt=&quot;Netflix Bandersnatch&quot; width=&quot;400&quot; height=&quot;240&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Sat, 29 Dec 2018 14:53:58 +0000</pubDate>
        <link>http://softwareas.com/black-mirror-bandersnatch-spoilers/</link>
        <guid isPermaLink="true">http://softwareas.com/black-mirror-bandersnatch-spoilers/</guid>
        
        
        <category>Uncategorized</category>
        
      </item>
    
  </channel>
</rss>
