LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and it generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question; there is no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
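To make the contrast concrete, here is a toy sketch (the names and the word-overlap "retrieval" are illustrative, not any specific RAG stack): the RAG path re-scores every document on every question, while the wiki path synthesizes once and then just reads the note back.

```python
# Toy contrast: per-query retrieval vs. an accumulated note.
docs = {"a.txt": "alpha beta", "b.txt": "beta gamma"}

def rag_answer(question: str) -> list[str]:
    # Re-discover relevant chunks on every call; nothing is kept between calls.
    words = set(question.split())
    return [name for name, text in docs.items() if words & set(text.split())]

wiki: dict[str, str] = {}  # accumulated syntheses, written once and reused

def wiki_answer(question: str) -> str:
    if question not in wiki:                       # synthesize once...
        wiki[question] = " + ".join(rag_answer(question))
    return wiki[question]                          # ...then just read it back
```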

@VivianBalakrishnan
VivianBalakrishnan / VB-NANOCLAW-MEMORY-OBSI-WIKI-PUBLIC.md
Created April 24, 2026 09:34
NanoClaw — Personal Claude Assistant (second brain for a diplomat)

NanoClaw — Personal Claude Assistant

A self-hosted, compounding-memory AI assistant running on a Raspberry Pi.


What Is This?

NanoClaw is a personal AI assistant built on Anthropic's Claude that runs entirely on a Raspberry Pi. It connects to messaging channels (WhatsApp, Telegram, Slack, Discord), processes voice and images, schedules recurring tasks, and — unlike a standard chatbot — accumulates knowledge over time through a structured memory system.
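The gist excerpt does not show NanoClaw's actual memory format, so the following is only an illustrative sketch of the "accumulates knowledge over time" idea: an append-only log of timestamped facts that can be re-read into context on later conversations.

```python
# Illustrative append-only memory log. The JSONL format and field names
# here are assumptions for the sketch, not NanoClaw's real schema.
import json
import time

def remember(path: str, fact: str) -> None:
    """Append one fact to the memory log; nothing is ever overwritten."""
    with open(path, "a") as f:
        f.write(json.dumps({"t": time.time(), "fact": fact}) + "\n")

def recall(path: str) -> list[str]:
    """Read every remembered fact back, oldest first."""
    with open(path) as f:
        return [json.loads(line)["fact"] for line in f]
```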


@nitefood
nitefood / Alexa-Gemini Step by Step Guide.md
Last active April 27, 2026 06:18
How to Connect Alexa to Gemini: A Step-by-Step Guide Using n8n

Step-by-Step Setup

  1. Access the Alexa Developer Console: Go to https://developer.amazon.com/alexa/console/ask.
  2. Create a New Skill: Click on Create Skill, give it a name, and choose your preferred language.
  3. Choose a Template: Select the "Start from Scratch" template and leave the rest as the default.
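Before wiring n8n in, it helps to know the JSON shape an Alexa skill endpoint must eventually return. This minimal, framework-free helper builds a plain-text reply in the documented Alexa Skills Kit response format; the reply text is a placeholder for whatever Gemini returns in the real workflow.

```python
# Build a minimal Alexa skill response (PlainText outputSpeech, per the
# Alexa Skills Kit response format). In the full setup, `text` would be
# Gemini's answer forwarded by the n8n workflow.
def alexa_response(text: str) -> dict:
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```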

Configure the Skill

@Antosik
Antosik / Internship.md
Last active April 27, 2026 06:17
IT Internships in Moscow and Russia

A list of IT internships and no-experience vacancies in Moscow and Russia

  • Yandex
    • Internship programs
      • Development (Python, C++, Java, Go, Kotlin, Scala, C, Flutter, frontend, DevOps, Android, iOS)
      • Machine learning
      • Data analytics
      • Information security
      • Testing
      • Management
@VictorTaelin
VictorTaelin / gist:7ae3d262e4d0b80a4e8817a80f976a68
Created April 27, 2026 04:43
SupVM - distilled version of HVM for SupGen - one shot by GPT 5.5
#!/usr/bin/env node
// SupVM.ts
// ========
// A tiny evaluator for the HVM subset used by InsertGen_hardcoded.hvm.
// Sup-fork captures are the only DUP form needed by that file. SupVM gives
// them the same observable branch correlation through a labelled-choice map,
// and charges captured variables as DUP-like interactions.
declare function require(nam: string): any;
@docularxu
docularxu / blog-claude-code-china-zh.md
Last active April 27, 2026 06:16
Fixing Claude Code 403 Errors in China - A Complete Guide (Chinese edition)

Fixing Claude Code 403 Errors in China

February 12, 2026

If you try to use Claude Code in China, you will most likely hit a 403 error. This article covers solutions for three usage scenarios:

  • macOS terminal (shell) - running the claude CLI directly in the terminal
  • VS Code terminal - running the claude CLI in VS Code's integrated terminal
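The excerpt cuts off before the guide's actual fixes, so the following is only a hedged illustration of one common approach (not necessarily the article's method): CLI tools that honor the standard proxy environment variables can often be routed through a local proxy. The address and port below are placeholders.

```shell
# Route HTTP(S) traffic from the current shell through a local proxy.
# 127.0.0.1:7890 is a placeholder; substitute your own proxy endpoint.
export HTTPS_PROXY=http://127.0.0.1:7890
export HTTP_PROXY=http://127.0.0.1:7890
```

After setting these in the same shell session, re-run the claude command and check whether the 403 persists.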

ZSH CheatSheet

This is a cheat sheet for performing various actions in ZSH, which can be tricky to find on the web since the syntax is not intuitive and is generally not well documented.

Strings

| Description | Syntax |
| --- | --- |
| Get the length of a string | `${#VARNAME}` |
| Get a single character | `${VARNAME[index]}` |
@rohitg00
rohitg00 / llm-wiki.md
Last active April 27, 2026 06:13 — forked from karpathy/llm-wiki.md
LLM Wiki v2 — extending Karpathy's LLM Wiki pattern with lessons from building agentmemory

LLM Wiki v2

A pattern for building personal knowledge bases using LLMs. Extended with lessons from building agentmemory, a persistent memory engine for AI coding agents.

This builds on Andrej Karpathy's original LLM Wiki idea file. Everything in the original still applies. This document adds what we learned running the pattern in production: what breaks at scale, what's missing, and what separates a wiki that stays useful from one that rots.

What the original gets right

The core insight is correct: stop re-deriving, start compiling. RAG retrieves and forgets. A wiki accumulates and compounds. The three-layer architecture (raw sources, wiki, schema) works. The operations (ingest, query, lint) cover the basics. If you haven't read the original, start there.
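As a sketch of the lint operation mentioned above (the directory layout, the `sources/` citation convention, and the single check are assumptions for illustration, not agentmemory's real implementation), a minimal lint pass might flag wiki pages that never cite a raw source:

```python
# Minimal sketch of a wiki "lint" pass: walk the wiki layer and flag pages
# that cite no file from the raw-sources layer. The sources/ convention is
# an illustrative assumption.
from pathlib import Path

def lint(wiki_dir: str) -> list[str]:
    """Return the names of wiki pages that reference no source file."""
    problems = []
    for page in Path(wiki_dir).glob("*.md"):
        text = page.read_text()
        if "sources/" not in text:  # convention: pages cite files under sources/
            problems.append(page.name)
    return problems
```

A real lint would check more (stale dates, orphaned pages, schema violations), but the shape is the same: a cheap mechanical pass that keeps the wiki from rotting.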