Semantic Heuristic Execution & Logic Layer (S.H.E.L.L.)
Shelldon is a specialized cognitive protocol for AI engineering agents. It implements the S.H.E.L.L. (Semantic Heuristic Execution & Logic Layer) standard to minimize token overhead while maximizing technical signal.
By eliminating conversational prose and adopting axiomatic logic, Shelldon reduces output token volume by ~75% and input context by ~46%, resulting in faster inference, reduced costs, and lower cognitive load for developers.
Shelldon treats the LLM response as a high-density telemetry stream rather than a natural language dialogue.
| Metric | Normal Agent | Shelldon (S.H.E.L.L.) |
|---|---|---|
| Output Density | High (Conversational) | Ultra-High (Axiomatic) |
| Token Savings | 0% | ~75% |
| Inference Speed | Baseline | ~3x Improvement |
| Technical Signal | Diffuse | Concentrated |
**Normal agent (conversational):**

"The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

**Shelldon (S.H.E.L.L.):**

"New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."
Shelldon supports multiple intensity levels to match your workflow requirements:
| Mode | Standard | Application |
|---|---|---|
| Verbose | STE (Simplified Technical English) | Technical documentation, complex explanations. |
| Strict | Default Fragmented Protocol | Standard development and debugging. |
| Axiomatic | Pure Logic Mapping (->, =>) | High-speed, repetitive engineering tasks. |
| SOAP | Diagnostic Grid (Subjective/Objective/Assessment/Plan) | Systematic bug analysis and RCA. |
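As a hypothetical illustration (invented wording, not canonical Shelldon output), the same null-pointer finding from the code-review example below might surface differently per mode:

```text
Verbose (STE):  The user object can be null. Add a guard before access.
Strict:         user may be null. Add guard before access.
Axiomatic:      user(null) -> panic => inject guard.
SOAP:           S: crash report | O: user=null at access | A: missing guard | P: inject guard.
```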
Generates high-density, telemetry-compliant Conventional Commits. Eliminates narrative noise while preserving architectural intent.
```
feat(api): add GET /users/:id/profile

[INFO] Client payload optimization.
```
Executes deterministic, one-line evaluations per finding. Focuses exclusively on topological integrity and type safety.
```
L42 [ERR] user(null) -> panic => inject guard.
```
Minifies context files (e.g., CLAUDE.md, GEMINI.md) into axiomatic logic. Reduces session-start token consumption by ~46%.
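A hypothetical before/after sketch of context minification (the rule text is invented for illustration, not taken from an actual CLAUDE.md):

```text
# Before (prose)
Always run the test suite before committing. If any test fails,
do not commit; fix the failure first and re-run the suite.

# After (axiomatic)
commit => tests(pass). tests(fail) -> fix -> rerun.
```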
Shelldon is agent-agnostic and supports major AI engineering environments:
```shell
# Gemini CLI
gemini extensions install https://github.com/DyxBenjamin/shelldon

# Claude Code
claude plugin marketplace add DyxBenjamin/shelldon
claude plugin install shell@shell

# npx skills
npx skills add DyxBenjamin/shelldon
```

Benchmarked against standard models using the `benchmarks/` evaluation harness.
| Task | Normal (tokens) | Shelldon (tokens) | Token Savings |
|---|---|---|---|
| React Re-render Diagnosis | 1180 | 159 | 87% |
| Auth Middleware Fix | 704 | 121 | 83% |
| Database Connection Pooling | 2347 | 380 | 84% |
| Composite Average | 1214 | 294 | 76% |
Based on research indicating that brevity constraints in large language models can enhance technical accuracy by reducing hallucinatory drift. (See: "Brevity Constraints Reverse Performance Hierarchies").
MIT © DyxBenjamin