
Developers

A place for how we work, not what stack this site happens to use. Workflows you can steal, Cursor habits that hold up, a short tour of tools worth keeping nearby, and an honest split between vibe coding (fast, intuitive iteration) and the slower virtues that keep software legible: names, boundaries, tests, and kindness to the next person who reads the diff—including you in six months.

Dev playbook — searchable, collapsible tips with model provenance (Cursor, ChatGPT, other AIs—record yours). Living data in src/data/dev-playbook.json; humans and assistants can learn from the same page.

Spelunker's guide — on-site rendering of docs/SPELUNKERS_GUIDE.md (how to wander the codebase; pair with CODEBASE_MAP.md on GitHub).


Quick reference

Handy anchors while you are in this repo—pair with the playbook for narrative tips.

  • L0 route smoke: npm run test:probe (dev or preview running) — probes in scripts/site-probe.mjs.
  • L1 built HTML: npm run build then npm run test:site.
  • Playbook data: src/data/dev-playbook.json — tag + model filters and Copy tip on each card.
  • Deep link a tip: /developers/playbook/#tip-mp-001 (replace id; recipient may need to expand nested details).
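The "recipient may need to expand nested details" caveat can be smoothed over on the page itself: a few lines of script can open every collapsed ancestor of the targeted tip. A minimal sketch, assuming tip cards live inside (possibly nested) details elements and use ids following the tip-mp-001 pattern above; this is an illustration, not the site's actual code.

```javascript
// Sketch: auto-expand <details> ancestors when a deep link targets a tip card.
// Assumes ids like "tip-mp-001" on elements nested inside <details>.
function expandHashTarget(doc, hash) {
  if (!hash || hash.length < 2) return false;
  const target = doc.getElementById(hash.slice(1));
  if (!target) return false;
  // Walk up and open every collapsed ancestor so the card is visible.
  for (let el = target; el; el = el.parentElement) {
    if (el.tagName === "DETAILS") el.open = true;
  }
  return true;
}

// In the browser: run once on load and again whenever the hash changes.
// window.addEventListener("hashchange", () =>
//   expandHashTarget(document, location.hash));
```

Wiring it to both load and hashchange means pasted deep links and in-page navigation behave the same way.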

Development workflows

Good workflow is less about a branded methodology and more about closing loops: make a small change, see it run, record what you learned, repeat. A few patterns that tend to age well:

  • Shrink the batch. Smaller diffs are easier to review, revert, and reason about—whether the reviewer is a human or a model.
  • Define “done.” Done might mean tests green, linter clean, manual click-through, or a note in a devlog. Pick one bar per task so you know when to stop thrashing.
  • Verify in layers. Cheap checks first (format, types, unit tests), then heavier ones (integration, browser, staging). Fast feedback preserves momentum; deep checks catch the scary stuff.
  • Leave breadcrumbs. Commit messages, ADRs, or a dated log entry beat heroic memory. Future you is not smarter—just busier.
  • Prefer additive change. Extend before you rewrite. Deletion and rewrites need explicit intent, backup, and usually a second pair of eyes.
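The "verify in layers" pattern above is, at heart, a short-circuiting loop: run the cheap checks first and stop at the first failure so the expensive layers only run when the fast ones pass. A sketch with stubbed checks; the check names are illustrative, not this repo's actual scripts, and a real version would shell out (for example via spawnSync) instead of returning booleans.

```javascript
// Sketch: run checks cheapest-first and stop at the first failure,
// so slow layers only run when the fast ones are green.
function runLayered(checks) {
  const results = [];
  for (const check of checks) {
    const ok = check.run();
    results.push({ name: check.name, ok });
    if (!ok) break; // no point paying for L2/L3 while L0 is red
  }
  return results;
}

// Stubbed example (illustrative names, not this repo's scripts):
const report = runLayered([
  { name: "L0 format/types", run: () => true },
  { name: "L1 unit tests", run: () => false },
  { name: "L2 integration", run: () => true }, // never reached
]);
```

The returned report doubles as a breadcrumb: it records both what failed and what was skipped because of it.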

Layered verification (L0–L3)

Separate how fast you learn from how hard you squeeze the system. Cheap signals first; expensive tools only when they earn their keep. On this repo the mapping looks like this—steal the pattern for your own stack.

  • L0 — Route smoke. Answers: are key URLs up, returning HTML, and showing stable markers (layout ids, canonicals, critical islands)? Run: npm run test:probe with dev or preview running; defined in scripts/site-probe.mjs—extend the probes array as routes grow.
  • L1 — Built HTML. Answers: does production output under dist/ still match our contracts (nav shell, sections, blog shape)? Run: npm run build, then npm run test:site (tests/site/).
  • L2 — Scans. Answers: do links, SEO, and accessibility hold up against a running server? Heavier, richer signal. Run: tools listed in scripts/multitool-registry.json, via npm run multitool.
  • L3 — Browser truth. Answers: JS behavior, layout edge cases, interactions. Run: manual pass or automated browser tests; automate when the same break happens twice.

Build on it: when a check prevents a real regression, promote it—tighten L0 for instant feedback, mirror invariants in L1, and keep slow scans in L2/L3 so the default loop stays fast.
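The marker-checking half of an L0 probe is simple enough to sketch. The probe shape below is hypothetical—the real definitions live in scripts/site-probe.mjs—but the idea transfers: fetch a route, then report which stable markers are missing from the HTML.

```javascript
// Sketch of an L0-style check: given fetched HTML and a probe entry,
// report which stable markers are missing. The probe shape here is
// hypothetical — see scripts/site-probe.mjs for the real one.
function missingMarkers(html, probe) {
  return probe.markers.filter((marker) => !html.includes(marker));
}

// Illustrative probe and response:
const probe = {
  path: "/developers/",
  markers: ['id="main"', 'rel="canonical"'],
};
const html =
  '<link rel="canonical" href="/developers/"><main id="main"></main>';
// An empty result from missingMarkers(html, probe) means the route passes.
```

Keeping the check a pure function of (html, probe) makes it trivial to unit-test without a server, which is exactly the cheap-signal property L0 wants.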

Cursor & AI-assisted coding

Cursor (and editors like it) treat the codebase as the prompt: inline edits, multi-file refactors, chat, and agents that can run commands. Used well, they compress boilerplate and exploration; used carelessly, they spray plausible-looking sludge across your repo.

  • Project rules. Put durable conventions in .cursor/rules or equivalent so the model stops re-deriving your stack from scratch every session.
  • Slash commands. Reusable instructions (e.g. “run tests then summarize”) live in .cursor/commands; you can symlink or copy favorites into ~/.cursor/commands for global use.
  • Context is the product. Narrow open files, point at symbols, paste errors. The best completion is the one grounded in the file you actually meant to change.
  • Review like it’s someone else’s PR. AI diffs need the same skepticism as junior commits: naming, edge cases, security, and “did this delete something on purpose?”
  • Agents and autonomy. Let tools run tests or grep, but keep destructive or irreversible steps behind explicit human confirmation—especially git history and production config.
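The last bullet can be mechanized: classify a command before an agent runs it and require confirmation for anything destructive. The pattern list below is illustrative—tune it to your own risk tolerance.

```javascript
// Sketch: gate a shell command before an agent runs it.
// The pattern list is illustrative, not exhaustive.
const DESTRUCTIVE = [
  /^git\s+push\s+.*--force/, // rewriting shared history
  /^git\s+reset\s+--hard/,
  /^rm\s+-rf?\b/,
  /\bDROP\s+TABLE\b/i,
];

function needsConfirmation(command) {
  return DESTRUCTIVE.some((pattern) => pattern.test(command.trim()));
}

// needsConfirmation("git push origin --force") → true
// needsConfirmation("npm run test:probe")      → false
```

A gate like this is a default, not a guarantee: it buys a pause for the obvious cases while human review stays the real safety net.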

Chatbot instruction prompts

Good assistants respond to constraints, not vibes. The cards below are ready-made instructions you can paste into any chatbot to steer behavior—same idea as project rules, but portable. This page uses a small ChatbotInstructionCopier component (forest/cream styling, copy button, live status); steal the pattern for your own site or docs.

Build a component that matches our branding

Use when you want UI that fits existing tokens and components—not a one-off alien widget.

Verification: think in L0–L3 layers

Use when you want audits to start cheap and escalate deliberately—mirrors the table above.

Safe collaboration defaults

Use at the start of a session with an agent that can run commands or touch git.

Draft a playbook tip (JSON)

Use after a session to capture a concrete lesson into dev-playbook.json shape.

Hand off to another AI or human

Use when switching tools or threads so the next assistant does not cold-start blind.

Learning together

The playbook is deliberately shared surface: humans, Cursor sessions, and other models can all append tips with explicit model and contributor fields. That provenance is a feature—it helps the next reader calibrate tone, risk, and depth.

Prefer pull requests for playbook edits so wording and tags get a second glance; mention playbook: in the commit subject when the JSON changes. If you only have a chat window, use the Draft a playbook tip prompt above, then paste the JSON into a branch when you are back at a keyboard.
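For chat-window drafting it helps to know roughly what a tip looks like. A hedged sketch of one entry: the real schema lives in src/data/dev-playbook.json, and the field names here (id, title, body, tags, model, contributor) are inferred from this page's description, not copied from the file—check the JSON before committing.

```json
{
  "id": "tip-mp-001",
  "title": "Shrink the batch before asking for review",
  "body": "Small diffs get better reviews from humans and models alike.",
  "tags": ["workflow", "review"],
  "model": "Cursor (example)",
  "contributor": "your-name"
}
```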

Tools worth using

No single stack—patterns that transfer across languages and frameworks:

  • Version control. Git, meaningful branches, small commits. Learn rebase vs merge enough to stay out of trouble; learn when to ask for help before force-pushing.
  • Formatter + linter. Argue about taste once in config; after that, let the machine enforce it so review stays about behavior.
  • Test runner. Fast unit tests for logic; a few integration tests for wiring; browser or contract tests for the user-visible path that breaks most often.
  • CI as truth. If it matters, run it on a clean machine. Local “works on my laptop” is a prototype, not a guarantee.
  • Browser DevTools. Network tab, console, accessibility tree—still unbeatable for “what is actually happening?”
  • Orchestrators (optional). Makefiles, npm scripts, or a small CLI that lists “what we can run here” beats tribal knowledge scattered across chat logs.
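The orchestrator bullet is small enough to sketch: a tiny lister that answers "what can we run here?" from package.json. The script entries below echo this page's quick reference but are illustrative stand-ins, not necessarily this repo's exact commands.

```javascript
// Sketch: list "what we can run here" from a package.json object —
// the kind of tiny orchestrator the bullet above describes.
function listScripts(pkg) {
  return Object.entries(pkg.scripts ?? {}).map(
    ([name, cmd]) => `npm run ${name}  ->  ${cmd}`
  );
}

// Illustrative package.json fragment (commands are assumptions):
const pkg = {
  scripts: {
    "test:probe": "node scripts/site-probe.mjs",
    "test:site": "node tests/site/run.mjs",
  },
};
// listScripts(pkg) returns one readable line per runnable script
```

Printed at the top of a README or a make help target, this turns tribal knowledge into a single discoverable list.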

Vibe coding & software philosophy

Vibe coding usually means: move fast, trust intuition, let tools fill gaps, optimize for the feeling of progress. That energy is real—it gets prototypes and spikes unblocked. The failure mode is mistaking velocity for direction: clever code nobody can read, missing tests for the paths that actually hurt users, and “the AI said it was fine.”

A stance that tends to survive contact with reality:

  • Vibe for exploration; rigor for boundaries. Sketch freely; when you touch auth, money, data, or persistence, slow down and make the invariant explicit.
  • Readable is a feature. Code is read far more than it is written. Boring names and straight-line control flow beat clever abstractions unless the abstraction pays rent every week.
  • Tests encode what you refuse to break again. You don’t need 100% coverage; you need guardrails on the regressions that already burned you.
  • Curiosity over dogma. Patterns (DDD, TDD, microservices, monoliths) are hypotheses. Try them; measure; don’t build a cathedral around the first blog post you liked.
  • Software is social. Even solo work is a conversation with future maintainers. Document intent, not just mechanics.

TL;DR: Use speed and AI where they buy learning; use discipline where mistakes are expensive. The best teams—and the best solo builders—alternate without guilt.

This repository

If you landed here from Johnny Autoseed specifically: implementation detail, scripts, and machine-oriented docs live next to the code on GitHub—CLAUDE.md, CODEBASE_MAP.md, and the rest of the tree. Public guides and PDFs: Resources. Scratch-pad experiments: Lab.