Visarga

Mind Map for Coding Agents

Nov 4th, 2025
# Mind Map Format - Self-Documentation

> **For AI Agents:** This mind map is your primary knowledge index. Read overview nodes [1-5] first, then follow links [N] to find what you need. Always reference node IDs. When you encounter bugs, document your attempts in relevant nodes. When you make changes, update outdated nodes immediately, especially overview nodes, since they're your springboard. Add new nodes only for genuinely new concepts. Keep it compact (20-50 nodes typical). The mind map wraps every task: consult it, rely on it, update it.

[1] **Mind Map Format Overview** - A graph-based documentation format stored as plain text files where each node is a single line containing an ID, title, and inline references [2]. The format leverages LLM familiarity with citation-style references from academic papers, making it natural to generate and edit [3]. It serves as a superset structure that can represent trees, lists, or any graph topology [4], scaling from small projects (<50 nodes) to complex systems (500+ nodes) [5]. The methodology is fully detailed in PROJECT_MIND_MAPPING.md, with bootstrapping tools available.

[2] **Node Syntax Structure** - Each node follows the format: `[N] **Node Title** - node text with [N] references inlined` [1]. Nodes are line-oriented, allowing line-by-line loading and editing by AI models [3]. The inline reference syntax `[N]` creates bidirectional navigation between concepts, with links embedded naturally within descriptive text rather than as separate metadata [1][4]. This structure is both machine-parseable and human-readable, supporting grep-based lookups for quick node retrieval [3].

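As an illustration of how machine-parseable the line format is, here is a minimal Python sketch that extracts the ID, title, body text, and outgoing references from a single node line. The regex and function names are illustrative choices, not part of the format.

```python
import re

# One node per line: [N] **Node Title** - node text with [N] references inlined
NODE_RE = re.compile(r"^\[(\d+)\]\s+\*\*(.+?)\*\*\s+-\s+(.*)$")
REF_RE = re.compile(r"\[(\d+)\]")

def parse_node(line: str):
    """Return (node_id, title, text, referenced_ids), or None if the line is not a node."""
    match = NODE_RE.match(line.strip())
    if match is None:
        return None
    node_id, title, text = int(match.group(1)), match.group(2), match.group(3)
    refs = [int(r) for r in REF_RE.findall(text)]
    return node_id, title, text, refs

example = "[2] **Node Syntax Structure** - Each node follows the format defined in [1] and is easy to grep [3]."
print(parse_node(example))
# -> (2, 'Node Syntax Structure', 'Each node follows the format defined in [1] and is easy to grep [3].', [1, 3])
```
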
[3] **Technical Advantages** - The format enables line-by-line overwriting of nodes without complex parsing [2], making incremental updates efficient for both humans and AI agents [1]. Grep operations allow instant node lookup by ID or keyword without loading the entire file [2]. The text-based storage ensures version control compatibility, diff-friendly editing, and zero tooling dependencies [4]. LLMs generate this format naturally because citation syntax `[N]` mirrors academic paper references they've seen extensively during training [1][5].

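A minimal sketch of both operations, assuming the mind map lives in a single file (the name MINDMAP.md below is a hypothetical placeholder): a grep-style lookup that scans for the `[N] ` prefix, and an in-place update that rewrites exactly one line and leaves the rest of the file untouched.

```python
from pathlib import Path

def find_node(path: str, node_id: int) -> str | None:
    """Grep-style lookup: return the line for [node_id] without parsing the whole file."""
    prefix = f"[{node_id}] "
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if line.startswith(prefix):
            return line
    return None

def update_node(path: str, node_id: int, new_line: str) -> bool:
    """Overwrite the single line belonging to [node_id]; every other line stays untouched."""
    p = Path(path)
    lines = p.read_text(encoding="utf-8").splitlines()
    prefix = f"[{node_id}] "
    for i, line in enumerate(lines):
        if line.startswith(prefix):
            lines[i] = new_line
            p.write_text("\n".join(lines) + "\n", encoding="utf-8")
            return True
    return False  # not found; the caller can append a new node instead

# Usage (hypothetical file name):
# update_node("MINDMAP.md", 3, "[3] **Technical Advantages** - Updated description with [2][4].")
```
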
[4] **Graph Topology Benefits** - Unlike hierarchical trees or linear lists, the graph structure allows many-to-many relationships between concepts [1]. Any node can reference any other node, creating knowledge clusters around related topics [2][3]. The format accommodates cyclic references for concepts that mutually depend on each other, captures cross-cutting concerns that span multiple subsystems, and supports progressive refinement where nodes are added to densify understanding [5]. This flexibility makes it suitable as a universal knowledge representation format [1].

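As a sketch of how an agent might materialize that topology, the snippet below derives outgoing and incoming link sets from node lines; the helper names are illustrative and nothing here is prescribed by the format. Cycles and many-to-many links need no special handling because the result is an ordinary directed graph.

```python
import re
from collections import defaultdict

NODE_ID_RE = re.compile(r"^\[(\d+)\]")
REF_RE = re.compile(r"\[(\d+)\]")

def build_graph(lines: list[str]) -> tuple[dict[int, set[int]], dict[int, set[int]]]:
    """Return (outgoing, incoming) reference maps for all node lines."""
    outgoing: dict[int, set[int]] = {}
    incoming: dict[int, set[int]] = defaultdict(set)
    for line in lines:
        match = NODE_ID_RE.match(line)
        if match is None:
            continue
        node_id = int(match.group(1))
        # Every bracketed number on the line except the node's own leading ID is an outgoing link.
        refs = {int(r) for r in REF_RE.findall(line)} - {node_id}
        outgoing[node_id] = refs
        for ref in refs:
            incoming[ref].add(node_id)
    return outgoing, dict(incoming)

# Example: cyclic references between [1] and [2] are perfectly legal.
nodes = [
    "[1] **Overview** - Points at the syntax node [2].",
    "[2] **Syntax** - Defined in terms of the overview [1] and advantages [3].",
    "[3] **Advantages** - Builds on [2].",
]
out_refs, in_refs = build_graph(nodes)
print(out_refs)  # {1: {2}, 2: {1, 3}, 3: {2}}
print(in_refs)   # {2: {1, 3}, 1: {2}, 3: {2}}
```
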
[5] **Scalability and Usage Patterns** - Small projects typically need fewer than 50 nodes to capture core architecture, data flow, and key implementations [1]. Complex topics or large codebases can scale to 500+ nodes by adding specialized deep-dive nodes for algorithms, optimizations, and subsystems [4]. The methodology includes a bootstrap prompt (linked gist) for generating initial mind maps from existing codebases automatically [1]. Scale is managed through overview nodes [1-5] that serve as navigation hubs, with detail nodes forming clusters around major concepts [3][4]. The format remains navigable at any scale due to inline linking and grep-based search [2][3].

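One way to sanity-check the overview layer as a map grows, sketched under the same line-format assumptions: count inbound references per node and compare the most-referenced nodes against the current overview set. The function name and example numbers are illustrative.

```python
import re
from collections import Counter

REF_RE = re.compile(r"\[(\d+)\]")

def rank_hubs(lines: list[str], top_n: int = 5) -> list[tuple[int, int]]:
    """Rank nodes by inbound reference count; heavily referenced nodes are natural
    navigation hubs and candidates for (or checks on) the overview layer."""
    inbound = Counter()
    for line in lines:
        ids = [int(r) for r in REF_RE.findall(line)]
        if not ids:
            continue  # not a node line
        node_id, refs = ids[0], ids[1:]  # the first [N] on a line is the node's own ID
        inbound.update(set(refs) - {node_id})
    return inbound.most_common(top_n)

# Fed the three-node example above, this returns [(2, 2), (1, 1), (3, 1)]:
# node [2] is referenced twice, nodes [1] and [3] once each.
```
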