CyberMe: Visual knowledge base maintained by LLM Agent


https://github.com/hzw1199/CyberMe-LLM-Wiki

CyberMe is a general-purpose knowledge base framework maintained by an LLM Agent, suitable for personal, team, or enterprise knowledge bases. It keeps raw materials, the structured Wiki, and the Agent's working rules in a single repository, so the knowledge base can be maintained continuously like a codebase. It also provides a Wikipedia-like visual browsing experience, automatically maintains bidirectional links between entries, and ships with both Chinese and English interfaces.

Core idea:

Obsidian is the IDE, the LLM is the programmer, and the Wiki is the codebase.

Quick Start

  1. First configure the Front Matter template below in Obsidian Web Clipper, ensuring that Markdown files saved to raw/ include source, type, clipped time, and processing status
  2. Use Obsidian Web Clipper to convert web articles to Markdown and save them to raw/articles/. Put papers in raw/papers/, code snippets in raw/code/, and materials that cannot be classified yet in raw/_inbox/
  3. Download images locally in Obsidian: in Settings -> Files and links, set “Attachment folder path” to a fixed directory (for example, raw/articles/media/), and in Settings -> Hotkeys, bind a shortcut for “Download attachments for current file”. After clipping an article, trigger that shortcut to save remote images locally, so the LLM can read the images separately for additional context
  4. Open an LLM Agent that can read and write files (for example, Cursor or Claude Code), and have it read the schema and workflow conventions in CLAUDE.md
  5. Run Ingest on new materials, letting the Agent organize the original text into concept pages, topic pages, and analysis pages under wiki/
  6. Ask the Agent questions directly, letting it answer based on the existing Wiki instead of searching from scratch every time
  7. Run Lint regularly, letting the Agent check for contradictions, broken links, isolated pages, and missing concepts

All Markdown files under raw/ must include the following Front Matter:

```yaml
---
source: "Source URL"
type: article          # article | paper | repository | transcript | screenshot
clipped_at: "2026-04-05"
status: unprocessed    # unprocessed | ingested | error
title: "Article Title"
tags: []
---
```
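The status field above is what drives the ingest workflow. As a minimal sketch (not part of the framework, and deliberately using a flat key-value parser rather than full YAML), a script like this could list the raw files still awaiting ingestion:

```python
import re
from pathlib import Path

FM_RE = re.compile(r"^---\n(.*?)\n---", re.DOTALL)

def front_matter(text):
    """Parse the leading front matter into a flat dict.

    Minimal parser for the simple `key: value` fields used here;
    it does not handle nested YAML.
    """
    m = FM_RE.match(text)
    if not m:
        return {}
    fields = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            # Strip inline comments and surrounding quotes.
            value = value.split("#", 1)[0].strip().strip('"')
            fields[key.strip()] = value
    return fields

def unprocessed(raw_dir="raw"):
    """Return raw Markdown files whose status is still 'unprocessed'."""
    return sorted(
        str(p)
        for p in Path(raw_dir).rglob("*.md")
        if front_matter(p.read_text(encoding="utf-8")).get("status") == "unprocessed"
    )
```

In practice the Agent performs this scan itself when asked to list or batch-ingest un-ingested materials; the sketch only makes the bookkeeping concrete.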

Obsidian Web Clipper Example

When saving a web page as Markdown with Obsidian Web Clipper, you can confirm fields such as source, type, clipped_at, status, and title in the sidebar, then save it to the corresponding directory:

Obsidian Web Clipper example

Ingest and Query Examples

Send an Ingest request to the Agent and specify the source file under raw/articles/ to ingest:

Ingest request

The Agent reads wiki/index.md and the source file, creates or updates concept entries, the index, and the log, then outputs a processing report at the end:

Ingest result

After that, you can ask the Agent questions directly. It will read the wiki index and related entries first, then answer based on the existing knowledge base:

Wiki-based Query

Directory Structure

```text
CyberMe/
├── CLAUDE.md              # Agent working rules and Wiki schema
├── visualize/             # Frontend visualization app
├── raw/                   # Raw materials
│   ├── articles/          # Article materials
│   │   └── media/         # Images, videos, and attachments referenced by articles
│   ├── papers/            # Paper materials
│   ├── code/              # Code snippet materials
│   └── _inbox/            # Materials to be organized
└── wiki/                  # Knowledge base maintained by the Agent
    ├── concepts/          # Concept and entity entries
    ├── topics/            # Topic overviews and long-form analyses
    ├── analyses/          # Analysis pages produced by Query
    ├── index.md           # Content index
    └── log.md             # Timeline log
```

raw/ and wiki/ usually contain private, team, or enterprise-internal materials, and are not committed to public repositories by default.

Core Conventions

CLAUDE.md is the core of this framework. It specifies that before any operation, the Agent must first read wiki/index.md, then determine whether the current task is operation management, Ingest, Batch Ingest, Lint, or a knowledge query.

All Wiki pages use Markdown and YAML front matter:

```yaml
---
aliases: [Alias 1, Alias 2]
related: [Related Concept 1, Related Concept 2]
sources: [raw/articles/xxx.md]
created_at: "2026-04-05"
updated_at: "2026-04-05"
---
```

Use Obsidian-style [[bidirectional links]] in the body to connect concepts.
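These links are exactly what the Lint pass inspects. Below is a hypothetical sketch of one such check, finding [[links]] whose target matches no page stem under wiki/; the framework's actual Lint rules live in CLAUDE.md, and this simplified version ignores front-matter aliases:

```python
import re
from pathlib import Path

# Captures the target of [[Target]], [[Target|alias]], and [[Target#section]].
LINK_RE = re.compile(r"\[\[([^\]|#]+)")

def broken_links(wiki_dir="wiki"):
    """Map each wiki page to the [[links]] that resolve to no page.

    Assumes one page per concept, matched by file stem; aliases
    declared in front matter are ignored in this sketch.
    """
    pages = {p.stem for p in Path(wiki_dir).rglob("*.md")}
    broken = {}
    for page in Path(wiki_dir).rglob("*.md"):
        text = page.read_text(encoding="utf-8")
        targets = [t.strip() for t in LINK_RE.findall(text)]
        missing = sorted({t for t in targets if t and t not in pages})
        if missing:
            broken[str(page)] = missing
    return broken
```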

Common Commands

After opening the repository in an LLM Agent that can read and write files, you can operate it directly using natural language:

```text
Please ingest raw/articles/xxx.md
List the materials in raw/ that have not been ingested
Batch ingest the un-ingested materials in raw/
What does a certain concept mean?
Run a quality check on the wiki
```

The Agent automatically maintains wiki/index.md, wiki/log.md, cross-references between concept pages, and the ingest status of raw files according to CLAUDE.md.
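The last item, ingest status, amounts to flipping the status field in a raw file's front matter. A hypothetical helper illustrating the idea (in practice the Agent edits the file itself, following CLAUDE.md):

```python
import re
from pathlib import Path

def mark_ingested(raw_path):
    """Flip a raw file's front-matter status from 'unprocessed' to 'ingested'.

    Returns True if the file changed, False if it was already ingested.
    Illustrative only; any inline comment after the value is preserved.
    """
    p = Path(raw_path)
    text = p.read_text(encoding="utf-8")
    updated = re.sub(
        r"^status:\s*unprocessed", "status: ingested",
        text, count=1, flags=re.MULTILINE,
    )
    p.write_text(updated, encoding="utf-8")
    return updated != text
```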

Frontend Visualization

visualize/ is CyberMe’s frontend visualization app. It renders the Markdown knowledge base in wiki/ as a browsable website.

It supports both Chinese and English UI languages: on first visit, it automatically follows the browser language; you can also switch manually from the language button in the top-right corner, and the choice is saved locally in the browser.

It provides two views:

  • Wiki page: browse concepts, topics, analyses, and log in a Wikipedia-like style
  • Galaxy page: render concepts, topics, and analysis pages as a 3D force-directed knowledge graph

Visualization Examples

The Wiki home page shows an overview of the knowledge base, including topic, concept, and analysis counts, plus recent content:

Wiki home page

Use the search box to quickly locate concepts, topics, and analysis pages:

Wiki search

Concept pages show definitions, detailed explanations, related links, and entry metadata:

Concept page

Analysis pages store long-form analyses distilled from Query workflows:

Analysis page

The timeline log records ingest, query, and lint operations chronologically:

Timeline log

The Galaxy page renders the knowledge base as an interactive 3D knowledge graph:

Galaxy knowledge graph

Start the development server:

```bash
cd visualize
npm install
npm run dev
```

The development server reads wiki/ from the repository root and automatically refreshes when Wiki files change.

Build the static site:

```bash
cd visualize
npm run build
npm run preview
```

The build output is written to visualize/dist/. During build, the current wiki/ content and wiki-manifest.json are copied into the output directory, making it suitable for deployment to static hosting services such as GitHub Pages or Cloudflare Pages, or to any static file server such as nginx.
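wiki-manifest.json is what lets the static frontend enumerate pages without a server. The actual schema is defined by visualize/; the sketch below only illustrates the idea, with path, section, and title as assumed fields:

```python
import json
from pathlib import Path

def build_manifest(wiki_dir="wiki", out="wiki-manifest.json"):
    """Write a JSON index of every Markdown page under wiki/.

    The real manifest schema lives in visualize/; the fields here
    (path, section, title) are illustrative assumptions.
    """
    entries = []
    for p in sorted(Path(wiki_dir).rglob("*.md")):
        rel = p.relative_to(wiki_dir)
        entries.append({
            "path": str(rel),
            # Top-level folder (concepts, topics, analyses); "" for root files.
            "section": rel.parts[0] if len(rel.parts) > 1 else "",
            "title": p.stem,
        })
    Path(out).write_text(json.dumps(entries, indent=2), encoding="utf-8")
    return entries
```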

Suitable Use Cases

  • Long-term maintenance of personal, team, or enterprise knowledge bases
  • Turning saved articles into a queryable concept network
  • Browsing an Agent-organized bidirectional Wiki in Obsidian
  • Letting an LLM answer questions based on an existing knowledge base and persist useful analyses