No entries indexed.

Your vault needs notes tagged with #lorebook. Each entry needs:
- tags: [lorebook] in frontmatter
- keys: [keyword1, keyword2] for matching

/dle-setup — configure vault · /dle-import — import from World Info
No entries match your filters. Try adjusting your search or filters.
Type /help in chat for commands and macros.
What exactly is DeepLore? I keep seeing it mentioned for worldbuilding in SillyTavern.
The user is asking for a high-level overview of DLE. Let me explain the core value proposition: Obsidian vault connection, automatic retrieval, and transparent pipeline.
DeepLore bridges your Obsidian worldbuilding vault directly into SillyTavern's prompt pipeline (the sequence of context blocks sent to the AI). Instead of manually copying lore into character cards (AI character definitions) or World Info entries (SillyTavern's built-in lorebook), you point DLE at your Obsidian vault and it handles the rest.
Every time you send a message, DLE scans your conversation, identifies which lore entries are relevant, and injects them into the AI's context — automatically. Your 10-page document about an ancient oath? The AI reads it when oaths come up in conversation, not before.
It's a full lorebook replacement with AI-powered retrieval, a live dashboard, and tools to help you maintain your vault — all from within SillyTavern.
How does it decide what lore to inject each turn?
Let me explain the core retrieval pipeline — the two-stage approach is the key differentiator from traditional lorebooks. Keywords for recall, AI for precision.
DLE runs a two-stage retrieval pipeline every generation:
Stage 1 — Keywords cast a wide net. DLE scans recent messages for keyword matches against your vault entries. BM25 fuzzy matching catches synonyms and thematic overlaps — "sorcery" can trigger your "Magic System" entry even if that exact word isn't a keyword.
Stage 2 — AI narrows it down. The keyword candidates and their summaries are sent to an AI call that selects the most contextually relevant entries. A conversation about "breaking an oath" pulls in your oath-related lore even if the entry title was never mentioned — because the AI understands the thematic connection.
You can also run AI-only mode (skip keywords, maximum thoroughness) or keywords-only mode (zero API calls, traditional lorebook behavior).
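The two stages above can be sketched in a few lines. This is a minimal illustration, not DLE's actual code: the `Entry` shape and `askModel` callback are assumptions, and Stage 1 uses plain substring matching where DLE uses BM25 fuzzy scoring.

```typescript
interface Entry {
  title: string;
  keys: string[];
  summary: string;
}

// Stage 1: wide-net keyword recall over recent messages.
// (Real DLE scores with BM25; substring matching keeps the idea visible.)
function keywordCandidates(recentText: string, vault: Entry[]): Entry[] {
  const haystack = recentText.toLowerCase();
  return vault.filter(e =>
    e.keys.some(k => haystack.includes(k.toLowerCase()))
  );
}

// Stage 2: an AI call narrows candidates to the contextually relevant few.
// `askModel` is a stand-in for whatever completion endpoint is configured.
async function selectRelevant(
  recentText: string,
  candidates: Entry[],
  askModel: (prompt: string) => Promise<string[]>
): Promise<Entry[]> {
  const prompt =
    `Conversation:\n${recentText}\n\nCandidates:\n` +
    candidates.map(c => `- ${c.title}: ${c.summary}`).join("\n") +
    `\n\nReturn the titles of the relevant entries.`;
  const picked = new Set(await askModel(prompt));
  return candidates.filter(c => picked.has(c.title));
}
```

Keywords-only mode is just Stage 1 by itself; AI-only mode skips Stage 1 and sends everything (or the cluster summaries) straight to Stage 2.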
My vault has over 200 entries. Won't that blow up my token budget?
The user has a large vault and is worried about token costs. Let me explain the three mechanisms that keep things efficient: clustering, budgeting, and caching.
No — DLE has multiple layers of protection:
Hierarchical clustering kicks in for large vaults (40+ entries). Instead of sending all 200+ entry summaries to the AI, it groups entries by category, asks the AI which categories are relevant, then only sends entries from those categories. Two small API calls instead of one massive one.
Token budget cap — you set a hard ceiling (e.g. 4,000 tokens). DLE over-requests from the AI, then sorts by confidence and trims from the bottom until the selection fits. High-confidence picks always survive.
Sliding window cache — if the conversation hasn't changed meaningfully (no new entity mentions, same manifest), DLE reuses the last AI selection. Regenerations and swipes are essentially free.
The token budget bar in the drawer shows real-time usage so you can see exactly where your budget is going.
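The budget cut amounts to a sort-and-trim. A minimal sketch, assuming a simple pick shape with per-entry token estimates (the real cut may differ in details):

```typescript
interface Pick {
  title: string;
  confidence: number; // 0..1 from the AI selection stage
  tokens: number;     // estimated token cost of the entry body
}

// Consider picks in descending confidence order and keep each one that
// still fits the budget — so high-confidence entries always survive.
function trimToBudget(picks: Pick[], budget: number): Pick[] {
  const kept: Pick[] = [];
  let used = 0;
  for (const p of [...picks].sort((a, b) => b.confidence - a.confidence)) {
    if (used + p.tokens <= budget) {
      kept.push(p);
      used += p.tokens;
    }
  }
  return kept;
}
```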
Can I control what gets injected, or is it all automatic?
The user wants to know about manual overrides. Let me cover pin/block, priority, cooldown, warmup, and probability — the key per-entry controls.
Both. The automatic pipeline handles most cases, but you have fine-grained manual controls:
Pin/Block per chat — pin entries to always inject in this conversation, or block entries to never inject. One click in the Browse tab or /dle-pin.
Priority system — entries have priority 0–10. Higher priority entries are selected first. Constants (priority 10) are always injected regardless of keyword matches.
Cooldowns — skip an entry for N turns after it triggers, preventing repetitive injection. Warmup gates require keywords to appear multiple times before an entry activates.
Probability gates — entries can have a 0–100% chance of triggering even when matched. Good for flavor text you want to appear sometimes.
All of this is per-entry in your Obsidian frontmatter, so your vault is the single source of truth.
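As a rough illustration, an entry's frontmatter might look like the fragment below. The `tags` and `keys` fields follow the conventions shown in this conversation; `priority`, `cooldown`, `warmup`, and `probability` are assumed spellings — check /dle-setup or the docs for the exact field names.

```
---
tags: [lorebook]
keys: [oath, vow, oathbreaker]
priority: 7        # 0–10; 10 = constant, always injected
cooldown: 3        # skip for 3 turns after triggering
warmup: 2          # keywords must appear twice before activating
probability: 75    # 75% chance to inject when matched
---
The Ancient Oath binds its speaker for seven generations...
```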
What about context-sensitive lore? Castle descriptions shouldn't show up when the scene is on a spaceship.
Contextual gating is a major feature. Let me cover the four built-in fields and then highlight custom fields as the differentiator.
That's what contextual gating handles. DLE ships with four built-in gating fields. Set them with slash commands (/dle-set-era future) or from the Gating tab in the drawer. Each entry declares which contexts it belongs to in frontmatter, and DLE filters automatically.
But the real power is custom fields. Define any gating dimension your world needs — faction, time_of_day, threat_level, magic_school — with a visual rule builder. No YAML editing required.
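Conceptually, a gating check is just a field-by-field membership test. A sketch under assumed shapes (DLE's real rule builder and field semantics may be richer):

```typescript
// Active context set by the user, e.g. { era: "future", location: "spaceship" }.
type Context = Record<string, string>;

// Each entry declares which values it belongs to per gating field.
// An empty or missing declaration means "no restriction on this field".
type Gates = Record<string, string[]>;

function passesGates(gates: Gates, context: Context): boolean {
  return Object.entries(gates).every(([field, allowed]) => {
    if (allowed.length === 0) return true; // unrestricted field
    const current = context[field];
    return current !== undefined && allowed.includes(current);
  });
}
```

Under this model the castle entry would declare `location: [castle]` and simply never pass while the active context says `location: spaceship`.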
How do I know what was injected and why?
Transparency is one of DLE's strongest selling points. Let me cover all four visibility layers — Why tab, Context Cartographer, Why Not diagnostics, and Pipeline Inspector.
DLE is obsessively transparent. Multiple layers of visibility:
Injection tab — the first tab in the drawer. Shows every injected entry this generation with match type (keyword, AI, constant, pinned), confidence level, token cost, and injection count. Switch to "Filtered" view to see what was rejected and why.
Context Cartographer — the 📖 button on each AI message. Click it to see the full injection breakdown for that specific response: which entries, why they matched, grouped by injection position, with expandable content previews.
"Why Not?" diagnostics — click any unmatched entry to get the exact failure reason: no keyword match, warmup not met, gating conflict, cooldown active, AI rejected, budget cut. Includes suggestions like "increase scan depth to reach this entry."
Pipeline Inspector (/dle-inspect) — the full trace of every keyword match, fuzzy score, AI selection, filter applied, and cut made.
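A "Why Not?" answer boils down to walking the pipeline's gates in order and reporting the first failure. This is a simplified sketch — the reason strings mirror the list above, but the state shape and exact ordering are assumptions:

```typescript
interface EntryState {
  keywordMatched: boolean;
  warmupMet: boolean;
  gatePassed: boolean;
  cooldownActive: boolean;
  aiSelected: boolean;
  fitBudget: boolean;
}

// Return the first reason an entry failed to inject, or null if it made it in.
function whyNot(s: EntryState): string | null {
  if (!s.keywordMatched) return "no keyword match";
  if (!s.warmupMet) return "warmup not met";
  if (!s.gatePassed) return "gating conflict";
  if (s.cooldownActive) return "cooldown active";
  if (!s.aiSelected) return "AI rejected";
  if (!s.fitBudget) return "budget cut";
  return null;
}
```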
Can I explore my vault without leaving SillyTavern?
The Browse tab is the most visually impressive drawer feature. Let me describe the filtering, sorting, heatmap, and inline controls.
The Browse tab is a full vault explorer with virtual scrolling — smooth even with 200+ entries.
Filter by status (injected, pinned, blocked, constants, seeds, never injected), by tags, by folders, or by any custom gating field. Sort by priority, alphabetically, by token count, or by injection frequency.
Each entry row shows the title, priority badge, injection count, and a temperature heatmap — entries that trigger more than average glow warm (red), underused entries glow cold (blue). At a glance you can spot imbalances.
Expand any entry for a content preview, keyword list, and an Obsidian deep-link to edit it directly. Pin or block entries with inline buttons — no slash commands needed.
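The heatmap is a comparison of each entry's injection count against the vault average. A sketch — the 25% bands here are arbitrary illustration values, not DLE's actual thresholds:

```typescript
// Map an injection count to a temperature relative to the vault average:
// well above average glows warm (red), well below glows cold (blue).
function temperature(count: number, average: number): "warm" | "cold" | "neutral" {
  if (average === 0) return "neutral";
  const ratio = count / average;
  if (ratio > 1.25) return "warm";
  if (ratio < 0.75) return "cold";
  return "neutral";
}
```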
What if the AI needs lore that doesn't exist in my vault yet?
Vault growth tools are a key differentiator. Let me cover the Librarian (passive gap detection), Auto Lorebook (active suggestions), and AI Vault Review (structural analysis).
DLE has three features for vault growth:
Librarian — the fourth drawer tab. When the AI encounters topics not covered by your vault, DLE flags them automatically. The Librarian inbox shows what was needed, how often, and how urgently. Sort by frequency to find systematic gaps. You can see I just flagged three items in this response — two new topics and one existing entry that could use more detail.
Auto Lorebook — analyzes your chat and suggests new entries with proper frontmatter (title, keywords, summary, content). One click to accept and write to your vault. Run on-demand with /dle-newlore.
AI Vault Review — submit your entire vault structure for AI analysis. Get feedback on coverage gaps, inconsistencies, and areas that need expansion.
The AI helps you build the vault, not just use it.
Does the AI remember things across a long conversation?
Long conversation memory is a common pain point. Scribe handles external memory (vault notes), AI Notebook handles internal memory (hidden AI state).
Two features handle conversational memory:
Session Scribe — auto-summarizes your conversation at configurable intervals. Writes timestamped markdown notes to your vault with tags, dates, and character context. Each summary builds on the previous one — no repetition. The AI can use these summaries for scribe-informed retrieval, giving it narrative perspective beyond just the recent messages.
AI Notebook — the AI maintains private session notes that persist across turns but are invisible to you. It tracks character motivations, unresolved plot threads, and secrets without saying them aloud. View it anytime with /dle-ai-notebook.
Scribe writes to your vault (external memory). AI Notebook stays in the chat session (internal memory). Together they make responses feel dramatically more coherent over long sessions.
Is there a way to see how all my entries connect to each other?
Visualization tools are impressive demos. Graph for relationships, Simulation for keyword behavior over time.
/dle-graph opens an interactive relationship graph — force-directed layout that handles 200+ nodes smoothly. Entries are color-coded nodes; edges show wikilinks, requires/excludes dependencies, and cascade links.
Ego-centric focus mode — click any node to isolate its N-hop neighborhood. Great for exploring one character's web of connections without the noise.
Gap analysis — highlights orphaned entries with no relationships. Dead entry detection clusters unreachable nodes. These are your vault's blind spots.
There's also Activation Simulation (/dle-simulate) — replays your chat history message-by-message showing which entries activate and deactivate at each step. Blue for newly active, red for deactivated. Invaluable for tuning keywords.
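The replay amounts to diffing the active entry set between steps. A sketch, where `activeAfter` stands in for whatever matcher DLE runs per message:

```typescript
// For each message, compute which entries newly activated (shown blue)
// and which deactivated (shown red) relative to the previous step.
function simulate(
  messages: string[],
  activeAfter: (history: string[]) => Set<string>
): { step: number; activated: string[]; deactivated: string[] }[] {
  const frames: { step: number; activated: string[]; deactivated: string[] }[] = [];
  let prev = new Set<string>();
  for (let i = 0; i < messages.length; i++) {
    const now = activeAfter(messages.slice(0, i + 1));
    frames.push({
      step: i,
      activated: [...now].filter(t => !prev.has(t)),
      deactivated: [...prev].filter(t => !now.has(t)),
    });
    prev = now;
  }
  return frames;
}
```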
This sounds complex. How hard is it to set up?
Setup friction is a common concern. The wizard, import tool, and "just markdown" format address all three barriers to adoption.
Five minutes with the Setup Wizard (/dle-setup), which walks you through each step.
If you're migrating from SillyTavern's built-in World Info, /dle-import converts your existing JSON to Obsidian vault notes with proper frontmatter — keywords, summaries, tags, everything.
Your vault entries are just regular Obsidian markdown files with YAML frontmatter. No special format. You can start with plain notes and add metadata incrementally.
What's all the stuff in the side panel?
The drawer is the primary UI surface. Let me give a complete tour — all five tabs, the status zone, and the footer with its health indicators.
The DLE drawer has five tabs (including Injection, Browse, Gating, and Librarian) plus two status zones:
Status zone (top) — color-coded pipeline health dot, entry count and mode, token budget bar that shifts from green to red as you approach the limit, active gating filters, and quick action buttons.
Footer (bottom) — context window usage bar, recent activity feed, five health indicator icons (vault, connection, pipeline, cache, AI service), and session statistics that accumulate across chat switches.
Everything updates in real time as you chat.