⚡ This product was generated with Kupkaike in under 4 minutes
A complete, field-tested Obsidian system that transforms 500+ scattered PDFs and fragmented notes into a networked knowledge base where literature reviews are 60–70% pre-assembled and every paper you read makes the next one easier to write. Built exclusively for PhD students and postdocs managing serious research workflows—not a generic productivity guide.

Generated by Claude Opus 4.6. Real content, unedited.
You've read hundreds of papers. You've taken notes—in Zotero, in Notion, in the margins of PDFs, in a Word document you can no longer find. And yet every time a new literature review comes due, you start almost from scratch. You vaguely remember reading something relevant six months ago, but you can't locate it, can't remember the argument, and can't connect it to what you're writing now. The reading effort never compounds. Each paper exists in isolation. You're not building a knowledge base—you're filling a leaky bucket. After years of graduate training, that's a quietly devastating realization.
The Academic Vault Blueprint is not a Zettelkasten tutorial repurposed for academics, and it's not another Obsidian YouTube walkthrough. It's an end-to-end research operating system designed specifically around four workflows that define academic output: systematic literature review, manuscript drafting, grant proposal development, and research ideation. Every structural decision—folder architecture, note atomization rules, linking conventions, plugin configuration—was made with one question in mind: does this make publishable output faster? The system integrates directly with Zotero, uses Dataview to surface connections across hundreds of notes automatically, and includes naming conventions and templates that work whether you're in molecular biology or political science.
The blueprint covers eight chapters moving from vault architecture through long-game career strategy, with step-by-step implementation at each stage. You'll build a Paper-to-Knowledge Pipeline that turns every PDF into permanent, linked, retrievable insight. You'll create Maps of Content that reveal cross-disciplinary patterns no single paper shows. You'll run a 72-Hour Literature Review process that assembles drafts from notes you've already written. Three bonuses ship with the guide: a downloadable Obsidian Starter Vault (.zip) with 14 Templater templates, 20+ Dataview queries, and a pre-built ATLAS folder structure ready to populate on day one; a Plugin Configuration Bible covering exact settings for 12 recommended plugins with screenshots; and 50 copy-paste Dataview Query Recipes built for academic use cases. Researchers who implement this system stop re-reading papers they've already annotated, stop losing cross-paper insights, and start producing literature reviews that feel like retrieval rather than reconstruction.
---
You've read that paper before. You know you have, because you can almost remember the argument — something about measurement invariance, or maybe it was the sampling frame — but you can't find your notes, and now you're re-reading it at 11pm the week your draft is due.
That feeling isn't a productivity problem. It's a structural one, and it's quietly bleeding your career.
---
Most researchers assume their reading effort compounds over time — that the 400 hours they've spent annotating PDFs since Year 1 are somehow "in there," accessible when needed. The uncomfortable truth is that for most PhD students and postdocs, reading effort has a half-life: without a system that captures, connects, and surfaces what you've read, most of the intellectual value you extract from a paper fades within a few weeks.
The Knowledge Leak Audit™ is a diagnostic process that maps exactly where your knowledge is escaping — and quantifies what it's costing you in time, insight, and publications. It works by examining six specific failure points that appear in virtually every academic workflow that hasn't been deliberately engineered.
Step 1: Identify Your Active Leak Points
Work through each of the six knowledge leak categories below. For each one, estimate how many hours per month it costs you.
Step 2: Calculate Your Knowledge ROI Score
This is a simple ratio with brutal implications.
Knowledge ROI Score = (Hours of reading per month) ÷ (Number of distinct insights reused in writing per month)
A researcher reading 15 hours per month who reuses 3 insights in their writing has a score of 5.0 — meaning it costs five hours of reading to produce one reused insight. Researchers with well-engineered systems routinely get this below 1.5.
Track this honestly for two weeks before moving forward. Count "reused insight" strictly: a specific claim, connection, or synthesis point from your notes that appears in a draft, grant, or presentation. Paraphrasing a paper you just re-read doesn't count.
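If you want to track the ratio without a spreadsheet, the arithmetic is trivial to script. A minimal sketch in Python (the `knowledge_roi` function name and the infinity convention for zero reuse are mine, not part of the system):

```python
def knowledge_roi(reading_hours_per_month: float, reused_insights_per_month: int) -> float:
    """Hours of reading required to produce one reused insight (lower is better)."""
    if reused_insights_per_month == 0:
        return float("inf")  # nothing reused: every reading hour is leaking
    return reading_hours_per_month / reused_insights_per_month

# The example from the text: 15 hours of reading, 3 reused insights
print(knowledge_roi(15, 3))  # 5.0 -- well above the ~1.5 target
```

Run it with your own two-week numbers before moving to Step 3.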
Step 3: Select Your Vault Architecture Archetype
Once you've mapped your leaks and scored your ROI, you're ready to make the single most important structural decision in this entire system: which vault architecture fits your research life right now.
There are three archetypes, and choosing the wrong one is the most common reason researchers abandon their Obsidian setup within a month.
---
Dr. Amara is a third-year sociology PhD student studying housing precarity and mental health outcomes. She has 340 papers in Zotero, annotations scattered across PDF Expert and a physical notebook, and a Notion database she built in Year 1 that she stopped updating after six months.
When she sits down to write her second dissertation chapter, she spends the first three weeks essentially re-reading her own field. She knows she read something about the Adverse Childhood Experiences framework being critiqued for its cross-cultural validity — it was directly relevant to her measurement section — but she cannot find it. She re-reads four papers trying to locate it. She never does.
Running the Knowledge Leak Audit™, Amara identifies her primary leaks as Annotation Loss (all her PDF highlights are stranded in files she never revisits) and Context Collapse (her Notion notes say things like "good on methodology" with no explanation of what that means for her argument). Her Knowledge ROI Score is 6.8 — nearly seven hours of reading per reused insight.
Her research profile: one active dissertation project, working independently, in a qualitative/mixed-methods social science discipline. The decision matrix points clearly to The Monograph Vault. She doesn't need a sophisticated tagging taxonomy or cross-project linking yet. She needs a simple, durable structure that gets her annotations out of PDFs and into connected, argument-ready notes.
By the end of Chapter 3, Amara's vault will have her entire measurement literature pre-organized by theoretical tension, not by author — and her next chapter draft will open with 60% of its citations already in place.
---
Rate each statement from 1 (never) to 5 (constantly). Be honest — this is a diagnostic, not a performance review. The six leak categories, two statements each (Q1–Q12):
- Annotation Loss
- Citation Orphaning
- Insight Decay
- Context Collapse
- Retrieval Failure
- Connection Blindness

Knowledge ROI Score (from Step 2): `_______`

Scoring: Add your scores for Q1–Q12.
Vault Architecture Decision Matrix
| Research Profile | Active Projects | Collaboration Level | Recommended Archetype |
|---|---|---|---|
| PhD, Years 1–4, single dissertation | 1 | Solo or supervisor only | Monograph Vault |
| PhD Year 5 / Postdoc, multiple papers | 2–4 | Occasional co-authors | Portfolio Vault |
| Postdoc / PI, lab or group work | 3+ | Regular multi-author | Lab Vault |
| Postdoc, highly interdisciplinary | 2–3 | Solo with broad reading | Portfolio Vault |
My archetype: `_______________________`
---
You've already diagnosed where your knowledge is leaking. Now it's time to build the container that holds it—permanently, scalably, and in a way that makes retrieval feel effortless rather than archaeological.
Most researchers who open Obsidian for the first time make the same mistake: they start creating notes immediately, before they've built the scaffolding. Two months later they have 200 notes in a flat list, a folder called "Misc," and the same anxiety they had in their old system. The ATLAS Vault Structure™ prevents this by giving every note a predetermined home before it exists.
ATLAS stands for Annotations, Theories, Literature Maps, Active Projects, Source Index. These five folders map directly to the five cognitive modes of academic research work—reading, thinking, synthesizing, producing, and tracking. Here's the full hierarchy:
```
📁 00_INBOX
📁 01_ANNOTATIONS
    📁 01a_LiteratureNotes
    📁 01b_ConferenceNotes
    📁 01c_SeminarNotes
📁 02_THEORIES
    📁 02a_AtomicNotes
    📁 02b_Arguments
    📁 02c_Counterarguments
📁 03_LITERATURE_MAPS
    📁 03a_ConceptMaps
    📁 03b_FieldOverviews
    📁 03c_ResearchGaps
📁 04_ACTIVE_PROJECTS
    📁 04a_Manuscripts
    📁 04b_Grants
    📁 04c_Presentations
    📁 04d_ReviewResponses
📁 05_SOURCE_INDEX
    📁 05a_AuthorPages
    📁 05b_JournalPages
    📁 05c_DatasetRegistry
📁 06_TEMPLATES
📁 07_ARCHIVE
```
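If you'd rather not click out twenty folders by hand, the hierarchy above can be generated in one pass. A minimal sketch in Python's standard library; the `create_atlas` function name and the placeholder vault path are mine:

```python
from pathlib import Path

# Leaf folders of the ATLAS hierarchy; parents are created implicitly.
ATLAS_FOLDERS = [
    "00_INBOX",
    "01_ANNOTATIONS/01a_LiteratureNotes",
    "01_ANNOTATIONS/01b_ConferenceNotes",
    "01_ANNOTATIONS/01c_SeminarNotes",
    "02_THEORIES/02a_AtomicNotes",
    "02_THEORIES/02b_Arguments",
    "02_THEORIES/02c_Counterarguments",
    "03_LITERATURE_MAPS/03a_ConceptMaps",
    "03_LITERATURE_MAPS/03b_FieldOverviews",
    "03_LITERATURE_MAPS/03c_ResearchGaps",
    "04_ACTIVE_PROJECTS/04a_Manuscripts",
    "04_ACTIVE_PROJECTS/04b_Grants",
    "04_ACTIVE_PROJECTS/04c_Presentations",
    "04_ACTIVE_PROJECTS/04d_ReviewResponses",
    "05_SOURCE_INDEX/05a_AuthorPages",
    "05_SOURCE_INDEX/05b_JournalPages",
    "05_SOURCE_INDEX/05c_DatasetRegistry",
    "06_TEMPLATES",
    "07_ARCHIVE",
]

def create_atlas(vault_root: str) -> None:
    """Create the full ATLAS folder tree inside vault_root (idempotent)."""
    for folder in ATLAS_FOLDERS:
        Path(vault_root, folder).mkdir(parents=True, exist_ok=True)

# create_atlas("/path/to/MyVault")  # replace with your actual vault path
```

Running it twice is harmless — `exist_ok=True` skips folders that already exist, so you can re-run it after restructuring.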
The rules that make this work:
Folder structure solves where. Naming convention solves what. The protocol is: `[YYYYMMDD]-[TypePrefix]-[ShortTitle]`
Type prefixes:
Example filenames:
The date prefix gives you chronological sorting without relying on file metadata (which breaks when you move files). The type prefix tells you the note's function before you open it. The short title uses CamelCase with no spaces—this keeps filenames compatible across operating systems and makes them readable in graph view.
Install these in order. Each builds on the last.
Core Infrastructure:
Synthesis & Visualization:
Project & Time Management:
Maintenance:
Dr. Priya Nair is a third-year PhD student in cognitive neuroscience. She's read 340 papers on working memory and has highlights scattered across Zotero, a Notion database she abandoned six months ago, and a folder called "Important PDFs" on her desktop.
She implements ATLAS on a Tuesday afternoon. By Thursday, she has:
When her advisor asks for a draft literature review section on the phonological loop two weeks later, Priya opens her `MN-PhonologicalLoopMechanisms` map note. It already links to 23 literature notes and 11 atomic notes she's written over the past year. The section takes four hours to draft, not four weeks. This is what the system is designed to produce.
Section A: Folder Creation (8 points)
Section B: Plugin Installation (12 points)
Section C: Hotkey Assignments (8 points)
You've read the paper. You highlighted it, maybe even annotated it. Three months later, you're staring at a literature review section and you can't remember what it said—so you open it again, from scratch. This chapter ends that cycle permanently.
---
The EXTRACT Method is a six-stage processing protocol designed to transform a single PDF into a cluster of atomic, linked notes in under 25 minutes. It's not a reading strategy—it's a conversion protocol. The goal isn't comprehension alone; it's permanent integration into your knowledge network so that every paper you process makes your next paper easier to write.
Each letter maps to a discrete cognitive action performed at a specific point in the workflow.
E — Examine (Minutes 0–5: The Skim Pass)
Open the PDF in your Zotero-linked reader. Your only job here is to capture the paper's skeleton. Read the abstract, introduction's final paragraph, all headings, the first sentence of each Results section, and the conclusion. Do not read full paragraphs yet. In Obsidian, open a new Literature Note using your Templater template (covered below) and populate the auto-filled metadata fields: citekey, authors, year, journal, DOI. Then manually add three fields: Central Claim (one sentence), Method Type (e.g., RCT, ethnography, meta-analysis, simulation), and Relevance Flag (High / Medium / Low relative to your current project). This five-minute pass prevents you from spending 45 minutes on a paper that turns out to be tangential.
T — Tag Claims (Minutes 5–12: Deep Pass, Phase 1)
Now read with intention. As you move through the paper, tag every claim that matters using a consistent in-text notation system. Use three tags only: `[C]` for a core claim the authors are defending, `[E]` for empirical evidence they cite or generate, and `[L]` for a limitation they acknowledge or one you notice. Don't write notes yet—tag first. This keeps you in reading mode rather than constantly switching to writing mode, which fragments comprehension.
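If you export your highlights to plain text (most PDF readers can), the tag pass can be audited automatically before you move on. A minimal sketch; the `count_claim_tags` helper is mine, the `[C]`/`[E]`/`[L]` notation is the method's:

```python
import re
from collections import Counter

TAG_PATTERN = re.compile(r"\[([CEL])\]")  # the [C]/[E]/[L] notation from the Tag Claims stage

def count_claim_tags(annotation_text: str) -> Counter:
    """Count core claims [C], evidence [E], and limitations [L] in exported annotations."""
    return Counter(TAG_PATTERN.findall(annotation_text))

highlights = """
[C] High WM load reduces attentional capture.
[E] Alpha-band power increased under load (N=24).
[L] No active control group.
[C] Results contradict load theory predictions.
"""
print(count_claim_tags(highlights))  # Counter({'C': 2, 'E': 1, 'L': 1})
```

A paper with many `[C]` tags but zero `[E]` tags is a flag in itself: you've captured claims without capturing what supports them.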
R — Restate Atomically (Minutes 12–20: Deep Pass, Phase 2)
Now convert your tagged claims into Atomic Notes. Each note captures exactly one idea. Use the Claim-Evidence-Implication (CEI) format:
- Claim: the single idea, stated in your own words
- Evidence: the specific data or argument that supports it
- Implication: what the claim means for your project, method, or field
Name each note using a descriptive title that works as a standalone concept, not a paper reference. For example: `Cortisol reactivity is blunted in chronic stress populations` rather than `Smith2019_finding3`. The note title should be searchable by concept, not by source.
A — Associate Links (Minutes 20–23: Integration Pass, Phase 1)
For each Atomic Note you've just created, ask five linking questions:
- Does it use, extend, or critique a method you have notes on?
- Does it support or challenge a theory in your vault?
- Does it replicate, refine, or qualify a finding you've already noted?
- Does it speak to a research gap you've identified?
- Does it contradict a claim elsewhere in your vault?
Add bidirectional links (`[[note title]]`) for every yes. Minimum three links per Atomic Note. If you can't find three, that's a signal—either your vault is thin in this area (note the gap explicitly) or you haven't read enough in this subfield yet (flag it for your reading queue).
C — Cross-Reference (Minutes 23–24: Integration Pass, Phase 2)
Return to your Literature Note and add a Connections section. List every Atomic Note you created from this paper and every pre-existing note you linked to. Then add one field: Contradicts. If this paper challenges anything in your vault, name it explicitly. Contradiction notes are gold—they become the seeds of your Discussion sections and grant proposal arguments.
T — Triage Importance (Minute 25)
Final 60 seconds. Add one of three tags to your Literature Note: `#foundational` (this paper must appear in any literature review you write in this area), `#supporting` (useful for specific arguments), or `#peripheral` (relevant but not central). This triage system means that when you're assembling a literature review at 11pm before a submission deadline, you can filter your vault by `#foundational` and have your core sources in seconds.
---
Your Literature Note template should auto-populate via the Citations plugin pulling from Zotero. Here's the core structure:
```
---
citekey: {{citekey}}
authors: {{authors}}
year: {{year}}
journal: {{journal}}
doi: {{doi}}
tags: [literature-note, {{field}}, ]
relevance:
---
## Central Claim

## Method Type

## Connections

## Contradicts
```
Your Atomic Note template is simpler:
```
---
source: [[]]
tags: [atomic-note, ]
created: {{date}}
---
Claim:
Evidence:
Implication:
```
The Templater plugin inserts the date automatically. The `source` field links back to the Literature Note, creating a two-way trail: from the concept back to the paper, and from the paper forward to every idea it generated.
---
Dr. Priya Nair is a third-year PhD student in cognitive neuroscience studying attention regulation in ADHD populations. Her vault has 47 Literature Notes but almost no Atomic Notes—she's been summarizing papers, not atomizing them. She processes a new paper: "Working memory load modulates attentional capture in adults with ADHD" (fictional citation).
Examine: She skims in five minutes. Central Claim: High working memory load reduces attentional capture in ADHD adults, contrary to predictions from load theory. Method Type: Within-subjects EEG paradigm. Relevance: High—directly intersects her dissertation chapter 2.
Tag Claims: She reads fully and tags six `[C]` claims, four `[E]` tags pointing to specific EEG frequency band data, and two `[L]` tags—one the authors acknowledge (small N=24), one she notices (no active control group).
Restate Atomically: She creates four Atomic Notes. One example:
Associate Links: This note links to `[[Load Theory - Lavie 1995]]`, `[[Alpha oscillations as attentional gating mechanism]]`, `[[ADHD attentional capture - meta-analysis notes]]`, and `[[Intervention designs using cognitive load]]`. Five links, all bidirectional.
Cross-Reference: Her Literature Note now shows four Atomic Notes created and a Contradicts field pointing to `[[Load Theory predicts worse filtering under high load]]`—a note she wrote six months ago that she now needs to update.
Triage: `#foundational`. This paper will appear in her dissertation.
Total time: 23 minutes. The paper is now permanently woven into her knowledge network.
---
Use this sheet for each of your first three papers. Print it or copy it into a note.
```
PAPER PROCESSING SHEET — EXTRACT METHOD
========================================
Paper Title: ___________________________________
Citekey: _______________ Date Processed: _______
Timer Start: _________
─── EXAMINE (0–5 min) ──────────────────────────
Central Claim (1 sentence, your words):
_________________________________________________
Method Type: ____________________________________
Relevance to Current Project: [ ] High [ ] Med [ ] Low
Reason for relevance (1 sentence):
_________________________________________________
─── TAG CLAIMS (5–12 min) ──────────────────────
Number of [C] tags placed: ______
Number of [E] tags placed: ______
Number of [L] tags placed: ______
Most surprising claim tagged:
_________________________________________________
─── RESTATE ATOMICALLY (12–20 min) ─────────────
Atomic Note 1 Title: ____________________________
Claim: _______________________________________
Evidence: ____________________________________
Implication: _________________________________
Atomic Note 2 Title: ____________________________
Claim: _______________________________________
Evidence: ____________________________________
Implication: _________________________________
Atomic Note 3 Title: ____________________________
Claim: _______________________________________
Evidence: ____________________________________
Implication: _________________________________
─── ASSOCIATE LINKS (20–23 min) ────────────────
For each Atomic Note, check all that apply:
Note 1: [ ] Method [ ] Theory [ ] Finding [ ] Gap [ ] Contradiction
Links added: [[ ]] [[ ]] [[ ]]
Note 2: [ ] Method [ ] Theory [ ] Finding [ ] Gap [ ] Contradiction
Links added: [[ ]] [[ ]] [[ ]]
Note 3: [ ] Method [ ] Theory [ ] Finding [ ] Gap [ ] Contradiction
Links added: [[ ]] [[ ]] [[ ]]
─── CROSS-REFERENCE (23–24 min) ────────────────
Total Atomic Notes created this session: ______
Pre-existing notes linked to: ______
Contradicts (if any):
_________________________________________________
─── TRIAGE (min 25) ────────────────────────────
[ ] #foundational [ ] #supporting [ ] #peripheral
Reason: _________________________________________
Timer End: _______ Total Time: _______
```
---
You've been atomizing notes for weeks, and your vault is filling up—but a vault full of isolated atomic notes is just a very organized pile. The real intellectual leverage comes when those notes start talking to each other.
A Map of Content (MOC) is not a summary, an outline, or a table of contents. It's a navigable hub note that pulls together atomic notes around a shared axis—a concept, a method, a debate, a timeline—and makes the relationships between them visible. Where your atomic notes are the stars, MOCs are the constellations: the patterns that only emerge when you step back and draw the lines.
The Constellation Mapping System™ organizes your vault into four distinct MOC types, each designed to answer a different kind of research question. You don't build all four for every topic—you choose the type based on what you're trying to understand.
---
Type 1: Theme Constellations
Purpose: What do we know about concept X?
Theme Constellations group atomic notes by shared conceptual territory. This is the MOC type you'll build most often, and it's the one that makes literature reviews 60-70% pre-assembled.
Steps:
---
Type 2: Method Galaxies
Purpose: How has this phenomenon been studied?
Method Galaxies compare methodological approaches across papers. This MOC type is essential before designing your own study and invaluable when writing a methods justification.
Steps:
---
Type 3: Debate Maps
Purpose: Where does the field disagree, and why?
Debate Maps are where your vault starts generating original research questions. They surface the productive friction between competing positions.
Steps:
---
Type 4: Timeline Arcs
Purpose: How has thinking on this topic evolved?
Timeline Arcs track conceptual development across publications and reveal when paradigm shifts happened—and what triggered them.
Steps:
---
Once you have 20+ atomic notes on a topic, open your Theme Constellation MOC and activate the Local Graph view (open it from the command palette; by default Ctrl/Cmd + G opens the global graph). Set depth to 2. What you're looking for:
The structural holes you find here are not failures—they're your research agenda.
---
Paste this Dataview query into any MOC note to automatically surface notes with opposing conclusions on the same topic:
```dataview
TABLE conclusion, tags, source
FROM #your-topic-tag
WHERE contains(file.tags, "#contradicts") OR contains(conclusion, "no effect") OR contains(conclusion, "negative")
SORT source ASC
```
Setup requirement: When writing atomic notes, add a `conclusion::` inline field with a one-line summary (e.g., `conclusion:: spaced repetition improves retention`) and tag notes where your conclusion conflicts with another source with `#contradicts`. The query does the rest.
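For concreteness, here is what a conforming atomic note might look like (the note titles and the conclusion line are illustrative, not from the guide):

```markdown
---
source: [[Smith 2019 - Spaced practice RCT]]
tags: [atomic-note]
---
conclusion:: spaced repetition improves retention
#contradicts

The massed-practice notes from [[Jones 2021 - Cramming under time pressure]]
reach the opposite conclusion; tagging both lets the contradiction query
surface them side by side.
```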
Two additional query recipes for MOC auto-drafting:
Query 2 — Pull all unlinked notes on a topic for a Theme Constellation draft:
```dataview
LIST
FROM #cognitive-load
WHERE length(file.inlinks) = 0
SORT file.ctime ASC
```
Query 3 — Build a Method Galaxy table automatically:
```dataview
TABLE design, sample-size, key-limitation
FROM #empirical AND #your-topic
SORT publication-year ASC
```
These queries don't build your MOC for you—they give you the raw material so you're not hunting through 400 notes manually.
---
Scenario: Dr. Sofia Reyes is a third-year PhD student in educational psychology. Her dissertation examines how worked examples affect problem-solving transfer. She has 67 atomic notes across papers on cognitive load theory, example-problem pairs, and transfer-appropriate processing.
She starts with a Theme Constellation for "worked examples," runs the Dataview query, and finds her notes cluster into three groups: theoretical explanations, lab-based studies, and classroom-based studies. But the local graph view reveals a structural hole: her lab studies and classroom studies share no connecting notes. She's been reading two literatures that rarely cite each other.
She then builds a Debate Map for the contested claim "worked examples improve transfer." Position A (lab studies): yes, consistently. Position B (classroom studies): mixed, often null results. The Methodological Explanation column reveals the difference—lab studies use near-transfer tasks, classroom studies use far-transfer assessments. That single observation becomes the central argument of her dissertation's contribution chapter.
Finally, she builds a Timeline Arc and discovers that the near/far transfer distinction was actually raised in a 1999 paper by Sweller—but then largely ignored for 15 years before resurfacing in 2014. That gap is now a paragraph in her introduction.
Three MOCs. One week of work. A dissertation argument she couldn't see when her notes were in folders.
---
MOC Type you're building: _______________________________________________
Central question this MOC answers: _________________________________________
---
Step 1 — Seed Collection
List 5-10 atomic notes that belong in this MOC (note title + one-line summary):
| Note Title | One-Line Summary |
|---|---|
| | |
| | |
| | |
| | |
| | |
---
Step 2 — Cluster Identification
Group your notes into 2-5 sub-clusters. Name each cluster:
| Cluster Name | Notes Included | What This Cluster Reveals |
|---|---|---|
| | | |
| | | |
| | | |
---
Step 3 — Structural Holes
Which clusters have fewer than 2 notes? What papers might fill this gap?
Gap identified: _________________________________________________________________
Possible sources to read: _________________________________________________________
---
Step 4 — MOC Template (paste into Obsidian)
````markdown
# [MOC Title]

2-3 sentences: what does this MOC cover and why does it matter to your research?

## Cluster 1: [Name]
- [[Note title]]: synthesis sentence
- [[Note title]]: synthesis sentence

## Cluster 2: [Name]
- [[Note title]]: synthesis sentence

## Contradictions

```dataview
TABLE conclusion, source
FROM #your-tag
WHERE contains(file.tags, "#contradicts")
```

## Gaps and Open Questions
-
````
---
Step 5 — Cross-MOC Connections
Which other MOCs does this one connect to? (Reference the MOC types you've already built or plan to build)
This MOC connects to: ____________________________________________________________
The connection is significant because: ______________________________________________
---
You've spent two years reading, annotating, and atomizing notes into your vault. The literature review due in two weeks shouldn't feel like starting from scratch—and after this chapter, it won't.
---
The PRISM Review Protocol™ is a five-stage system that converts your existing vault into a structured, citation-ready literature review in 72 hours. It works because your vault is no longer a storage system — it's a pre-assembled argument waiting to be sequenced. The five stages map onto three days: Pull, Refine, and Identify on Day 1; Synthesize on Day 2; Map on Day 3.
P — Pull Relevant MOCs
Open every Map of Content related to your review topic. If you built your MOCs correctly in Chapter 4, each one represents a conceptual cluster in your field. Pull them all into a single "Review Hub" note using `[[wikilinks]]`. Don't filter yet—cast wide. Your MOC for "predictive coding" and your MOC for "attention mechanisms" might both belong in a review on active inference. Judgment comes later.
R — Refine Scope with Gap Analysis
Run a tag intersection query (template provided in the Worksheet section) to surface every atomic note tagged with your core constructs. Then open your Constellation Map from Chapter 4 and identify which nodes have no outgoing links to synthesis notes. Those are your gaps—topics your vault has raw material on but hasn't yet integrated. Gaps become your review's contribution framing: "This review synthesizes X and Y, which have not previously been examined together."
I — Identify Narrative Threads
This is the intellectual core of Day 1. Open the Kanban plugin and create a board called `[Review Title] — Narrative Threads`. Each column is a candidate section of your review. Drag atomic note cards into columns based on the argument they support, not the paper they came from. A single paper might contribute cards to three different columns. This is the moment your vault stops being a bibliography and becomes a manuscript skeleton.
S — Synthesize into Sections
Day 2 is the Synthesis Sprint. Working column by column on your Kanban board, open each atomic note in a split pane and write connecting prose between them using Note Composer to merge related notes into a single section draft. Your job isn't to summarize each note—it's to write the argument that makes them cohere. The notes are your evidence; you supply the logic.
M — Map Citations Back to Sources
Day 3 is verification. Every claim in your draft traces back to a source note, and every source note links to a Zotero entry. You'll run a Coverage Audit query to catch papers in your vault that are relevant but uncited in the current draft—these are your "orphaned sources," and they either belong in the review or belong in a footnote explaining why you excluded them.
---
Scenario: Dr. Amara Osei is a third-year PhD student in cognitive neuroscience writing a literature review on interoceptive predictive processing for a target journal submission in 14 days. Her vault has 340 atomic notes, 12 MOCs, and 180 Zotero-linked papers accumulated over 18 months.
Day 1 (6 hours): Amara pulls her MOCs on `interoception`, `predictive processing`, `allostasis`, and `affective neuroscience`. She runs a Dataview query filtering for notes tagged `#interoception AND #prediction-error` and surfaces 47 atomic notes she'd forgotten writing. She builds a Kanban board with six columns: Historical Background, Computational Models, Neural Substrates, Clinical Implications, Methodological Debates, and Open Questions. She spends 90 minutes dragging note cards into columns. Fourteen notes don't fit cleanly—she creates a seventh column called Contested Claims rather than forcing them.
Day 2 (7 hours): Amara works through each Kanban column in order. For Computational Models, she has 11 atomic notes. She opens them in a split pane, uses Note Composer to merge the three most closely related into a single draft block, then writes 200 words of original connecting prose explaining why Friston's free energy formulation diverges from Clark's predictive processing account on the question of motor control. She doesn't write from memory—she writes from evidence already in her vault. By end of Day 2, she has 4,200 words of rough draft across five sections.
Day 3 (4 hours): Amara runs the Coverage Audit query. It flags eight papers in her vault tagged `#interoception` that appear in zero citations in her current draft. Three are genuinely relevant and get integrated. Two are methodological papers she consciously excluded—she adds a one-sentence exclusion rationale. Three are tangential and get moved to a "Future Reading" note. She uses the Zotero Integration plugin to auto-generate her bibliography, then cross-checks 15 direct quotes against their source PDFs. Final word count before revision: 5,800 words.
The review that would have taken her three weeks of re-reading and reorganizing took 17 focused hours because the intellectual work was already done.
---
Section 1: Review Scope Definition
```
Review Working Title: ________________________________
Target Journal/Venue: ________________________________
Word Count Target: ________________________________
Submission Deadline: ________________________________
Core Constructs (3-5 terms): ________________________________
Primary Tags in Vault: ________________________________
MOCs to Pull (list all): ________________________________
```
Section 2: Narrative Thread Mapper
Use this to pre-plan your Kanban columns before opening the plugin.
```
Section 1 Title: ________________
Supporting MOCs: ________________
Key Tags: ________________
Estimated Note Count: ____
Section 2 Title: ________________
Supporting MOCs: ________________
Key Tags: ________________
Estimated Note Count: ____
Section 3 Title: ________________
Supporting MOCs: ________________
Key Tags: ________________
Estimated Note Count: ____
Section 4 Title: ________________
Supporting MOCs: ________________
Key Tags: ________________
Estimated Note Count: ____
Section 5 Title: ________________
Supporting MOCs: ________________
Key Tags: ________________
Estimated Note Count: ____
Gap/Contested Claims Column: ________________
```
Section 3: Five Copy-Paste Dataview Queries
Save these in a note called `_Review Query Templates` in your `06_TEMPLATES` folder.
Query 1 — Pull all notes with two intersecting tags:
```dataview
TABLE file.name, tags, file.mtime
FROM #tag1 AND #tag2
SORT file.mtime DESC
```
Replace `#tag1` and `#tag2` with your core constructs. Run this for every pair of constructs in your review.
Query 2 — Pull all notes linked to a specific MOC:
```dataview
TABLE file.name, file.mtime
FROM [[Your MOC Name]]
SORT file.mtime ASC
```
This surfaces notes you linked to the MOC but may have forgotten.
Query 3 — Pull notes created during a specific reading period:
```dataview
TABLE file.name, tags, file.ctime
FROM "Atomic Notes"
WHERE file.ctime >= date(2023-01-01) AND file.ctime <= date(2023-12-31)
AND contains(tags, "#your-tag")
SORT file.ctime DESC
```
Useful for recovering notes from a specific literature sprint.
Query 4 — Pull all notes with a Zotero citation key:
```dataview
TABLE file.name, zotero-key, tags
FROM "Atomic Notes"
WHERE zotero-key != null
AND contains(tags, "#your-tag")
SORT file.name ASC
```
Requires your EXTRACT template to include a `zotero-key:` frontmatter field.
Query 5 — Coverage Audit (orphaned sources):
```dataview
TABLE file.name, zotero-key, tags
FROM "Atomic Notes"
WHERE contains(tags, "#your-tag")
AND !contains(file.inlinks, [[Your Draft Note Name]])
SORT file.mtime DESC
```
This flags every tagged note that has NOT been linked to your current review draft. Run this at the start of Day 3.
Section 4: Day 1–2–3 Time-Blocked Schedule
```
DAY 1 — Architecture (Target: 5-6 hours)
09:00–10:00 Pull MOCs + run tag intersection queries
10:00–11:30 Review all surfaced notes; flag irrelevant ones
11:30–12:30 Build Kanban board; create section columns
12:30–14:00 Drag note cards into columns
14:00–15:00 Identify gaps; write gap analysis paragraph
End of Day 1 Deliverable: Populated Kanban board with all notes assigned
DAY 2 — Synthesis Sprint (Target: 6-8 hours)
09:00–09:30 Prioritize column order; start with strongest section
09:30–11:30 Draft Section 1 (merge notes + write connecting prose)
11:30–13:30 Draft Section 2
14:00–16:00 Draft Sections 3–4
16:00–17:00 Draft Section 5 + Introduction skeleton
End of Day 2 Deliverable: Complete rough draft (all sections present)
DAY 3 — Verification Pipeline (Target: 3-4 hours)
09:00–09:30 Run Coverage Audit query; review flagged notes
09:30–10:30 Integrate or exclude orphaned sources
10:30–11:30 Cross-check 20% of citations against source PDFs
11:30–12:30 Generate bibliography via Zotero Integration
12:30–13:00 Final structural read; flag sections needing expansion
End of Day 3 Deliverable: Submission-ready draft + complete bibliography
```
Section 5: Coverage Audit Checklist
```
[ ] Coverage Audit query run and results reviewed
[ ] All flagged notes triaged (integrate / exclude / defer)
[ ] Exclusion rationale noted for any deliberately omitted sources
[ ] Every direct quote verified against source PDF
```
You've spent months building a vault that thinks alongside you — now it's time to make it write alongside you. The gap between "I have great notes" and "I have a submitted manuscript" is where most researchers stall, not because they lack ideas, but because they lack a system for converting accumulated knowledge into structured prose.
---
The Scaffold-to-Draft Pipeline™ is a six-stage workflow that transforms your linked atomic notes into a submittable manuscript without ever starting from a blank page. The core principle: assembly before authorship. You build the argument's skeleton from existing notes, then write connective tissue — not content you already have.
Stage 1: Argument MOC (The Reverse Outline)
Before writing a single sentence, create a dedicated Map of Content for the manuscript. Name it `MOC - [Manuscript Short Title]`. Inside, don't list topics — list claims. Each bullet is a falsifiable assertion your paper will defend:
```
- Claim 1: [Falsifiable assertion, e.g., "Hybrid verification reduces transaction friction for SMEs"]
- Claim 2: [Falsifiable assertion]
- Claim 3: [Falsifiable assertion]
```
Link each claim to the atomic notes and literature notes already in your vault that support, complicate, or contradict it. This is your Constellation Map (Chapter 4) applied to a specific rhetorical purpose.
Stage 2: Section Scaffolding
Create a new note titled `DRAFT - [Manuscript Title]`. Use Obsidian's embed syntax to pull your existing notes directly into the manuscript structure:
```markdown
![[MOC - Blockchain Supply Chain]]
![[LIT - Nakamoto 2008 - Key Claims]]
![[IDEA - Friction Hypothesis]]
![[METHOD - Hybrid Verification Protocol]]
![[DATA - SME Survey Instrument v2]]
```
This is your zero-draft — a manuscript assembled entirely from existing notes before you write a single new sentence. In most cases, a well-maintained vault will give you 40–60% of your word count here.
Stage 3: Gap Analysis
Read through your embedded zero-draft and annotate every transition, missing argument, and logical leap with a `%%comment%%` in Obsidian's native comment syntax. These comments become your writing queue — a prioritized list of exactly what needs to be written, not a vague sense of "I need to work on the introduction."
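A zero-draft fragment annotated this way might look like the sketch below (the note titles echo the Stage 2 embed example and are purely illustrative):

```markdown
![[LIT - Nakamoto 2008 - Key Claims]]
%%WRITE: transition connecting the consensus-cost argument to the SME adoption literature%%
![[IDEA - Friction Hypothesis]]
%%WRITE: one paragraph justifying friction, not trust, as the operative variable%%
```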
Stage 4: The Daily 1,000-Word Protocol
Each writing session follows this exact six-step sequence (the same sequence printed on the Protocol Card in Part C):
1. Read the last 200 words you wrote.
2. Pick ONE `%%comment%%` gap from your writing queue.
3. Open the linked notes for that gap. No browser.
4. Write for 45 minutes without editing.
5. Insert `[CITE: AuthorYear]` placeholders instead of chasing references.
6. Update the word count on your Writing Dashboard.
This sequence is non-negotiable. Researchers who skip Step 1 lose context. Researchers who skip Step 3 open Google Scholar and lose an hour.
Stage 5: The Academic Writing Dashboard
Create a note titled `000 - Writing Dashboard`. Using Dataview, build a live table that tracks every active manuscript:
```dataview
TABLE
status as "Status",
wordcount as "Words",
target as "Target",
deadline as "Deadline",
coauthor as "Lead Author",
(date(deadline) - date(today)).days + " days" as "Countdown"
FROM "Manuscripts"
SORT deadline ASC
```
Each manuscript lives in a folder under `Manuscripts/` with a YAML frontmatter block:
```yaml
---
title: Hybrid Verification in SME Supply Chains
status: Drafting
wordcount: 4200
target: 8000
deadline: 2025-03-15
coauthor: Dr. Reyes
journal: Journal of Operations Management
---
```
Status moves through five stages: `Outlining → Drafting → Revising → Submitted → In Review`. Your dashboard makes the entire manuscript portfolio visible at a glance — no spreadsheet, no separate project manager.
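Because those stages live in frontmatter, the dashboard can also be sliced by stage. A minimal sketch (assuming the same field names as the YAML block above) that lists only manuscripts currently in `Drafting`:

```dataview
TABLE wordcount as "Words", target as "Target", deadline as "Deadline"
FROM "Manuscripts"
WHERE status = "Drafting"
SORT deadline ASC
```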
Stage 6: Export and Collaboration Handoff
When a section is ready for co-author review or journal submission, export via Pandoc: install the Obsidian Pandoc plugin, or run Pandoc directly from the terminal (command recipes are in Part B). Version control runs through the Git plugin (configured in Chapter 2), which commits your vault automatically every 30 minutes during writing sessions.
For co-authors who don't use Obsidian, create a Collaboration Handoff Note — a standalone document that explains your vault's logic without requiring them to navigate it:
```markdown
Your role: Revise Methods and Results sections
Files to review: DRAFT-HybridVerification.md (exported as Word, attached)
Key sources: See References folder — all PDFs named [AuthorYear-ShortTitle]
My argument structure: See MOC-BlockchainSupplyChain.md (PDF export attached)
Version control: All changes tracked in Git; message me before editing Section 3
Questions? Slack me — do not email
```
This note takes 10 minutes to write and eliminates the three-email clarification loop that derails every co-authored project.
---
Scenario: Dr. Amara Osei, a third-year PhD student in organizational behavior, needs to submit a conference paper on remote work and team cohesion in 21 days. She has 340 notes in her vault from 18 months of reading.
She opens a new `MOC - Remote Cohesion Paper` and lists six claims, linking each to existing literature notes from her EXTRACT processing (Chapter 3). She creates `DRAFT - RemoteCohesion-ASQ2025` and embeds 14 notes into the Introduction and Literature Review sections. Her zero-draft is 2,800 words — assembled in 90 minutes.
Gap analysis reveals she needs original prose for: the theoretical contribution paragraph, the methods justification, and the discussion of boundary conditions. That's three focused writing tasks, not a 7,000-word manuscript written from scratch.
Following the Daily 1,000-Word Protocol, she produces 1,100 words on Day 1, 950 on Day 2, and 1,200 on Day 3. By Day 5, the draft is complete. She exports to Word via Pandoc using the APA template, writes a Collaboration Handoff Note for her advisor, and commits the final version to Git.
The paper that would have taken six weeks took twelve days — because the vault did the heavy lifting.
---
Part A: Manuscript Project Template
Copy this into a new note under `Manuscripts/[ProjectName]/`:
```markdown
---
title: [Full Manuscript Title]
status: Outlining
wordcount: 0
target: [Target word count]
deadline: [YYYY-MM-DD]
coauthor: [Name or "Solo"]
journal: [Target journal or conference]
---
![[DRAFT - Abstract - [Title]]]
<!-- Claim: -->
![[MOC - [Topic]]]
%%WRITE: Hook, gap statement, contribution%%
![[LIT - [Key Paper 1]]]
![[LIT - [Key Paper 2]]]
%%WRITE: Synthesis paragraph connecting these streams%%
![[METHOD - [Protocol Name]]]
%%WRITE: Justification for methodological choice%%
![[DATA - [Dataset or Analysis Note]]]
%%WRITE: Narrative interpretation of key findings%%
![[IDEA - [Core Theoretical Claim]]]
%%WRITE: Boundary conditions, limitations, future directions%%
[CITE placeholders to resolve in Zotero before export]
```
---
Part B: Pandoc Export Command Recipes
Run these from your vault's root directory in terminal:
| Format | Command |
|---|---|
| APA 7th (Word) | `pandoc draft.md -o output.docx --citeproc --bibliography refs.bib --csl apa.csl` |
| IEEE (PDF via LaTeX) | `pandoc draft.md -o output.pdf --citeproc --bibliography refs.bib --csl ieee.csl --pdf-engine=xelatex` |
| Nature | `pandoc draft.md -o output.docx --citeproc --bibliography refs.bib --csl nature.csl` |
| Chicago Author-Date | `pandoc draft.md -o output.docx --citeproc --bibliography refs.bib --csl chicago-author-date.csl` |
| Custom LaTeX | `pandoc draft.md -o output.tex --template=mythesis.tex --bibliography refs.bib --natbib` |
Download `.csl` files from [zotero.org/styles](https://www.zotero.org/styles). Store your `refs.bib` in `Vault/References/refs.bib` and update it from Zotero using Better BibTeX's auto-export.
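If you export often, a Pandoc defaults file spares you from retyping those flags. A minimal sketch (the filename `apa-review.yaml` and the `References/refs.bib` path are assumptions matching the layout above):

```yaml
# apa-review.yaml (run with: pandoc draft.md -d apa-review -o output.docx)
from: markdown
citeproc: true
bibliography: References/refs.bib
csl: apa.csl
```

Pandoc's `--defaults`/`-d` flag loads these options before the command-line ones, so one short command reproduces the full APA recipe.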
---
Part C: Daily Writing Protocol Card (Print and tape to your monitor)
```
DAILY 1,000-WORD PROTOCOL
─────────────────────────────
□ 1. Read last 200 words written
□ 2. Pick ONE %%comment%% gap
□ 3. Open linked notes — no browser
□ 4. Write 45 min, no editing
□ 5. Insert [CITE: AuthorYear] placeholders
□ 6. Update Dashboard word count
─────────────────────────────
TARGET: 1,000 words | TIME: 60 min
```
---
You've built the vault, processed the papers, and mapped the constellations. Now comes the part most researchers leave entirely to chance: generating the ideas that become projects, proposals, and publications.
Most researchers wait for insight to arrive—a shower thought, a conference hallway conversation, a lucky juxtaposition of two papers read in the same week. The Serendipity Engine Protocol™ replaces passive waiting with a repeatable, scheduled process that manufactures those collisions deliberately. It has three interlocking components: the Random Collision technique, the Gap Registry, and the Research Question Stress Test. Run them together in a weekly 30-minute session and you'll generate more viable research directions in a month than most PhD students produce in a year.
---
#### Component 1: The Random Collision Technique
Install the Random Note core plugin in Obsidian (it ships with the app—just enable it in Settings → Core Plugins). Assign it a hotkey, something like `Ctrl+Shift+R`. Now you have a one-keystroke way to surface any note in your vault.
Here's the protocol: press the hotkey twice to surface two unrelated notes (call them Note A and Note B), set a 5-minute timer, and write against one of these collision prompts:
- What assumption does Note A make that Note B challenges?
- If the method from Note A were applied to the population/phenomenon in Note B, what would you find?
- What variable is central to Note A but completely absent from Note B's model?
- If both findings are true simultaneously, what third explanation would reconcile them?
When the timer ends, score the collision on three dimensions:
- Disciplinary distance: Are these notes from different subfields or methodological traditions? (0 = same paper's citations, 1 = same field different methods, 2 = cross-disciplinary)
- Tension quality: Is there genuine contradiction or just surface difference? (0 = no tension, 1 = partial overlap, 2 = direct conflict or unexplained gap)
- Generativity: Did the 5-minute write produce at least one sentence you'd want to develop? (0 = no, 1 = maybe, 2 = yes)
Threshold: 4+ points → move this collision to your Gap Registry. Below 4, discard and repeat.
Run three collision pairs per session. You'll get one or two worth keeping most weeks.
---
#### Component 2: The Gap Registry
Your Gap Registry is a dedicated folder in your vault: `04-Ideas/Gap-Registry/`. Every entry is a note created from a template (set this up in Templater using the skills from Chapter 3). The Gap Registry is not a wishlist; it's a structured database. Each entry must have:
- The gap statement itself (one or two sentences naming what is unknown or contested)
- A gap type tag (e.g., `#gap/theoretical`, `#gap/methodological`, `#gap/empirical`)
- Candidate methods for addressing it
- A feasibility tag (`#feasibility/high`, `#feasibility/medium`, `#feasibility/low`)
- Funding alignment (a specific program, not just an agency)
- A status tag (`#status/seed` → `#status/developing` → `#status/archived`)
- Links back to the colliding notes that produced it
Over time, your Gap Registry becomes searchable by method, feasibility, and funding body. When an RFP lands in your inbox, you run a tag search in Obsidian and surface every gap that matches in seconds—instead of trying to reverse-engineer a project idea from a deadline.
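That search can be a saved Dataview query rather than an ad-hoc one. A sketch using the tag conventions from the worked example in this chapter (`#gap/...`, `#feasibility/...`, `#status/...`):

```dataview
TABLE file.name, tags, file.mtime
FROM "04-Ideas/Gap-Registry"
WHERE contains(tags, "#feasibility/high")
  AND !contains(tags, "#status/archived")
SORT file.mtime DESC
```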
---
#### Component 3: The Research Question Stress Test
Before any Gap Registry entry gets serious development time, it runs the gauntlet. The Stress Test is seven questions, each scored 0–2. Maximum score: 14. Go threshold: 10+. Below 10, the idea either needs more development or should be archived.
| # | Question | 0 | 1 | 2 |
|---|----------|---|---|---|
| 1 | Is it novel? Does a search of your Literature Map (Chapter 4) confirm no published answer? | Already answered | Partially answered | Genuinely open |
| 2 | Is it feasible with my current resources? Data, equipment, IRB, time within funding cycle. | Requires resources I don't have and can't get | Requires one significant acquisition | Achievable now |
| 3 | Does it extend existing theory? Would a positive result require revising or qualifying a named framework in your field? | No theoretical implication | Adds a boundary condition | Challenges or extends core theory |
| 4 | Can I access the data? Not "could someone access it"—can you, within 6 months? | No realistic path | Possible with collaboration | Yes, clear path |
| 5 | Is there a funding match? Can you name a specific program, not just a general agency? | No match found | Possible match, not confirmed | Specific program identified |
| 6 | Does it advance my career trajectory? Does this build toward your target job market position or tenure case? | Tangential | Somewhat aligned | Directly aligned |
| 7 | Can you articulate it in one sentence? Write it now. If it takes three sentences, it's not ready. | Cannot do it | Requires two sentences | Clean single sentence |
Any question scoring 0 is a red flag—a single zero in questions 1, 2, or 4 is usually disqualifying regardless of total score.
---
Dr. Priya Nair is a third-year PhD student in computational social science studying online misinformation. Her vault has 340 processed notes. During a Tuesday ideation session, her Random Note hotkey surfaces two notes: one from a 2021 paper on emotional contagion in Twitter networks, and one from a 2019 paper on motivated reasoning in vaccine hesitancy communities.
She applies the Collision Prompts. The method question hits: "If the emotional contagion network model from Note A were applied to the vaccine hesitancy communities in Note B, what would you find?" Her 5-minute write produces this: "We don't actually know whether vaccine hesitancy spreads through emotional contagion dynamics or through identity-protective cognition—and the two mechanisms predict different network structures."
Collision score: disciplinary distance = 1 (same broad field, different mechanisms), tension quality = 2 (genuine mechanistic conflict), generativity = 2. Score: 5. This goes to the Gap Registry.
She fills the entry: Gap type `#gap/theoretical`, methods include "exponential random graph modeling + sentiment analysis", feasibility `#feasibility/high` (Twitter API access already approved), funding alignment: "NSF SBE — Science of Broadening Participation and Human Resource Development, also CISE — Information Integration and Informatics." Status: `#status/seed`.
Three weeks later she runs the Stress Test. Scores: novelty = 2 (confirmed via Literature Map search), feasibility = 2, theory extension = 2 (challenges both competing frameworks), data access = 2, funding match = 2, career alignment = 2, one-sentence test = 1 (still needs tightening). Total: 13/14. She starts developing a full proposal outline.
Six months later, that collision becomes her first first-author submission.
---
Date: _______________
Session Duration: 30 minutes
Vault Note Count: _______________
---
PHASE 1 — Random Collisions (15 minutes, 3 pairs)
Collision Pair 1
Note A: `[[ ]]`
Note B: `[[ ]]`
Collision prompt used: _______________________________________________
5-minute write output (key sentence): _______________________________________________
Score — Distance: ___ / Tension: ___ / Generativity: ___ / Total: ___ / 6
→ Gap Registry? YES / NO
Collision Pair 2
Note A: `[[ ]]`
Note B: `[[ ]]`
Collision prompt used: _______________________________________________
5-minute write output (key sentence): _______________________________________________
Score — Distance: ___ / Tension: ___ / Generativity: ___ / Total: ___ / 6
→ Gap Registry? YES / NO
Collision Pair 3
Note A: `[[ ]]`
Note B: `[[ ]]`
Collision prompt used: _______________________________________________
5-minute write output (key sentence): _______________________________________________
Score — Distance: ___ / Tension: ___ / Generativity: ___ / Total: ___ / 6
→ Gap Registry? YES / NO
---
PHASE 2 — Gap Registry Review (10 minutes)
Open your Gap Registry. Review entries with `#status/seed` added in the last 30 days.
Seeds to advance to `#status/developing` this week: _______________________________________________
Seeds to archive (no longer viable): _______________________________________________
Seeds to run through Stress Test today: _______________________________________________
---
PHASE 3 — Stress Test (5 minutes, one idea)
Idea being tested: _______________________________________________
One-sentence articulation: _______________________________________________
| Question | Score (0–2) | Notes |
|----------|-------------|-------|
| 1. Novel? | | |
| 2. Feasible with my resources? | | |
| 3. Extends theory? | | |
| 4. Data access within 6 months? | | |
| 5. Specific funding match? | | |
| 6. Career alignment? | | |
| 7. One-sentence articulation? | | |
Total: ___ / 14 → GO (10+) / DEVELOP / ARCHIVE
---
You've built the system. You've processed papers with EXTRACT, mapped constellations, linked contradictions, and assembled MOCs that would have taken you three weeks to write from scratch six months ago. The real threat now isn't ignorance—it's entropy. Every abandoned productivity system in your past died not from a bad launch, but from a failure to maintain.
---
A vault that grows without maintenance becomes a digital attic: full of things you vaguely remember owning but can't find when you need them. The Evergreen Vault Lifecycle™ is a three-layer maintenance architecture designed to keep your system compounding across career stages—not just across weeks.
The lifecycle operates at three tempos:
Layer 1 — Weekly Gardening (45 minutes, every Friday)
Layer 2 — Quarterly Pruning (2 hours, end of each semester/quarter)
Layer 3 — Career-Stage Migration (as needed, with a structured protocol)
Each layer addresses a different type of decay: link rot, conceptual staleness, and structural obsolescence.
---
#### Layer 1: The Weekly Gardening Session
Block 45 minutes every Friday afternoon: not Monday morning, when you're optimistic, but Friday, when the week is actually done. This session has four fixed components:
Step 1 — Orphan Note Triage (10 minutes)
Run your Orphan Note Dataview query (see Vault Health Dashboard below). Every note with zero incoming links is either underdeveloped, miscategorized, or genuinely isolated. For each orphan: ask whether it connects to an existing MOC, deserves its own atomic expansion, or should be archived. Don't delete—archive. Deleted knowledge has a way of being needed six months later.
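The Orphan Note query itself can be this simple a sketch (scope `FROM` to your atomic-notes folder so templates and archives don't pollute the results):

```dataview
LIST
FROM "Atomic Notes"
WHERE length(file.inlinks) = 0
SORT file.mtime ASC
```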
Step 2 — MOC Refresh (15 minutes)
Open your three most active MOCs—typically your current project MOC and your two most-trafficked concept MOCs. Scan notes added since your last gardening session. Add any new links that belong. Update the MOC's synthesis paragraph if a new paper has shifted your understanding of the concept cluster. This is where the Constellation Mapping System from Chapter 4 pays compounding dividends: you're not rebuilding the map, you're extending it.
Step 3 — Project Archive Pass (10 minutes)
Any project note for work that's been submitted, rejected, or shelved for more than 60 days gets moved to `40-Projects/Archive/`. Before archiving, add a single `## Post-Mortem` section: what the project was, what it produced, and which concept notes it generated that belong in your permanent knowledge base. This takes four minutes and saves you from re-reading a 40-page draft in two years to remember what you learned.
Step 4 — Vault Health Query Review (10 minutes)
Run your Vault Health Dashboard queries. Review the velocity metric (notes per week). If you're below 3 notes per week during an active reading period, your processing pipeline has a bottleneck—usually in the EXTRACT stage. If you're above 20, you're likely creating low-quality stubs instead of atomic notes. Both are signals, not judgments.
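The velocity metric comes from a query like this sketch, which lists notes created in the last seven days; the row count is your notes-per-week number:

```dataview
LIST
FROM "Atomic Notes"
WHERE file.ctime >= date(today) - dur(7 days)
SORT file.ctime DESC
```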
---
#### Layer 2: Quarterly Pruning
Every 12-14 weeks, spend two hours on structural review. Audit your tag taxonomy for drift (tags that have become redundant or too broad). Review your `00-Inbox` for anything older than 30 days—these are processing failures that need a decision, not more waiting. Update your `00-MOC-Index` to reflect new concept clusters that have emerged. This is also when you run a full graph view analysis using the Constellation Mapping approach from Chapter 4 to identify new structural holes that have opened as your vault has grown.
---
#### Layer 3: Career-Stage Migration
This is the layer most productivity systems never address, which is why a vault built in Year 2 of your PhD becomes useless by Year 3 of your postdoc. Each career transition requires a deliberate migration protocol, not a fresh start.
PhD → Postdoc Migration Checklist:
- Archive every dissertation-era project note to `40-Projects/Archive/`, each with a `## Post-Mortem` section
- Audit your concept notes for ideas that transfer to the new research area, and tag them for the new context
- Create founding MOCs for the methods and topics of the new position
- Retire tags specific to the old project line during your next Quarterly Pruning
Onboarding a Research Assistant:
Create a `00-Vault-Onboarding` note that explains your ATLAS structure, your tagging conventions, your naming protocol from Chapter 2, and which MOCs are currently active. Give them read-only access to your vault via a shared Git repository. Have them process their first three papers using your EXTRACT template before they touch any existing notes. This protects your link architecture from well-intentioned but structurally disruptive additions.
---
#### The Anti-Fragile Vault Principles
Your vault should survive Obsidian going bankrupt, your laptop dying, and a future version of yourself who decides to switch tools. Three non-negotiable rules:
Rule 1 — Plain Markdown Only. Never use Obsidian-specific formatting that doesn't render in a standard markdown editor. Avoid canvas files as primary knowledge storage. Your notes should be readable in VS Code, iA Writer, or a plain text editor without losing meaning.
Rule 2 — Git Backup, Always On. The Obsidian Git plugin should be configured to auto-commit every 30 minutes and push to a private GitHub repository daily. This gives you version history (you can recover a note you accidentally overwrote three weeks ago), a remote backup, and a migration path to any future tool that reads from a Git repository.
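One companion detail for that Git setup: a `.gitignore` that keeps volatile workspace state out of your history, so auto-commits capture notes rather than UI churn. A common sketch for Obsidian vaults (adjust to your plugin set):

```gitignore
# Workspace layout changes on every click; don't version it
.obsidian/workspace.json
.obsidian/workspace-mobile.json
# Local trash folder
.trash/
```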
Rule 3 — Structure Over Features. Every time you're tempted to add a new plugin that creates a proprietary data format, ask: "If this plugin disappears, do I lose data or just convenience?" Lose convenience freely. Never accept data lock-in.
---
Dr. Elena Marsh is a third-year postdoc in computational neuroscience. She built her vault during the final year of her PhD using the ATLAS structure and EXTRACT method. Eighteen months later, her vault contains 847 notes, 12 active MOCs, and 3,400 links.
Without maintenance, she'd have 200 orphan notes, 4 MOCs that haven't been updated since her PhD, and a `00-Inbox` with 60 unprocessed papers. Instead, her Friday Gardening Sessions have kept her orphan count below 30 at any given time. When she started her postdoc and pivoted from rodent electrophysiology to human fMRI, she ran the Career-Stage Migration protocol. She archived her rodent-specific project notes, created a `MOC-fMRI-Methods` from scratch, and discovered—through her existing `30-Concepts` notes on signal processing—that three of her PhD-era concept notes connected directly to fMRI preprocessing debates she was just encountering. Those connections became a paragraph in her first postdoc paper that her PI called "unusually sophisticated for someone new to the method." It wasn't sophistication. It was compounding.
---
Use this as your implementation contract. Fill in specific dates before you close this chapter.
---
Phase 1 — Foundation (Weeks 1–2)
Target completion date: _______________
| Milestone | Done? | Notes |
|---|---|---|
| ATLAS folder structure created | ☐ | |
| Core plugins installed and configured | ☐ | |
| EXTRACT template built in Templater | ☐ | |
| Literature note template created | ☐ | |
| First 10 papers processed with EXTRACT | ☐ | |
| Git backup configured and tested | ☐ | |
Papers I will process first (list 10 by author/year):
---
Phase 2 — Structure (Weeks 3–4)
Target completion date: _______________
| Milestone | Done? | Notes |
|---|---|---|
| First MOC created (topic: ___________) | ☐ | |
| Second MOC created (topic: __________) | ☐ | |
| Third MOC created (topic: ___________) | ☐ | |
| `00-MOC-Index` note built and linked | ☐ | |
| First Constellation Map generated | ☐ | |
| One structural hole identified and documented | ☐ | |
My three founding MOC topics (the three concept clusters most central to my current research):
---
Phase 3 — Integration (Weeks 5–8)
Target completion date: _______________
| Milestone | Done? | Notes |
|---|---|---|
| Active writing project linked to vault | ☐ | |
| Literature review section 60%+ assembled from existing notes | ☐ | |
| At least one cross-disciplinary connection surfaced | ☐ | |
| Contradiction Detector used on at least one concept cluster | ☐ | |
| 25+ papers processed total | ☐ | |
| Project MOC created for current writing project | ☐ | |
Current writing project I will integrate: _______________
Estimated % of literature review I can pre-assemble from existing notes: ___%
Target % after Phase 3: ___%
---
Phase 4 — Maintenance and Ideation (Weeks 9–12)
Target completion date: _______________
| Milestone | Done? | Notes |
|---|---|---|
| First Weekly Gardening Session completed | ☐ | |
| Vault Health Dashboard queries installed | ☐ | |
| Orphan note count below 20 | ☐ | |
| Friday Gardening block recurring in calendar | ☐ | |
| Quarterly Pruning date scheduled | ☐ | |
| First speculative/ideation note written | ☐ | |
---
#### Template 1: Literature Note (Atomic Paper Processing Card)
```markdown
---
*What is the single most important thing this paper argues or demonstrates?*
---
| Finding | Strength of Evidence | My Assessment |
|---------|---------------------|---------------|
| | | |
| | | |
| | | |
---
Each bullet becomes a candidate permanent note. One idea per bullet.
---
What does this paper argue against? What does it leave unresolved?
---
Finish these sentences to force linking:
---
Anything here that only I would write — hunches, disagreements, synthesis sparks:
---
"Exact quote, p.XX"
"Exact quote, p.XX"
---
#literature-note #methodology/ #domain/ #year/{{year}} #status/processed
```
---
#### Template 2: Permanent Note (Evergreen Concept Card)
```markdown
Created: {{date}} | Last Modified: {{date:YYYY-MM-DD}}
---
This note's title should be a complete, falsifiable assertion — not a topic label.
Example: "Interleaved practice outperforms blocked practice for long-term retention" not "Interleaved Practice"
---
Write 150-300 words in your own voice. No quotes. No hedging. What do YOU think this means?
---
#permanent-note #concept/ #domain/ #maturity/developing
```
---
#### Template 3: Literature Review Assembly Note (Pre-Writing Scaffold)
```markdown
Project: [[Project - ]] | Target Venue: | Deadline: | Status: #review/drafting
---
Core argument of this theme in 2 sentences:
| Paper | Core Contribution | Agrees/Disagrees with Theme? |
|-------|------------------|------------------------------|
| [[]] | | |
| [[]] | | |
Core argument of this theme in 2 sentences:
| Paper | Core Contribution | Agrees/Disagrees with Theme? |
|-------|------------------|------------------------------|
| [[]] | | |
---
Map the intellectual fault lines:
---
Pull from permanent notes. Each paragraph = one permanent note expanded:
Source note: [[Permanent Note - ]]
Source note: [[Permanent Note - ]]
---
#literature-review #project/ #status/in-progress
```
---
#### Template 4: Research Project Hub Note
````markdown
Created: {{date}} | Status: #project/active | Target:
---
What will this paper add that does not yet exist in the literature?
---
| Section | Status | Word Count | Notes |
|---------|--------|------------|-------|
| Abstract | | | |
| Introduction | | | |
| Literature Review | | | |
| Methods | | | |
| Results | | | |
| Discussion | | | |
| Conclusion | | | |
---
Record methodological and framing decisions so you can justify them in peer review:
| Date | Decision | Rationale | Alternative Considered |
|------|----------|-----------|----------------------|
| | | | |
---
```dataview
LIST
FROM #literature-note
WHERE contains(file.outlinks, this.file.link)
SORT file.mtime DESC
```
---
```dataview
LIST
FROM #permanent-note
WHERE contains(file.outlinks, this.file.link)
```
---
What will Reviewer 2 say? Write it now and address it:
---
#project #domain/ #status/active
````
---
#### Template 5: Weekly Research Review Note
````markdown
Date: {{date}} | Previous: [[Weekly Review - ]] | Next: [[Weekly Review - ]]
---
Rate 1-5 and note why:
---
```dataview
LIST
FROM #literature-note
WHERE file.ctime >= date({{date:YYYY-MM-DD}}) - dur(7 days)
SORT file.ctime DESC
```
```dataview
LIST
FROM #permanent-note
WHERE file.ctime >= date({{date:YYYY-MM-DD}}) - dur(7 days)
```
---
The most important section. What did you learn that you didn't expect?
---
Cross-paper or cross-domain links that emerged:
---
| Item | Why Stalled | Unblocking Action |
|------|-------------|-------------------|
| | | |
---
The productive uncertainty that will drive next week's reading:
---
````
#### Script 1: Cold Email to Request a Paper or Collaboration (Post-Reading a Key Paper)
Use case: You've just processed a paper in Obsidian, your Dataview query surfaced it as highly cited in your network, and you want to reach the author.
Subject line: Your [Year] paper on [specific finding] — a question from a PhD researcher
```
Dear Dr. [Last Name],
I've been working through the literature on [specific topic] for my dissertation
at [University], and your [Year] paper "[Exact Title]" has become one of the
most heavily linked notes in my research system — specifically your argument
that [one-sentence accurate summary of their core claim].
I have a genuine question I haven't been able to resolve from the paper alone:
[One specific, intelligent question that shows you read it carefully —
e.g., "In Study 2, you excluded participants who scored below X on the Y measure…"]
```
---
The definitive system for academic researchers to build a networked knowledge base in Obsidian that turns years of scattered papers, notes, and ideas into a living intellectual engine that accelerates literature reviews, surfaces novel connections, and produces publishable insights faster.
This product was designed for: PhD students (years 2-5) and early-career postdoctoral researchers in STEM and social sciences who are drowning in 500+ PDFs across folders, losing track of cross-paper insights, spending 3-4 weeks on every literature review, and feeling like their reading effort never compounds. They've tried Zotero or Mendeley for references and maybe dabbled with Notion or Roam, but nothing sticks because no tool was configured for the specific workflows of academic knowledge work. They want a single system that makes them feel intellectually dangerous—where every paper read makes every future paper easier to write.
Your transformation: From: Fragmented notes across 6+ apps, re-reading papers you've already annotated, literature reviews that start from scratch every time, and a constant anxiety that you're missing critical connections in your field → To: A fully operational Obsidian vault with 4 integrated workflows (reading, writing, reviewing, ideating) where every annotation is atomized, linked, and retrievable in seconds, literature reviews are 60-70% pre-assembled from your existing notes, and you surface cross-disciplinary connections that become the seeds of original research contributions.
Your literature review is already 70% written. You just can't find it yet.
500 PDFs. 3 years of highlights. Zero publishable drafts. There's a better way.
What if every paper you read made the next one faster to write—automatically?
You didn't enter a PhD program to spend Sundays rebuilding the same literature review from scratch. You did it because you have something worth saying. But somewhere between the 400th PDF and the third restructured Zotero library, the ideas stopped connecting and the deadlines started winning. The Academic Vault isn't another productivity framework dressed up in academic language. It's a complete, battle-tested Obsidian system built specifically for researchers who are drowning in knowledge they can't mobilize. Imagine opening your vault before a manuscript deadline and watching your argument assemble itself from three years of linked, searchable, atomized notes. Imagine your dissertation work compounding into your postdoc, your postdoc compounding into your career. That's not a fantasy. That's a properly built second brain—and it starts on day one.
The Academic Vault: Obsidian System for Researchers Who Need to Publish
AI-generated digital product