⚡ This product was generated with Kupkaike in under 4 minutes
Create Your Own Product → 76 chapters, 14k+ words. Ready to sell in minutes — not months.
A complete, field-tested system for PhD students and early-career researchers to build an Obsidian vault that transforms scattered papers and annotations into an interconnected knowledge base — so literature reviews take days, not weeks, and original ideas stop slipping away. Built exclusively for academic workflows, not productivity influencers.

No editing, no design skills, no copywriting — just a niche idea and Kupkaike did the rest.
Generated by Claude Opus 4.6. Real content, unedited.
You have hundreds of PDFs. You've read many of them — some twice. You have highlights in Zotero, margin notes in printed copies, half-finished thoughts in Google Docs, and a sticky note on your monitor that says "connect this to Foucault somehow." When your advisor asks about the literature on a topic you know you've covered, you spend an afternoon re-reading papers you've already annotated, reconstructing arguments you've already worked through. This isn't a time management problem. It's a knowledge infrastructure problem — and no one in your program taught you how to solve it.
Most PKM guides teach Zettelkasten philosophy designed for writers and productivity enthusiasts. This system was built from the ground up for one specific workflow: academic research from database search to peer-reviewed publication. Every decision — folder structure, note templates, linking conventions, Dataview queries — is calibrated to the artifacts researchers actually produce: literature notes with proper citation metadata, argument maps across competing theoretical frameworks, methodology comparison matrices, and draft-to-submission pipelines. The PRISM literature note template alone encodes a decade of best practices for extracting and connecting scholarly arguments in a format your future self can actually use.
The blueprint walks you through eight chapters covering the knowledge-leakage diagnostic, vault architecture, a 25-minute literature processing pipeline, atomic note-writing, idea incubation, literature review assembly, multi-project management, and long-term system maintenance. You also get three bonuses: a pre-built Academic Starter Vault with 12 ready-to-use templates and a sample 15-note network, a visual Zotero-to-Obsidian Integration Masterclass covering the full setup and the 8 most common errors, and a library of 25 copy-paste Dataview queries built for academic use cases. Researchers who implement this system report assembling literature review drafts in a single afternoon from notes they've already written — and surfacing research questions they wouldn't have seen any other way.
---
You already know more than enough to publish. The problem is that you can't find what you know when you need it — and that gap between what you've read and what you can actually use is quietly killing your productivity.
---
Most researchers assume their knowledge problem is a reading problem. They haven't read enough, haven't read carefully enough, haven't highlighted the right passages. So they read more, highlight more, and end up with 300 PDFs in Zotero and the same creeping dread every time a lit review deadline approaches.
The actual problem is leakage — the systematic loss of intellectual work at seven distinct points in your workflow. Every hour you spend reading a paper that doesn't make it into a usable, retrievable, connectable note is an hour you'll pay for again when you re-read the same paper in six months. The Knowledge Leakage Audit™ maps exactly where your system is hemorrhaging, so you can stop patching symptoms and fix the architecture.
The 7 Knowledge Leak Points
Work through each stage of a typical academic workflow and identify where material falls through the cracks:
1. Search — You find 40 papers in a database search. You save 20 "just in case." Twelve of those are never opened. Leak: unfocused saving creates a backlog that induces avoidance.
2. Save — Papers land in a folder called "To Read," a Zotero library with no tags, or your Downloads folder. There's no metadata about why you saved it or which argument you expected it to support. Leak: decontextualized saving means you can't triage intelligently later.
3. Read — You read with a highlighter (physical or digital) but no synthesis layer. You mark what's interesting, not what's useful for your specific argument. Leak: passive highlighting without processing produces no transferable intellectual output.
4. Annotate — Your annotations live in Zotero's PDF reader, in the margins of a printed paper, in a Google Doc, and in a voice memo you recorded while walking. They are not in the same place. Leak: fragmented annotation systems make retrieval probabilistic rather than reliable.
5. Connect — You have a vague sense that Paper A's argument about institutional trust relates to Paper C's findings on compliance behavior, but you've never written that connection down. Leak: unrecorded connections exist only in working memory, which has a half-life of days.
6. Retrieve — When you sit down to write, you search your Zotero library by keyword and hope. You re-read abstracts. You open PDFs you've already annotated. Leak: retrieval without a note system forces re-processing of already-processed material — this is your Re-Reading Tax.
7. Write — You open a blank document and try to reconstruct arguments from memory, re-reading as you go. Your notes don't translate into prose because they were never written as prose-ready units. Leak: writing from scratch instead of assembling from existing notes multiplies your time-to-draft by 3-5x.
---
Calculating Your Re-Reading Tax
The Re-Reading Tax is the most expensive leak in most researchers' systems, and it's almost entirely invisible because it feels like productive work.
Here's how to calculate yours:
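A minimal back-of-envelope version, with inputs you can refine once you start tracking your actual sessions:

```
Re-Reading Tax (hours/month) ≈
  (papers re-opened last month that you had already read or annotated)
  × (average minutes spent per re-read)
  ÷ 60

Example: 14 papers × 35 minutes ÷ 60 ≈ 8 hours/month
```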
For the average PhD student in years 3-4, this number lands between 6 and 14 hours per month. That's one to two full writing days, every month, spent recovering ground you've already covered.
---
Identifying Your Research Archetype
Before you build a vault, you need to know which vault structure matches how you actually work. There are three dominant research archetypes, and building the wrong architecture for your archetype is the single most common reason researchers abandon their Obsidian setup after two weeks.
The Surveyor reads broadly across a field to map the landscape. You're writing a systematic review, a dissertation introduction, or a grant background section. You need a system organized around themes and debates, not individual papers. Your vault's backbone will be concept notes that aggregate multiple sources, with literature notes as spokes.
The Deep-Diver works intensively within a narrow corpus — perhaps 40-80 papers that you know intimately. You're building a theoretical argument or tracing an intellectual genealogy. You need a system organized around authors, texts, and argument chains. Your vault's backbone will be author nodes and close-reading notes with dense interlinking.
The Multi-Project Juggler is running 2-4 projects simultaneously — a dissertation chapter, a co-authored paper, a grant proposal — each drawing on partially overlapping literatures. You need a system organized around project contexts with shared concept infrastructure underneath. Your vault's backbone will be a project layer sitting above a shared literature layer, with clear tagging to prevent cross-contamination.
You may be a hybrid, but one archetype dominates. Identify it now — it determines your folder structure, your note templates, and your linking strategy in every subsequent chapter.
---
Amara Chen is a third-year sociology PhD student studying housing policy and urban displacement. She has 247 papers in Zotero, organized into four folders: "Housing," "Policy," "Methods," and "Other." She has annotations in Zotero's PDF reader for about 60 papers, a Google Doc called "Reading Notes 2023" with summaries of 30 more, and a physical notebook with observations from a conference she attended in October.
When her advisor asked her to draft the theoretical framework section of her dissertation in January, Amara spent the first three days re-reading papers she'd already annotated — because she couldn't remember which papers contained which arguments, and her Zotero annotations weren't searchable by concept. She calculated her Re-Reading Tax at 11 hours per month.
Amara is a Surveyor. Her dissertation spans urban sociology, public policy, and critical geography — three literatures she needs to synthesize, not master individually. The right vault structure for her is concept-first: notes on debates like "right to the city," "displacement as accumulation," and "policy feedback loops" that pull together multiple sources, rather than one note per paper.
Her 90-day milestone: a complete theoretical framework section (approximately 4,000 words) assembled from interconnected concept notes in Obsidian, submitted to her advisor by April 15th.
After running the Knowledge Leakage Audit™, Amara identified her three highest-impact leak points: fragmented annotation (Leak Point 4), zero connection infrastructure (Leak Point 5), and writing from scratch (Leak Point 7). Those three points became her implementation priorities — in that order.
---
Rate each statement from 1 (never true) to 5 (always true). Be brutally honest — optimistic scoring only delays the fix.
Section A: Retrieval Speed (max 20 points)
Section B: Connection-Making (max 20 points)
Section C: Writing Readiness (max 20 points)
Section D: Idea Capture (max 20 points)
Section E: System Consistency (max 20 points)
---
Scoring Your Results
| Score | Diagnosis |
|-------|-----------|
| 80–100 | Optimized — you're here for refinement, not rescue |
| 60–79 | Functional but leaking — 2-3 targeted fixes will unlock significant gains |
| 40–59 | Fragmented — your system is costing you 8+ hours per month |
| 20–39 | Critical leakage — you are re-doing intellectual work constantly |
| Under 20 | No system — everything you've read is functionally inaccessible |
Your three lowest-scoring sections (identify them now):
You've already identified where your knowledge is leaking. Now it's time to build the container that stops the leak permanently — and unlike the scattered folder systems you've tried before, this one is designed specifically around how academic thinking actually works.
---
Most researchers who open Obsidian for the first time make the same mistake: they replicate their existing folder chaos inside a new app. ATLAS is different. It's built on a single governing principle — every note has exactly one home, decided by its function, not its topic. This eliminates the decision fatigue that kills PKM systems before they reach 50 notes.
ATLAS stands for five root folders: Archives, Thinking, Literature, Active Projects, Sources.
---
A — Archives
Completed project materials, old course notes, finished manuscripts, and anything you're keeping for reference but no longer actively developing. Archive is not a trash bin — it's a cold storage unit. Notes here are fully processed and linked. You'll visit them when a new project reaches back for prior work.
T — Thinking
Your intellectual engine room. This is where two note types live: your Daily Research Journal (DRJ) entries and your Permanent Notes (also called Evergreen Notes in some systems). The distinction matters enormously in academic work: DRJ entries are dated, messy, and disposable, a running capture of what you read, tried, and wondered today; Permanent Notes are atomic, polished claims written in your own words to stand on their own and be linked for years.
Never mix these two types. A DRJ entry that contains a genuinely good idea gets extracted into a Permanent Note — it doesn't become one by sitting there.
L — Literature
One note per source. Every paper, book chapter, report, or preprint you've engaged with gets a Literature Note here. These are generated automatically through your Zotero pipeline (detailed below) and follow a strict template. Format: `LIT-[citekey]`
A — Active Projects
One subfolder per active research project, dissertation chapter, grant application, or paper under review. Notes here are working documents: outlines, argument maps, draft sections, reviewer response letters. When a project is submitted or shelved, the entire subfolder moves to Archives intact.
S — Sources
Raw material that hasn't been processed yet: imported PDFs (if you store them in Obsidian), images of whiteboards, voice memo transcripts, screenshots of key figures. Think of Sources as your inbox — material enters here and gets processed into Literature or Thinking notes. Nothing stays in Sources permanently.
---
Install these four plugins in this order; the setup steps below assume the earlier ones are already in place.
1. Zotero Integration (by mgmeyers)
In Obsidian: Settings → Community Plugins → Browse → search "Zotero Integration" → Install → Enable. In the plugin settings, set your PDF attachment path to match your Zotero storage location. Set the note import folder to `L — Literature`. This plugin is the bridge.
2. Templater (by SilentVoid)
Install and enable. Set your templates folder to a subfolder called `_Templates` inside your vault root (not inside any ATLAS folder — it sits outside the five). Templater will inject dynamic content — citekeys, dates, author names — into your note stubs automatically; a minimal example template appears just after this list.
3. Dataview (by blacksmithgu)
Install and enable. No configuration needed at setup. You'll use Dataview queries later to generate dynamic literature review tables, track reading status across your L folder, and surface unlinked permanent notes. Leave default settings.
4. Citations (by hans)
Point this plugin to your Zotero Better BibTeX `.bib` export file. In Zotero: File → Export Library → Better BibTeX → check "Keep Updated" → save to your vault root as `references.bib`. This keeps your citation database live without manual exports.
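To make the Templater setup in step 2 concrete, here is a minimal sketch of a note stub template; the file name `_Templates/DRJ.md` and the section headings are illustrative choices, not anything the plugin requires:

```
# DRJ <% tp.date.now("YYYY-MM-DD") %>

## What I read today

## Ideas to extract into Permanent Notes

## Open questions
```

Saving this in `_Templates` and inserting it into a new note fills in the current date automatically; the same `<% %>` syntax handles any other dynamic field Templater exposes.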
---
Here's how a paper moves from Zotero into your vault as a clean, linked Literature Note:
Your Literature Note template should contain these fields at minimum:
```
---
citekey: {{citekey}}
authors: {{authors}}
year: {{year}}
journal: {{journal}}
status: unprocessed
tags: []
---
```
The `status` field (unprocessed → reading → processed) becomes a Dataview filter later. The "My Response" section is where your thinking enters — this is what separates a literature note from a glorified abstract.
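Below the frontmatter, a minimal body might look like the sketch here; the headings are suggestions rather than plugin requirements, and the bracketed lines are placeholders:

```
## Imported Annotations
[highlights and page-numbered notes pulled in by the Zotero Integration plugin]

## My Response
[2–4 sentences: how this paper changes, complicates, or confirms my argument]
```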
---
Generic tags like `#interesting` or `#toread` are useless at scale. Design your taxonomy across three axes:
Methodological tags: `#method/ethnography`, `#method/RCT`, `#method/discourse-analysis`, `#method/computational`
Theoretical framework tags: `#theory/institutionalism`, `#theory/feminist`, `#theory/STS`, `#theory/behaviorist` — use whatever frameworks anchor your field
Evidence-strength tags: `#evidence/strong`, `#evidence/suggestive`, `#evidence/contested`, `#evidence/anecdotal`
Apply all three axes to every Literature Note. A Dataview query can then pull every `#method/ethnography` + `#evidence/strong` paper in your vault in seconds — which is exactly what you need when a reviewer asks you to justify your methodological choices.
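A sketch of that query, assuming the tag names above and the frontmatter fields from the Literature Note template:

```dataview
TABLE authors, year
FROM #method/ethnography AND #evidence/strong
SORT year DESC
```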
---
Priya Nair is a third-year sociology PhD student studying housing precarity. Before ATLAS, she had 340 papers in Zotero, annotations in three different apps, and a folder on her desktop called "FINAL READING NOTES v3" containing 12 Word documents.
She sets up ATLAS in an afternoon. Her first pipeline test: she imports Desmond's Evicted (2016), a key theoretical text she'd annotated six months ago but couldn't remember in detail. Zotero Integration pulls 23 annotations with page numbers. Templater formats them into `LIT-Desmond2016`. She spends 20 minutes writing her "My Response" section and links it to two existing Permanent Notes: `PN-housing-as-welfare` and `PN-eviction-feedback-loops`.
Three weeks later, drafting her literature review chapter, she runs a Dataview query: `WHERE contains(tags, "#theory/institutionalism") AND status = "processed"`. Fourteen notes surface. She opens each "My Response" section, copies her own sentences into a draft document, and has a 600-word literature review skeleton in 90 minutes — built entirely from her own prior thinking, not re-reading.
---
Complete these 35 steps sequentially. Check each box only when fully verified, not just attempted.
Phase 1: Obsidian Installation (Steps 1–7)
Phase 2: Plugin Installation (Steps 8–18)
Phase 3: Template Creation (Steps 19–25)
You've done the Knowledge Leakage Audit and you know exactly where your ideas are disappearing. Now comes the fix — a reading protocol that transforms a PDF from a temporary experience into a permanent intellectual asset, in less time than it takes to find your old notes on the same paper.
---
Most researchers read in extraction mode: scanning for relevant quotes, highlighting sentences that feel important, and trusting future-you to remember why. The problem isn't laziness — it's that highlighting is cognitively cheap. It requires recognition, not analysis. You can highlight an entire paper and understand nothing about how it actually fits your argument.
The PRISM Reading Protocol™ forces analytical reading in a single pass by giving each minute of your attention a specific job. PRISM stands for Purpose, Results, Implications, Strengths/Weaknesses, and Mesh Points.
Each letter is a layer of thinking, not just a category of information. The sequence matters: you can't identify Mesh Points until you've assessed Strengths and Weaknesses, because a methodologically weak paper meshes differently with your argument than a robust one.
The 3-Layer Annotation Strategy
While reading, you're making three distinct types of annotations — and keeping them visually separate: Factual annotations capture what the paper says (findings, definitions, key numbers), Analytical annotations capture your evaluation of it (strengths, weaknesses, doubts), and Connection annotations capture how it links to other work and to your own argument.
If you're working in a PDF reader like Zotero's built-in viewer, Highlights.app, or PDF Expert, you can use color-coded highlights plus comments. If you're working with physical printouts, use three different colored pens. The medium matters less than the discipline of keeping the three layers distinct.
---
This is a timed structure. Use a timer. The constraint is the point.
Minutes 1–15: Read and Annotate
Read with the PRISM categories in mind, but don't stop to write the note yet. Annotate using the 3-layer strategy. For empirical papers, spend extra time on the methods section — this is where most researchers skim and later regret it. For theoretical papers, slow down on the conceptual definitions; authors often bury their actual argument in a subordinate clause on page 8. For review articles, focus on how the authors are organizing the field — their taxonomy is itself an argument worth capturing. For methodology papers, your factual extraction should include specific procedural details you might need to replicate or adapt.
Minutes 16–22: Write the PRISM Note in Obsidian
Open your literature note template (see Worksheet below) and complete each section. Don't write prose — write in compressed, analytical sentences. Each PRISM section should be 2–4 sentences maximum. The goal is density, not coverage. If you're summarizing rather than analyzing, you're doing it wrong.
End every PRISM note with the "So What?" test: one sentence that answers the question How does this paper change, complicate, or confirm my current thinking? This is a forcing function. If you can't write that sentence, you haven't processed the paper — you've just described it. Go back to the Implications and Mesh Points sections and push harder.
Minutes 23–25: Link to Existing Vault Notes
Open your Obsidian graph or use the Quick Switcher (Cmd/Ctrl + O) to find the notes this paper connects to. Add `[[wikilinks]]` in the Mesh Points section. If a connection doesn't have a note yet, create a stub — a note with just a title and a one-sentence placeholder. These stubs become your intellectual to-do list. Three minutes is enough to add 3–5 meaningful links. If you're finding zero connections, that's diagnostic information: either this paper is genuinely isolated (rare) or your vault is still too sparse (common in early weeks, which is why you start with your most-cited papers first).
---
The PRISM protocol adapts slightly depending on what you're reading:
Empirical Studies: Weight the S (Strengths/Weaknesses) section heavily. Note sample characteristics, measurement validity, and generalizability limits explicitly. Your future self writing a methods critique will thank you.
Theoretical Papers: The R (Results) section becomes "core claims" rather than findings. Capture the paper's central conceptual move — what distinction it draws, what it reframes, what it argues is wrong about existing theory.
Review Articles: Treat the paper's organizational logic as a finding. In your Mesh Points, note which papers the review authors cite that you haven't read yet — these are priority additions to your reading list.
Methodology Papers: Add a sixth field to your template: Applicability — one sentence on whether and how you could use this method in your own work.
---
Amara Chen is a third-year sociology PhD student studying organizational responses to climate disclosure mandates. She has 240 papers in Zotero, 60 of which have annotations — scattered across the Zotero comment field, a Google Doc titled "notes misc," and three physical notebooks.
She opens a 2022 paper on institutional isomorphism and ESG reporting. Old method: she'd highlight 15 passages, write "important — connects to DiMaggio?" in a comment, and close the PDF. Six months later, writing her literature review, she'd re-read the entire paper because she couldn't reconstruct her thinking from those highlights.
Using PRISM, her 25-minute session produces this note:
P: Investigates whether regulatory pressure or peer mimicry drives ESG disclosure adoption in Fortune 500 firms. Directly relevant to my Chapter 2 argument about coercive vs. mimetic isomorphism.
R: Finds that mimetic pressure (peer adoption rates) predicts disclosure quality more strongly than regulatory mandates, particularly in sectors with high analyst coverage. Effect size is moderate (β = .34) but consistent across three industry subsamples.
I: Complicates my assumption that regulatory frameworks are the primary driver. May need to revise my theoretical model to weight peer effects more heavily in high-visibility industries.
S: Strong: longitudinal design (2015–2021), large N (487 firms). Weak: operationalizes "disclosure quality" using a proprietary index that isn't publicly available — limits replicability and my ability to use their measure.
M: Connects to [[DiMaggio & Powell 1983]] (foundational isomorphism framework), [[Marquis & Lounsbury 2007]] (field-level variation), and my own emerging note [[Regulatory Salience vs. Peer Salience]] which I haven't fully developed yet.
So What?: This paper forces me to treat mimetic and coercive isomorphism as empirically separable rather than theoretically equivalent, which changes how I need to operationalize "institutional pressure" in my survey instrument.
That last sentence took her four minutes to write. It also just saved her three hours of confusion during her dissertation proposal defense.
---
Instructions: Select 5 papers from your current project — ideally a mix of paper types. For each paper, run the 25-minute timed session and complete the template below. After all five, complete the reflection section.
---
PRISM Literature Note Template (Copy into Obsidian for each paper)
```
---
title: [Author Last Name, Year — Short Title]
authors:
year:
journal/source:
paper-type: [empirical / theoretical / review / methodology]
date-processed:
tags: [topic-tags] [method-tags]
---
## P: Purpose
What question does this paper answer? Why does it matter to my project?
[Your 2–3 sentences here]
## R: Results
What did they actually find or argue?
[Your 2–3 sentences here]
## I: Implications
What does this mean for the field and for my research problem specifically?
[Your 2–3 sentences here]
## S: Strengths / Weaknesses
What makes this credible? What limits it?
Strengths:
Weaknesses:
## M: Mesh Points
Where does this connect in my vault?
## So What?
One sentence: How does this paper change, complicate, or confirm my current thinking?
[Your sentence here]
---
Processing time: ___ minutes
Annotation layers used: Factual / Analytical / Connection
```
---
Reflection Section (Complete after processing all 5 papers)
| Paper | Processing Time | # Mesh Points Found | Could you write a paragraph from this note alone? (Y/N) |
|---|---|---|---|
| Paper 1 | | | |
| Paper 2 | | | |
| Paper 3 | | | |
| Paper 4 | | | |
| Paper 5 | | | |
Comparison questions (write 2–3 sentences each):
You've done the hard work of reading carefully and capturing literature notes. Now those notes are sitting in your vault, organized by author and year, and you're starting to suspect they're just a slightly better version of your old PDF folder. That suspicion is correct — and this chapter fixes it.
The fundamental problem with source-organized notes isn't that they're wrong — it's that they mirror the structure of your library, not the structure of your thinking. When you write a literature review, you don't think "first I'll discuss Smith (2018), then Jones (2020)." You think "the debate about X has three positions." The Concept Extraction Engine™ is the process of rebuilding your notes around that second structure — the one that actually produces arguments.
Here's how it works across five stages:
Stage 1: Concept Scan (15 minutes per batch of 10 literature notes)
Open your existing PRISM literature notes and read only the "Key Arguments" and "My Response" fields. Don't re-read the papers. As you scan, write down every noun phrase that appears more than once across different notes. These recurring phrases — "institutional isomorphism," "measurement invariance," "narrative identity," "settler colonialism" — are your concept candidates. You're not inventing categories; you're noticing what your reading has already been circling.
Stage 2: Atomic Note Creation
For each concept candidate, create a new note. The anatomy of a valid atomic concept note has exactly five components: a title stated as a claim (not a topic), a body of 3-5 sentences in your own words with no quotes, backlinks to the sources that inform it (with the role each source plays), a status tag (`#seed`, `#developing`, or `#evergreen`), and a position statement with an agreement/disagreement marker recording where you stand and why.
Stage 3: The Three-Source Rule
A concept note earns `#evergreen` status only when you can articulate how three different sources relate to it — and "relate" means something specific. Source A might define the concept, Source B might challenge its scope, and Source C might apply it in a context that reveals a hidden assumption. That triangulation is what transforms a note from a summary into a scholarly position. Until you have three sources, the note stays `#developing`. This isn't arbitrary gatekeeping — it's the minimum threshold for a claim you could defend in print.
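To make the anatomy and the Three-Source Rule concrete, here is a minimal sketch of a finished atomic note, borrowing the flexibility-stigma example from the case study later in this chapter; the note titles and wording are illustrative:

```
# Flexibility stigma penalizes workers who use accommodations designed for them, regardless of productivity

Workers who take up flexible arrangements are rated as less committed even when
their output is unchanged. The penalty attaches to visible use of the policy, not
to performance, and it falls hardest on the groups the policy was meant to help.

Sources:
- [[LIT-Williams2010]]: defines the original concept
- [[LIT-Munsch2016]]: experimental evidence for the penalty
- [[LIT-Chung2022]]: shows the effect operates differently by gender in remote contexts

Status: #evergreen
My position: Partially agree. The gender-asymmetry framing explains cases the original model cannot.
```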
Stage 4: Splitting and Lumping
As your concept notes accumulate, two failure modes appear. The first is notes that are too broad — if your note titled "Power operates through discourse" contains four distinct mechanisms, each mechanism deserves its own note. A reliable test: if the note has more than two `[[source]]` backlinks doing completely different argumentative work, it probably needs to be split. The second failure mode is notes that are secretly identical — you might have "Foucauldian surveillance" and "panopticism as social control" as separate notes when they're the same idea approached from different angles. When two notes would always appear together in an argument, merge them and keep both titles as aliases.
Stage 5: Building Argument Chains
Once you have 15-20 atomic notes, you can start linking them sequentially to represent a line of reasoning. In Obsidian, create a "chain note" — a note whose entire purpose is to link atomic notes in logical order with one-sentence transitions. An argument chain might read: `[[Institutions seek legitimacy]] → [[Legitimacy requires conformity to field norms]] → [[Conformity suppresses technical efficiency]] → [[This explains isomorphism in nonprofits]]`. Each arrow is a link. Each linked note is a fully developed atomic note with sources. You've just built the skeleton of a literature review section in under an hour.
---
Imagine a fourth-year sociology PhD student, Priya, studying how remote work policies reproduce gender inequality. She has 47 PRISM literature notes covering organizational theory, feminist labor studies, and COVID-era workplace research. Her vault is organized by author. She can't see the argument she wants to make.
She runs Stage 1 and finds "flexibility stigma" appearing in notes on Williams (2010), Munsch (2016), and a 2021 policy brief. She creates an atomic note titled: "Flexibility stigma penalizes workers who use accommodations designed for them, regardless of productivity." She writes three sentences in her own words, links all three sources, marks it `#developing` because she only has two academic sources so far.
Two weeks later, while processing a new paper by Chung (2022) on remote work uptake, she finds a third source that reframes flexibility stigma as operating differently by gender in remote versus in-person contexts. She updates the note, adds the backlink, upgrades it to `#evergreen`, and adds an agreement marker: "I find Chung's gender-asymmetry argument more precise than Williams's original framing — it explains the cases Williams's model can't."
She then builds an argument chain: `[[Remote work increases schedule autonomy]] → [[Schedule autonomy triggers flexibility stigma for women but not men]] → [[Flexibility stigma suppresses women's promotion rates]] → [[Remote work policies reproduce inequality despite neutral framing]]`. That chain is the core argument of her third dissertation chapter, assembled from notes she already had.
---
Use this template after completing at least 10 PRISM literature notes. Work through each section in order.
---
PART 1: Concept Scan
List the 10 literature notes you're mining (use note titles):
Recurring concept candidates (noun phrases appearing in 2+ notes):
---
PART 2: Atomic Note Template (copy for each concept)
Concept Note Title (stated as a claim):
`_______________________________________________`
Body (3-5 sentences, your own words, no quotes):
`_______________________________________________`
`_______________________________________________`
`_______________________________________________`
Source Backlinks:
`[[ ]]` — role in this note: `_______________`
`[[ ]]` — role in this note: `_______________`
`[[ ]]` — role in this note: `_______________`
Status Tag: ☐ `#seed` ☐ `#developing` ☐ `#evergreen`
My Position (circle one): Agree / Disagree / Partially agree / Undecided
Agreement/Disagreement Marker:
`I find this [convincing/problematic] because: _______________`
---
PART 3: Split or Merge Check
For each atomic note, answer:
Notes to split: `_______________`
Notes to merge: `_______________`
---
PART 4: Argument Chain Builder
Arrange 4-6 atomic notes in logical sequence to form one argument:
`[[ ]]` → `[[ ]]` → `[[ ]]` → `[[ ]]`
Transition logic (one sentence between each link):
Step 1→2: `_______________`
Step 2→3: `_______________`
Step 3→4: `_______________`
---
PART 5: Graph View Annotation
After linking your atomic notes in Obsidian, open Graph View and take a screenshot. Print or paste it here. Annotate with:
---
---
You've built the vault, processed the papers, and extracted the concepts. Now comes the part most researchers never systematize: the moment when two ideas collide and something genuinely new emerges. That moment is not random — it's engineerable.
---
Most researchers generate research questions the same way they always have: by staring at a blank page after reading a pile of papers, hoping inspiration strikes. The Collision Protocol™ replaces that passive hope with a structured weekly practice that forces your vault to surface contradictions, gaps, and cross-disciplinary opportunities you would otherwise miss entirely.
There are three types of productive collisions your vault can generate:
Type 1: Contradiction Collisions
Source A makes a claim. Source B, which you've also read and noted, directly contradicts it. These contradictions are gold — they signal either a genuine empirical dispute, a methodological incompatibility, or a theoretical boundary condition that hasn't been named yet. When your concept notes are properly linked (as built in Chapter 4), these contradictions become visible as competing claims on the same concept node.
Type 2: Gap Collisions
This is the subtler type. Every field has foundational assumptions that everyone cites but nobody tests. A Gap Collision occurs when you notice that a cluster of papers all assume a relationship without ever directly measuring it, or when a concept appears repeatedly in your vault but has no empirical literature note attached to it — only theoretical ones. These are the "everyone knows this, but does anyone actually know this?" moments.
Type 3: Transfer Collisions
A method, framework, or analytical tool from one field lands on a problem in a completely different field. Behavioral economics concepts applied to science communication. Network analysis methods applied to historical correspondence. Ecological resilience theory applied to organizational change. Your vault, spanning multiple literatures if you've been reading broadly, is uniquely positioned to surface these.
---
Run this session once per week, ideally on the same day. Block it in your calendar as non-negotiable research time.
Step 1 — Random Note Activation (5 minutes)
Open Obsidian and trigger the Random Note feature three times. Don't choose the notes — let the vault choose. Open all three in split panes. Read only your own synthesis sentences, not the full note.
Step 2 — Local Graph Scan (5 minutes)
For each of the three random notes, open its Local Graph (right-click → Open Local Graph). Set depth to 2. Look for nodes that appear in two or more of the three graphs simultaneously. These shared nodes are your collision candidates.
Step 3 — Dataview Surfacing (10 minutes)
Run the following queries in a dedicated `Collision Session` note. These are designed to surface structurally weak areas of your vault — the places where your knowledge is thin, disconnected, or over-reliant on a single source.
To find orphan concept notes (concepts you extracted but never linked to literature):
```dataview
LIST
FROM "Concepts"
WHERE length(file.inlinks) = 0
SORT file.mtime ASC
```
To find under-connected literature notes (papers you processed but haven't linked to concepts):
```dataview
LIST
FROM "Literature"
WHERE length(file.outlinks) < 3
SORT file.ctime DESC
LIMIT 20
```
To find over-cited sources (potential blind spots where you're over-relying on one voice):
```dataview
TABLE length(file.inlinks) AS "Cited By"
FROM "Literature"
SORT length(file.inlinks) DESC
LIMIT 10
```
Step 4 — Question Generation (10 minutes)
Using what the random notes and Dataview results surfaced, write at least two candidate research questions in your `Questions Queue` MOC. Don't filter yet — volume matters at this stage.
---
Not every question that emerges from a Collision Session is ready for a grant proposal. Use this four-rung ladder to track and develop each question over time:
Rung 1 — Vague Curiosity: "There's something interesting about how X relates to Y."
Template: `I notice that [concept/phenomenon] seems connected to [concept/phenomenon] but I don't know how.`
Rung 2 — Specific Question: "Does X influence Y under condition Z?"
Template: `What is the relationship between [variable/concept A] and [variable/concept B] in the context of [population/setting/condition]?`
Rung 3 — Testable Hypothesis: "I predict that X increases Y when Z is present, because of mechanism M."
Template: `[Variable A] will [increase/decrease/moderate] [Variable B] when [condition Z] because [theoretical mechanism].`
Rung 4 — Positioned Contribution: "This study addresses gap G in literature L by testing H using method M, which advances theory T."
Template: `While prior work has established [what we know], no study has examined [specific gap]. This study tests [hypothesis] using [method], contributing to [theoretical framework] by [specific advancement].`
Tag each question in your Questions Queue with its current rung: `#rung1`, `#rung2`, etc. Your goal each week is to move at least one question up one rung.
---
Create a note titled `Questions Queue` in your `MOCs` folder. This is a living document — not a graveyard of abandoned ideas, but an active pipeline. Structure it as one entry per question: the question text, its current maturity rung, its three ratings, and a next development action.
Each entry gets three ratings on a 1–5 scale: Novelty, Feasibility, and Field Demand.
A question scoring 12 or above across all three dimensions moves to your `Active Projects` folder immediately.
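A sketch of what a single entry might look like, using the fields above; the question itself is a placeholder:

```
## Q: Do peer adoption rates or regulatory mandates drive disclosure quality in small firms?
Rung: #rung2
Novelty: 4 | Feasibility: 3 | Field Demand: 4 | Total: 11 / 15
Next action: find two papers measuring disclosure quality outside large-firm samples
```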
---
Yemi Adeyinka is a third-year sociology PhD student studying urban food access. Her vault has 140 literature notes built using the PRISM Protocol from Chapter 3. During a Tuesday Collision Session, her Random Note feature surfaces three notes: one on cognitive load theory (from a psychology paper), one on participatory mapping methods (from a geography paper), and one on food desert measurement critiques (her core literature).
Her Local Graph scan reveals that the concept node `spatial perception` appears in both the cognitive load note and the food desert note — but there's no literature note connecting them. Her Dataview orphan query surfaces `mental maps` as a concept she extracted but never linked to any empirical study.
She writes this Rung 1 question: "There's something interesting about how residents cognitively represent food access versus how researchers spatially measure it."
By the end of the session, she's moved it to Rung 2: "Do residents' mental maps of food access diverge systematically from GIS-measured food desert boundaries, and does that divergence predict actual shopping behavior?"
She scores it: Novelty 4, Feasibility 4, Field Demand 3. Total: 11. Close — she flags it for one more week of development. Two weeks later, after finding two papers on mental mapping in transportation research (a Transfer Collision), it scores 13 and moves to Active Projects. That question becomes her dissertation's third chapter.
---
Use this template for your first three weekly sessions. Copy it into a new note each week titled `Collision Session — [Date]`.
---
SESSION DATE: _______________
SESSION NUMBER (circle): 1 / 2 / 3
RANDOM NOTES PULLED:
SHARED NODES FROM LOCAL GRAPH SCAN:
Concept/node appearing in 2+ graphs: _______________
Why is this overlap interesting? _______________
DATAVIEW RESULTS:
COLLISION TYPE IDENTIFIED (circle): Contradiction / Gap / Transfer
RAW QUESTIONS GENERATED (write at least 2, don't filter):
TOP QUESTION THIS SESSION:
Question text: _______________
Current Maturity Rung (circle): 1 / 2 / 3 / 4
Novelty score (1–5): ___ | Feasibility score (1–5): ___ | Field Demand score (1–5): ___
Total: ___ / 15
If total ≥ 12: Move to Active Projects folder → ✓ / Not yet
Next development action for this question: _______________
---
PASTE THESE QUERIES INTO YOUR VAULT NOW:
Orphan concepts:
```dataview
LIST
FROM "Concepts"
WHERE length(file.inlinks) = 0
SORT file.mtime ASC
```
Under-connected literature:
```dataview
LIST
FROM "Literature"
WHERE length(file.outlinks) < 3
SORT file.ctime DESC
LIMIT 20
```
Over-cited sources:
```dataview
TABLE length(file.inlinks) AS "Cited By"
FROM "Literature"
SORT length(file.inlinks) DESC
LIMIT 10
```
---
You've done the hard work — you've run the PRISM protocol on dozens of papers, extracted atomic concept notes using the Concept Extraction Engine, and built a vault that actually reflects your thinking. Now comes the moment every PhD student dreads: opening a blank document and trying to write a literature review section from memory and scattered notes. This chapter eliminates that experience entirely.
The central insight here is uncomfortable but liberating: writing a literature review is not a writing problem — it's a curation and sequencing problem. If you've built your vault correctly through Chapters 2–4, the intellectual content of your literature review already exists. Your concept notes contain the claims. Your literature notes contain the evidence. Your contradiction notes capture the counterpoints. Your synthesis annotations hold your analytical contribution. The draft is already there — it just hasn't been assembled yet.
The Mosaic Drafting Method™ treats your vault like a physical mosaic: the tiles already exist, your job is to arrange them into a coherent image, then grout the seams.
Step 1: Build a Literature Review MOC (Map of Content)
Create a new note titled `LitReview_MOC_[YourSection]` — for example, `LitReview_MOC_InstitutionalTrust`. This becomes your staging area. Open it alongside Obsidian's graph view filtered to your relevant tags. Now drag-link every concept note, literature note, and atomic idea that belongs in this section. Don't filter aggressively yet — include anything adjacent. You're casting wide before you cut.
Group these linked notes into thematic clusters by adding second-level headers inside your MOC. These clusters become your section headings. If you have eight concept notes that all orbit around "measurement validity in survey instruments," that's a subsection. Let the density of your notes reveal the structure — your vault is telling you what matters.
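A sketch of what the staging MOC might look like at this stage, with hypothetical note names standing in for yours:

```
# LitReview_MOC_InstitutionalTrust

## Measurement validity in survey instruments
- [[Trust scales diverge across cultural contexts]]
- [[Smith2019_LitNote]]
- [[Jones2021_LitNote]]

## Competing definitions of institutional trust
- [[Institutional trust is not aggregated interpersonal trust]]
- [[Contradiction_TrustMeasurement]]
```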
Step 2: Sequence the Argument
Within each thematic cluster, order your notes into a logical argumentative sequence. Ask: what does the reader need to understand first? What claim sets up the next? This is where the intellectual work happens — not in writing sentences, but in deciding which tile goes next to which.
Step 3: Apply the 4-Move Paragraph Structure
Each paragraph in an academic literature review performs four moves. Structure your note sequence to execute all four: a Claim (usually a concept note stating the point of the paragraph), Evidence (literature notes that support it), a Counterpoint (a contradiction note or dissenting source), and a Synthesis (your own note stating what the tension means for your argument).
This structure prevents the most common lit review failure mode: the annotated bibliography disguised as a literature review, where you summarize source after source without ever making an argument.
Step 4: Assemble the Transclusion Draft
In a new note titled `DRAFT_[SectionName]_v1`, use Obsidian's embedded transclusion syntax to pull your notes directly into the draft:
```
![[InstitutionalTrust_ConceptNote]]
![[Smith2019_LitNote]]
![[Jones2021_LitNote]]
![[Contradiction_TrustMeasurement]]
![[Synthesis_TrustInLowIncomeContexts]]
```
Reading mode will render these as a continuous document. You're looking at a rough draft assembled in minutes from notes that took weeks to build. The prose is rough — it reads like stitched-together notes, because it is. That's intentional.
Step 5: Rewrite for Flow
Copy the rendered text into a new note or directly into Word. Now you're not writing — you're editing. You're smoothing transitions, adjusting register, and adding connective tissue between ideas that are already there. This is cognitively easier by an order of magnitude. Most researchers report cutting their drafting time by 50–70% at this stage, not because they're faster typists, but because they're no longer doing two jobs simultaneously: thinking and writing.
Step 6: Run the Citation Integrity Check
Before exporting, run this Dataview query in your vault to verify every claim in your draft links to a properly cited source note with full bibliographic metadata:
```dataview
TABLE file.name, citekey, author, year, title
FROM #literature-note
WHERE !citekey OR !author OR !year
SORT file.name ASC
```
Any note surfaced by this query has incomplete metadata. Fix it in Zotero first, re-sync via the Zotero Integration plugin, and verify the updated fields appear in Obsidian before you export. A draft with 40 citations and three missing citekeys will create hours of cleanup in Word — catch it here.
Step 7: Export via Pandoc + Better BibTeX
With your draft note open, export using Pandoc from the terminal:
```bash
# --citeproc tells pandoc to resolve [@citekey] references against the .bib file
pandoc DRAFT_SectionName_v1.md \
  --citeproc \
  --bibliography=/path/to/your/library.bib \
  --csl=/path/to/apa7.csl \
  -o SectionName_draft.docx
```
Your Better BibTeX plugin (configured in Chapter 2) has already generated the `.bib` file from your Zotero library. Every `[@smith2019]` citation key in your Obsidian notes becomes a formatted citation in the output document. For LaTeX users, swap `docx` for `pdf` and add `--pdf-engine=xelatex`. The citations are not placeholders — they're real, formatted, and linked to your full bibliography.
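For the LaTeX route described above, the same command with those two swaps applied (paths remain placeholders):

```bash
pandoc DRAFT_SectionName_v1.md \
  --citeproc \
  --bibliography=/path/to/your/library.bib \
  --csl=/path/to/apa7.csl \
  --pdf-engine=xelatex \
  -o SectionName_draft.pdf
```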
---
Scenario: Priya Mehta is a third-year PhD student in public health writing the theoretical framework section of her dissertation on community health worker retention in rural India. She has 67 literature notes in her vault tagged `#retention`, `#CHW`, and `#motivation-theory`.
She creates `LitReview_MOC_RetentionFrameworks` and links all 67 notes. Grouping by concept density, three clusters emerge naturally: intrinsic motivation theories, structural/systemic barriers, and community embeddedness factors. These become her three subsections — she didn't invent this structure, her vault revealed it.
For the intrinsic motivation subsection, she sequences eight concept notes and applies the 4-move structure: her concept note on self-determination theory as the Claim, four literature notes citing Deci, Ryan, and field adaptations as Evidence, a contradiction note flagging Bhattacharyya's critique of Western motivation models in LMIC contexts as Counterpoint, and her own synthesis note arguing for a hybrid framework as the Synthesis.
She assembles the transclusion draft in 22 minutes. Rewriting for flow takes 45 minutes. Total: 67 minutes for a 900-word subsection she estimated would take a full day. The Dataview citation check surfaces two notes missing publication years — she fixes them in Zotero before exporting. The Pandoc export produces a clean Word document with APA 7 citations intact. She sends it to her supervisor that afternoon.
---
Use this template to assemble one section of your current writing project. Time yourself honestly.
---
Section I: Project Identification
```
Writing project: _______________________________________________
Target section/chapter: ________________________________________
Estimated word count for this section: _________________________
Your estimate: how long would this take writing from scratch? ___
```
Section II: MOC Construction
```
MOC note title: LitReview_MOC_[________________]
Relevant tags to filter in your vault:
Tag 1: ________________________________________________________
Tag 2: ________________________________________________________
Tag 3: ________________________________________________________
Total notes linked to MOC: _____________________________________
```
Section III: Thematic Clustering
```
Cluster 1 (Subsection heading): ________________________________
Notes included: ____________________________________________
Note count: ___
Cluster 2 (Subsection heading): ________________________________
Notes included: ____________________________________________
Note count: ___
Cluster 3 (Subsection heading): ________________________________
Notes included: ____________________________________________
Note count: ___
```
Section IV: 4-Move Structure Map (per subsection)
```
Subsection: ___________________________________________________
Move 1 — Claim note: __________________________________________
Move 2 — Evidence notes (list citekeys): _______________________
Move 3 — Counterpoint note: ____________________________________
Move 4 — Synthesis note: _______________________________________
```
Section V: Assembly Log
```
Time to build MOC and cluster: _____________ minutes
Time to sequence argument: _____________ minutes
Time to assemble transclusion draft: _____________ minutes
Time to rewrite for flow: _____________ minutes
TOTAL actual time: _____________ minutes
Compared to your scratch estimate: saved _____________ minutes
Citation integrity issues found: _______________________________
Export format used: [ ] Word [ ] LaTeX [ ] PDF
Export successful: [ ] Yes [ ] No — issue: ____________________
```
---
---
You've built your vault, you're processing literature with PRISM, and your concept notes are multiplying — but now you're staring at four simultaneous deadlines: a dissertation chapter draft, a co-authored methods paper, a grant proposal, and next week's seminar prep. The anxiety isn't about not knowing the material. It's about not knowing which material belongs where and whether anything critical is slipping through the cracks.
The Research Portfolio Dashboard™ is a project management layer built inside your vault — not alongside it in Notion, not in a separate spreadsheet, not in a second Obsidian vault. Everything lives in one place because that's where the leverage is. Here's the five-step architecture.
Step 1: The One Vault Principle
Resist the urge to create a separate vault for your dissertation and another for your teaching prep. This feels organized but destroys the single most valuable feature of your Second Brain: cross-pollination. When your note on "methodological triangulation" lives in one vault, it can be linked from your dissertation chapter MOC, your co-authored methods paper MOC, and your grant proposal MOC simultaneously. Split the vault and you're copying notes, maintaining duplicates, and — inevitably — creating the same fragmentation problem you had before Obsidian.
Step 2: Project MOC Architecture
Every active project gets a dedicated Map of Content (MOC) note. This is the hub note you return to every time you touch that project. Create it in your `Projects/Active/` folder using this standard structure: YAML frontmatter with the project name, status, deadline, and project tag; a 2–4 sentence core question; a list of linked literature and concept notes; and a single next action for the week. (The full copy-paste template appears in this chapter's worksheet.)
The MOC is not where you write — it's where you navigate. Keep it lean and functional.
Step 3: Dataview-Powered Project Dashboards
If you installed the Dataview plugin in Chapter 2, this is where it pays off. Add the following queries to a master `Research Portfolio Dashboard` note in your vault root.
Notes per project (tagged by project):
```dataview
TABLE length(rows) AS "Note Count"
FROM #project/dissertation-ch3 OR #project/methods-paper
GROUP BY tags
```
Recently modified project notes:
```dataview
TABLE file.mtime AS "Last Modified", status AS "Status"
FROM "Projects/Active"
SORT file.mtime DESC
LIMIT 15
```
Orphan notes needing integration (no outgoing links):
```dataview
LIST
FROM "Literature Notes"
WHERE length(file.outlinks) = 0
SORT file.ctime ASC
```
Approaching deadlines:
```dataview
TABLE deadline AS "Due", project AS "Project"
FROM "Projects/Active"
WHERE deadline <= date(today) + dur(30 days)
SORT deadline ASC
```
Tag every literature note and concept note with the project(s) it serves — `#project/dissertation-ch3`, `#project/grant-nsf-2025` — and these queries populate automatically. Your dashboard becomes a live status board that updates every time you work in the vault.
Step 4: The Shared Concept Advantage
This is the compounding return that justifies the entire system. When you wrote your atomic note on "methodological triangulation" back in Chapter 4, you probably wrote it in the context of one paper. Now tag it with every project it serves. A single well-developed concept note — with your synthesis, the key citations, and your critical commentary — can be linked from three different Project MOCs without any duplication. When you update the concept note with a new source, all three projects benefit instantly. This is the difference between a filing system and a thinking system.
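In practice, that means the concept note's frontmatter carries one tag per project it serves; a sketch using the tags from the case study below:

```
---
tags:
  - concept
  - project/dissertation-ch3
  - project/fellowship-2025
  - project/seminar-week9
---
```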
Step 5: Academic Calendar Integration
Your vault doesn't exist in a vacuum. In each Project MOC, add a `deadline` property in YAML frontmatter (`deadline: 2025-03-15`). Then build a separate `Academic Calendar` note that aggregates conference submission windows, journal special issue deadlines, grant cycles, and chapter due dates. Cross-reference this with your Dataview deadline query. Review it every Friday.
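A minimal sketch of that calendar note; every entry here is a placeholder for your own dates:

```
# Academic Calendar

## Next 90 days
- 2025-03-15: Dissertation Chapter 3 full draft to advisor
- 2025-04-01: Conference submission window closes
- 2025-04-20: Grant proposal due

## Later this year
- 2025-06-30: Journal special issue extended abstract
```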
The Weekly Research Review Ritual
Every Friday, block 45 minutes. This is non-negotiable protected time. The session follows this exact sequence: open the Research Portfolio Dashboard and scan each Dataview table; run the orphan query and link or archive what it surfaces; open every Project MOC and update its status and next action; check the deadline query against your Academic Calendar; and finish by writing one concrete next step per project for the coming week.
This ritual prevents the vault from becoming a beautiful archive that never gets used. It keeps every project in active working memory without requiring you to hold it all in your head.
---
Priya Nair is a third-year sociology PhD student running four simultaneous workstreams: her dissertation Chapter 3 on immigrant labor networks, a co-authored paper on survey methodology with her advisor, a fellowship application due in six weeks, and weekly seminar reading for a graduate course she's TAing.
Before the Research Portfolio Dashboard, Priya kept a Notion board for her dissertation, a shared Google Doc for the co-authored paper, and a sticky note system for seminar prep. Her fellowship application existed only in her head and a half-finished Word document.
After implementing the system: Priya has four Project MOCs in her `Projects/Active/` folder. Her Dataview dashboard shows her that Chapter 3 has 34 linked literature notes, the methods paper has 12, the fellowship has 6 (flagged as underdeveloped), and seminar prep has 8. The orphan query surfaces 11 literature notes she processed two months ago that were never linked to anything — three of them turn out to be directly relevant to her fellowship application's theoretical framing.
Her concept note on "social capital in migrant communities" — originally written for Chapter 3 — is now tagged `#project/dissertation-ch3`, `#project/fellowship-2025`, and `#project/seminar-week9`. When she adds a new source to that note on a Tuesday afternoon, it enriches all three workstreams simultaneously. Her Friday review takes 40 minutes and leaves her with a clear, prioritized task list for each project. The fellowship application, previously a source of background dread, now has a visible note count, a deadline in the Dataview query, and a next action: "Draft theoretical framework section using social capital note + 3 linked sources."
---
Complete this in one focused 90-minute session. Do not skip steps.
Part 1: Project Inventory
List every active or upcoming project. Be exhaustive — include things you've been avoiding.
| Project Name | Type (dissertation/paper/grant/teaching) | Deadline | Current Status |
|---|---|---|---|
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
Part 2: Project MOC Creation
For each project above, create a MOC note in `Projects/Active/` using this template:
```markdown
---
project: [Project Name]
status: Active
deadline: YYYY-MM-DD
tags: [project/tag-name]
---
## Core Question
[What is this project trying to answer or produce? 2–4 sentences.]
## Linked Literature & Concept Notes
- [[ ]]
## Next Action
[One specific task for this project this week]
```
Part 3: Shared Concept Identification
List 5 concept notes already in your vault (or that should exist) that serve multiple projects. For each, list which projects benefit.
| Concept Note Title | Project 1 | Project 2 | Project 3 |
|---|---|---|---|
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
Part 4: Dashboard Installation
Create a `Research Portfolio Dashboard` note in your vault root, paste in the four Dataview queries from Step 3, and confirm each one returns results for the projects you listed in Part 1.
Part 5: First Weekly Research Review
Complete your first 45-minute Friday review session using the sequence in Step 5. After completing it, answer:
---
You've built the architecture, populated it with literature notes, extracted concepts, and started seeing connections you never saw before. The question now isn't whether the system works — it's whether you will keep it working when a grant deadline hits, a committee meeting derails your week, and your vault sits untouched for three weeks.
This chapter is about making sure that never becomes a crisis.
---
Every knowledge system obeys a law that no one warns you about: Vault Entropy. Left unattended, even a beautifully structured vault degrades. Orphan notes accumulate. Tags drift. Templates get modified inconsistently. Projects stall mid-capture. The connections that made your system feel alive start to feel like noise.
The Compound Knowledge System™ is a four-tier maintenance and growth protocol designed specifically for the rhythms of academic life — where you might read intensively for two weeks, then go dark during conference travel, then resurface during a writing sprint. It doesn't require daily discipline. It requires strategic intervals.
Tier 1: The Weekly 20-Minute Tidy (Every Sunday or Friday)
Set a recurring calendar block. No longer than 20 minutes. The goal is not deep work — it's preventing accumulation: empty anything sitting in Sources into Literature or Thinking notes, add links and project tags to the notes you created that week, and update the next action in each active Project MOC.
That's it. Twenty minutes, three actions.
Tier 2: The Monthly Vault Health Audit (First Weekend of Each Month)
This is where you catch systemic drift before it becomes structural damage. Run these four Dataview queries — paste them into a dedicated `Vault Health Dashboard` note:
```dataview
LIST
FROM ""
WHERE length(file.inlinks) = 0 AND length(file.outlinks) = 0
SORT file.mtime ASC
```
This surfaces orphan notes — notes with no connections in either direction. These are your biggest entropy risk.
```dataview
LIST
FROM #literature
WHERE !contains(file.tags, "#status/processed")
SORT file.ctime ASC
```
This finds literature notes you captured but never fully processed through the PRISM protocol from Chapter 3.
```dataview
LIST
FROM #concept
WHERE file.mtime < date(today) - dur(90 days)
SORT file.mtime ASC
```
Concept notes untouched for 90+ days are candidates for either promotion to evergreen status or archiving.
```dataview
TABLE file.tags AS "Tags", file.mtime AS "Last Modified"
FROM #project/active
SORT file.mtime ASC
```
Active projects that haven't been modified recently are stalling — surface them before they disappear entirely.
Spend 45-60 minutes acting on what these queries surface. Link three orphans. Process two stale literature notes. Promote one concept note.
Tier 3: The Maturation Pipeline (Ongoing, Tracked Monthly)
Your notes exist on a developmental spectrum. The Compound Knowledge System™ formalizes this as three stages (the same `maturity` values used in the concept note template), each requiring a different kind of attention:
Seedling: a fresh capture, ideally made the same day you read the source. It needs processing, not polish.
Budding: rewritten in your own words and linked to a few other notes, but not yet something you would build an argument on. It needs connections.
Evergreen: fully developed, densely linked, and citable as-is in a literature review or chapter draft. It needs only occasional review.
During your monthly audit, promote at least two notes up the pipeline. The goal by month six: 40+ evergreen notes. That's the threshold where literature reviews start assembling themselves.
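To see the pipeline at a glance during the audit, one option is a query like the sketch below. It assumes your concept notes carry the `maturity` field from the Atomic Concept Note template in the starter vault:
```dataview
TABLE maturity AS "Stage", file.mtime AS "Last Modified"
FROM #concept
WHERE maturity != "evergreen"
SORT file.mtime ASC
```
Everything it returns is a promotion candidate; the two oldest entries are usually the right ones to work on first.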
Tier 4: The Quarterly Deep Restructure (4x Per Year)
Once per quarter, spend 2-3 hours on structural maintenance: rename tags that have drifted, consolidate duplicate concept notes, archive completed projects, and review your ATLAS folder structure (from Chapter 2) to see if it still reflects how you actually think. Your intellectual architecture should evolve with your research.
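Tag drift is easiest to spot with a usage count. A query sketch like the one below lists every tag in the vault with the number of notes carrying it, so near-duplicates surface side by side, usually with suspiciously small counts:
```dataview
TABLE length(rows) AS "Notes"
FROM ""
FLATTEN file.tags AS tag
WHERE tag
GROUP BY tag
SORT length(rows) DESC
```
Rename or merge the low-count variants during the quarterly session rather than as you notice them; batching the renames is faster and less error-prone.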
---
Amara Osei is a third-year sociology PhD student studying urban informality in West African cities. By month two of using her vault, she had 87 literature notes and 34 concept notes. By month four, she hit a wall: her `_INBOX` had 23 unprocessed notes, she had four different tags for the same concept ("informal economy," "informality," "urban-informality," "#informal"), and three project files she hadn't opened in six weeks.
She ran the orphan query and found 19 notes with zero connections — mostly PDFs she'd imported from Zotero but never processed. She spent one Sunday afternoon linking them, and in doing so, noticed that three papers she'd considered tangentially related were actually making the same methodological argument from different disciplinary angles. That observation became the framing for her second dissertation chapter.
At month six, Amara had 312 notes, 61 evergreen notes, and had converted four of them into a conference paper abstract that was accepted on first submission. The compounding had started.
---
Your vault will outlive your current institutional affiliation. Plan for it explicitly.
Qualifying Exams: Six weeks before your exam date, run a full audit of every concept note tagged with your exam fields (a query sketch for this audit appears below). Identify gaps — concepts you've encountered but never synthesized. Use your evergreen notes as the basis for practice essays. Your vault is essentially a closed-book exam simulator.
Dissertation Defense: Create a `Dissertation/` project folder that maps your chapters to specific clusters of evergreen notes. Every claim in your dissertation should trace back to at least one evergreen note. This makes revision surgical rather than catastrophic.
Job Market: Your vault contains your intellectual identity. Before writing job letters, spend two hours mapping your evergreen notes into thematic clusters — these become your research agenda statements. Your teaching philosophy can draw directly from concept notes on pedagogy and methodology.
New Institution: Export your vault as a portable folder. When you arrive somewhere new, your intellectual infrastructure arrives with you. Archive your old institution's administrative notes; keep everything intellectual. Your vault is the one thing that doesn't get lost in the move.
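A minimal sketch of the exam-field audit mentioned under Qualifying Exams, assuming your notes share a field tag (the `#field/urban-sociology` tag is only an example) and the `maturity` field from the concept note template:
```dataview
LIST
FROM #concept and #field/urban-sociology
WHERE maturity != "evergreen"
SORT file.mtime ASC
```
Whatever it returns is your gap list: concepts you have touched but not yet synthesized to the level an exam answer requires.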
---
At 40+ evergreen notes, you have enough intellectual material to begin publishing in public. This isn't a distraction from your research — it is your research, distributed.
Blog posts: Take one evergreen note with 5+ outbound links. Expand each linked concept into a paragraph. Add an introduction and conclusion. You have a 1,200-word post.
Twitter/X threads: Take the central claim of an evergreen note. Each supporting link becomes one tweet. A 7-link evergreen note is a 9-tweet thread.
Conference presentations: A cluster of 8-10 evergreen notes on a shared theme is a 20-minute conference paper. The structure is already there — you're just adding transitions.
Book proposals: A dissertation chapter mapped to 15+ evergreen notes, with clear connections to 3-4 adjacent clusters, is the intellectual architecture of a book chapter. Do this for five chapters and you have a proposal.
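Finding candidates for any of these formats is itself a query. A minimal sketch, again assuming the `maturity` field on concept notes:
```dataview
TABLE length(file.outlinks) AS "Outbound Links"
FROM #concept
WHERE maturity = "evergreen" AND length(file.outlinks) >= 5
SORT length(file.outlinks) DESC
```
The notes at the top of the table are dense enough to expand into a post, a thread, or a talk with the least additional writing.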
The 1,000-note milestone changes something concrete: you stop searching for ideas and start selecting among them. Writing speed increases not because you type faster but because the thinking is already done. Researchers who hit this milestone consistently report that their biggest problem shifts from "I don't know what to write" to "I have too many directions to pursue." That is a good problem. You can get there in 6-8 months if you follow the Tier 1 and Tier 2 maintenance rhythms without exception and process every paper you read — even partially — into at least a seed note the same day you read it.
---
Section 1: Recurring Calendar Events
Set these up before you close this chapter. Literally open your calendar now.
| Session Type | Frequency | Duration | Day/Time I'll Use | Calendar Event Created? |
|---|---|---|---|---|
| Weekly Tidy | Weekly | 20 min | _________________ | ☐ |
| Monthly Audit | Monthly | 60 min | _________________ | ☐ |
| Quarterly Restructure | Quarterly | 2-3 hrs | _________________ | ☐ |
Section 2: Dataview Audit Setup
Create a note titled `Vault Health Dashboard` in your `_META` or `_SYSTEM` folder. Paste all four queries from Tier 2 above. Then answer:
Section 3: My 6-Month Vault Growth Roadmap
Fill in your current state and targets tied to your actual academic calendar:
| Month | Note-Count Target | Evergreen Target | Key Academic Event This Month |
|---|---|---|---|
| Month 1 (now) | Current: _______ | Current: _______ | _________________ |
| Month 2 | _______ | _______ | _________________ |
| Month 3 | _______ | _______ | _________________ |
| Month 4 | _______ | _______ | _________________ |
| Month 5 | _______ | _______ | _________________ |
| Month 6 | _______ | _______ | _________________ |
Section 4: Public-Facing Content Pipeline
Identify 3 evergreen notes you already have (or will have within 30 days) that are ready to be repurposed:
| Evergreen Note Title | Target Format | Target Platform/Venue | Target Date |
|---|---|---|---|
| _________________ | Blog post / Thread / Talk / Proposal | _________________ | _________________ |
| _________________ | Blog post / Thread / Talk / Proposal | _________________ | _________________ |
| _________________ | Blog post / Thread / Talk / Proposal | _________________ | _________________ |
---
A pre-built, download-and-go vault so you never stare at a blank screen again
---
#### Template 1: PRISM Literature Note
For every paper you read — structured to capture what matters for research, not just what the paper says
```markdown
---
title: "{{title}}"
authors: [{{authors}}]
year: {{year}}
citekey: {{citekey}}
journal: "{{journal}}"
doi: "{{doi}}"
tags: [literature-note, {{discipline}}, {{status}}]
status: "unread | skimming | read | processed | cited"
priority: "high | medium | low"
date-added: {{date}}
date-processed:
project: "[[{{project}}]]"
theoretical-framework:
methodology:
---
What is the central claim or argument of this paper? (1–2 sentences max)
Why does this paper matter to MY research specifically? What gap does it address?
The 3–5 ideas I want to remember and use. Write in my own words.
What did this paper claim that I didn't expect, disagree with, or that contradicts another source?
My original thoughts, questions, and reactions. What does this make me want to investigate next?
---
"..." (p. )
"..." (p. )
How did they do it? Sample size, methods, limitations I should cite or critique?
Link to concept notes, other papers, research questions, and project MOCs
```
---
#### Template 2: Atomic Concept Note
For theoretical constructs, recurring terms, and ideas that appear across multiple papers — the building blocks of your intellectual network
```markdown
---
title: "{{concept-name}}"
aliases: [{{alternative-terms}}]
tags: [concept-note, {{discipline}}, {{theoretical-framework}}]
date-created: {{date}}
date-updated:
maturity: "seedling | budding | evergreen"
---
My working definition in my own words — not copy-pasted from a source
Who coined this? How has the definition evolved? Where is it contested?
In the context of my specific research, what does this concept help me explain or analyze?
What does this concept get confused with? How is it different from [[related-concept]]?
| This Concept | [[Related Concept]] |
|---|---|
| | |
| | |
```dataview
LIST
FROM [[]]
WHERE contains(theoretical-framework, this.file.name)
SORT year ASC
```
What do I actually think about this? Where do I agree/disagree with the literature?
```
---
#### Template 3: Project MOC (Map of Content)
The command center for each research project — links every relevant note, tracks progress, and assembles your literature review scaffolding
```markdown
---
title: "MOC — {{project-name}}"
tags: [MOC, project, {{status}}]
status: "planning | active | writing | submitted | published"
deadline: {{deadline}}
target-journal: "{{journal}}"
word-count-target:
date-created: {{date}}
collaborators: []
---
One paragraph: What is this paper/chapter/dissertation arguing? What is the intervention?
All papers directly relevant to this project
```dataview
TABLE authors, year, status
FROM "ATLAS/Literature"
WHERE contains(project, this.file.name)
SORT year ASC
```
Theoretical concepts this project depends on
[[ArgChain — {{project-name}}]]
[[Method — {{methodology}}]]
---
Argument this section makes:
Key sources: [[]], [[]], [[]]
Gaps I'm addressing:
Argument this section makes:
Key sources: [[]], [[]], [[]]
Gaps I'm addressing:
Argument this section makes:
Key sources: [[]], [[]], [[]]
Gaps I'm addressing:
---
| Section | Status | Word Count | Notes |
|---|---|---|---|
| Introduction | | | |
| Lit Review | | | |
| Methods | | | |
| Results/Analysis | | | |
| Discussion | | | |
| Conclusion | | | |
Ideas that emerged during this project that don't fit here but shouldn't be lost
```
---
#### Template 4: Methodology Comparison Note
For systematically comparing how different papers approach the same research problem — essential for methods sections and for identifying gaps
```markdown
---
title: "MethodComp — {{topic}}"
tags: [methodology, comparison, {{discipline}}]
date-created: {{date}}
project: "[[MOC — ]]"
---
The specific methodological question or problem I'm mapping across the literature
| Paper | Method | Sample/Data | Key Strength | Key Limitation | Relevance to My Work |
|---|---|---|---|---|---|
| [[]] | | | | | |
| [[]] | | | | | |
| [[]] | | | | | |
| [[]] | | | | | |
| [[]] | | | | | |
What methods does this field default to and why?
Where do scholars disagree about the right approach?
What methodological approach has NOT been tried that could yield new insights?
Given the above, what am I doing and how do I justify it against this landscape?
```dataview
LIST
FROM "ATLAS/Literature"
WHERE contains(tags, "methodology") AND contains(project, [[]])
SORT year ASC
```
```
---
#### Template 5: Research Question Incubator
Where half-formed hunches become rigorous research questions — the most important template most researchers never build
```markdown
---
title: "RQ — {{question-shorthand}}"
tags: [research-question, {{status}}, {{discipline}}]
status: "raw-hunch | developing | viable | active | published | abandoned"
date-created: {{date}}
date-last-reviewed:
sparked-by: "[[]]"
project:
---
Write the idea exactly as it occurred to you — messy, incomplete, that's fine
After reflection: What exactly am I asking? Is it empirical, theoretical, or normative?
So what? Who cares? What would change in the field if this were answered?
Brief survey of existing answers — and why they're incomplete
Data, methods, theoretical framework, access, time
Honest evaluation: Can I actually do this? In what timeframe?
| Factor | Assessment |
|---|---|
| Data availability | |
| Methodological expertise | |
| Time required | |
| Novelty/contribution | |
| Fit with my dissertation/agenda | |
What other questions does this generate or depend on?
What did my advisor/peers say when I mentioned this?
Why I'm pursuing, pausing, or abandoning this question
```
---
#### Script 1: The "Cold Vault" First-Day Setup Email to Yourself
---
The definitive system for academic researchers to build a networked knowledge base in Obsidian that turns years of scattered papers, notes, and ideas into a living intellectual engine that accelerates literature reviews, surfaces novel connections, and produces publishable insights faster.
This product was designed for: PhD students (years 2-5) and early-career researchers (postdocs, assistant professors) in humanities, social sciences, or STEM fields who are drowning in 200+ saved PDFs they've half-read, scattered annotations across Zotero/Mendeley/Google Docs/physical notebooks, and feel constant anxiety that they're forgetting critical arguments or missing connections between papers. They've heard about Obsidian and maybe even installed it, but stare at an empty vault with no idea how to structure it for academic work specifically. Their desired outcome: a trusted, searchable, interconnected system where every paper they read compounds into deeper understanding and where literature review sections practically write themselves.
Your transformation: FROM: Spending 3-4 hours re-reading papers you've already annotated because you can't find or remember your notes, feeling paralyzed during literature reviews, and losing original ideas that emerge between readings → TO: A fully operational Obsidian vault with 100+ interconnected literature notes where any concept can be traced across sources in under 60 seconds, literature review drafts assembled from existing atomic notes in a single afternoon, and a personal idea incubator that surfaces novel research questions you would never have seen otherwise.
Your literature review shouldn't take 6 weeks. Here's the system that makes it take 6 days.
Primary hook: Every PhD student loses brilliant ideas to scattered notes and broken workflows. This vault makes sure you never lose another one.
Built by researchers, for researchers — not recycled from a productivity influencer's morning routine.
You know that sinking feeling — three browser tabs of half-read papers, a Zotero library that's basically a graveyard, and a blank document where your literature review should be. You're not disorganized. You're using tools built for the wrong person. The Academic Obsidian Blueprint was designed for the specific, relentless demands of research life: tracking arguments across dozens of papers, spotting theoretical gaps, managing a dissertation while a conference deadline looms. When your knowledge base actually reflects how academic thinking works — connected, layered, cumulative — literature reviews stop feeling like excavation and start feeling like conversation. Your ideas stop disappearing. Your writing accelerates. And for the first time, your notes work as hard as you do.
The Academic Obsidian Blueprint: A Knowledge System for Researchers