This product was generated with Kupkaike in under 4 minutes

Create Your Own Product →
Real Product · Real Output · Zero Editing

The Academic Obsidian Blueprint: A Knowledge System for Researchers
Academic Productivity / Knowledge Management

76 chapters, 14k+ words. Ready to sell in minutes — not months.

A complete, field-tested system for PhD students and early-career researchers to build an Obsidian vault that transforms scattered papers and annotations into an interconnected knowledge base — so literature reviews take days, not weeks, and original ideas stop slipping away. Built exclusively for academic workflows, not productivity influencers.

[AI-generated cover: "The Academic Obsidian Blueprint: A Knowledge System for Researchers"]

76 chapters · 14k words · 5 Pinterest pins
  • A complete vault architecture designed for academic research workflows — not repurposed from productivity YouTube
  • The 25-minute Literature Processing Pipeline: a repeatable method to turn any PDF into a permanent, linked knowledge asset
  • 12 field-tested templates including the PRISM literature note, argument chain, methodology comparison matrix, and draft-to-submission MOC
  • Pre-built Academic Starter Vault with ATLAS folder structure, configured Dataview queries, and 15 interconnected sample notes demonstrating the full system
  • Full Zotero-to-Obsidian Integration Masterclass: Better BibTeX setup, plugin configuration, citation formatting, and troubleshooting the 8 most common import errors
What Kupkaike Generated

Everything Below Was AI-Generated

No editing, no design skills, no copywriting — just a niche idea and Kupkaike did the rest.

📖 Full Ebook: 76 chapters, 14k words
🎨 Cover Image: AI-generated, print-ready
📌 Pinterest Pins: 5 pins, 1200×1800
💬 Sales Copy: hooks, bullets, email
Estimated Selling Price
$37 – $118
on Gumroad, Etsy, or your own store
Generated for ~$4 in cupcakes. ROI: sell 1 copy and you're profitable.
Value Comparison

What This Product Replaces

A freelance writer: $1,500/project
A graphic designer: $800/project
A marketing strategist: $2,000/month
A copywriter: $500/project

Total value replaced: $4,800+
Generated with Kupkaike for ~$4
The Ebook

76 Chapters of Content

Generated by Claude Opus 4.6. Real content, unedited.

The Academic Obsidian Blueprint: A Knowledge System for Researchers

The Problem No One Talks About in Graduate School

You have hundreds of PDFs. You've read many of them — some twice. You have highlights in Zotero, margin notes in printed copies, half-finished thoughts in Google Docs, and a sticky note on your monitor that says "connect this to Foucault somehow." When your advisor asks about the literature on a topic you know you've covered, you spend an afternoon re-reading papers you've already annotated, reconstructing arguments you've already worked through. This isn't a time management problem. It's a knowledge infrastructure problem — and no one in your program taught you how to solve it.

What This Blueprint Does Differently

Most PKM guides teach Zettelkasten philosophy designed for writers and productivity enthusiasts. This system was built from the ground up for one specific workflow: academic research from database search to peer-reviewed publication. Every decision — folder structure, note templates, linking conventions, Dataview queries — is calibrated to the artifacts researchers actually produce: literature notes with proper citation metadata, argument maps across competing theoretical frameworks, methodology comparison matrices, and draft-to-submission pipelines. The PRISM literature note template alone encodes a decade of best practices for extracting and connecting scholarly arguments in a format your future self can actually use.

What's Included and What Changes

The blueprint walks you through eight chapters covering vault architecture, a 25-minute literature processing pipeline, atomic note-writing, idea incubation, literature review assembly, multi-project management, and long-term system maintenance. You also get three bonuses: a pre-built Academic Starter Vault with 12 ready-to-use templates and a sample 15-note network, a visual Zotero-to-Obsidian Integration Masterclass covering the full setup and the 8 most common errors, and a library of 25 copy-paste Dataview queries built for academic use cases. Researchers who implement this system report assembling literature review drafts in a single afternoon from notes they've already written — and surfacing research questions they wouldn't have seen any other way.

---

Table of Contents

1. The Researcher's Knowledge Crisis: Why Your Current System Is Costing You Publications
2. Vault Architecture: Building Your Research Operating System from Day One
3. The Literature Processing Pipeline: From PDF to Permanent Knowledge in 25 Minutes
4. Atomic Notes and the Art of Thinking in Concepts, Not Papers
5. Research Question Incubation: Using Your Vault to Generate Original Ideas
6. The Literature Review Assembly Line: Writing from Your Vault, Not from Scratch
7. Multi-Project Management: Running Parallel Research Streams Without Losing Your Mind
8. Vault Maintenance and Long-Game Compounding: Your System 6 Months from Now

---

Chapter 1: The Researcher's Knowledge Crisis: Why Your Current System Is Costing You Publications

You already know more than enough to publish. The problem is that you can't find what you know when you need it — and that gap between what you've read and what you can actually use is quietly killing your productivity.

---

The Knowledge Leakage Audit™

Most researchers assume their knowledge problem is a reading problem. They haven't read enough, haven't read carefully enough, haven't highlighted the right passages. So they read more, highlight more, and end up with 300 PDFs in Zotero and the same creeping dread every time a lit review deadline approaches.

The actual problem is leakage — the systematic loss of intellectual work at seven distinct points in your workflow. Every hour you spend reading a paper that doesn't make it into a usable, retrievable, connectable note is an hour you'll pay for again when you re-read the same paper in six months. The Knowledge Leakage Audit™ maps exactly where your system is hemorrhaging, so you can stop patching symptoms and fix the architecture.

The 7 Knowledge Leak Points

Work through each stage of a typical academic workflow and identify where material falls through the cracks:

1. Search — You find 40 papers in a database search. You save 20 "just in case." Twelve of those are never opened. Leak: unfocused saving creates a backlog that induces avoidance.

2. Save — Papers land in a folder called "To Read," a Zotero library with no tags, or your Downloads folder. There's no metadata about why you saved it or which argument you expected it to support. Leak: decontextualized saving means you can't triage intelligently later.

3. Read — You read with a highlighter (physical or digital) but no synthesis layer. You mark what's interesting, not what's useful for your specific argument. Leak: passive highlighting without processing produces no transferable intellectual output.

4. Annotate — Your annotations live in Zotero's PDF reader, in the margins of a printed paper, in a Google Doc, and in a voice memo you recorded while walking. They are not in the same place. Leak: fragmented annotation systems make retrieval probabilistic rather than reliable.

5. Connect — You have a vague sense that Paper A's argument about institutional trust relates to Paper C's findings on compliance behavior, but you've never written that connection down. Leak: unrecorded connections exist only in working memory, which has a half-life of days.

6. Retrieve — When you sit down to write, you search your Zotero library by keyword and hope. You re-read abstracts. You open PDFs you've already annotated. Leak: retrieval without a note system forces re-processing of already-processed material — this is your Re-Reading Tax.

7. Write — You open a blank document and try to reconstruct arguments from memory, re-reading as you go. Your notes don't translate into prose because they were never written as prose-ready units. Leak: writing from scratch instead of assembling from existing notes multiplies your time-to-draft by 3-5x.

---

Calculating Your Re-Reading Tax

The Re-Reading Tax is the most expensive leak in most researchers' systems, and it's almost entirely invisible because it feels like productive work.

Here's how to calculate yours:

Estimate how many papers you actively cite or plan to cite across your current projects: \_\_\_\_
Of those, how many have you read more than once in the last 12 months? \_\_\_\_
Average time per re-read (including finding your old notes): \_\_\_\_ minutes
Monthly Re-Reading Tax = (Re-reads per month) × (minutes per re-read) ÷ 60 = \_\_\_\_ hours

For the average PhD student in years 3-4, this number lands between 6 and 14 hours per month. That's one to two full writing days, every month, spent recovering ground you've already covered.
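If it helps to see the formula as code, here is a minimal sketch; the function name and the sample numbers are illustrative, not figures from the worksheet:

```python
def monthly_rereading_tax(rereads_per_month: float, minutes_per_reread: float) -> float:
    """Monthly Re-Reading Tax in hours: re-reads per month × minutes per re-read ÷ 60."""
    return rereads_per_month * minutes_per_reread / 60

# e.g. 18 re-reads a month at 35 minutes each
print(monthly_rereading_tax(18, 35))  # → 10.5 hours
```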

---

Identifying Your Research Archetype

Before you build a vault, you need to know which vault structure matches how you actually work. There are three dominant research archetypes, and building the wrong architecture for your archetype is the single most common reason researchers abandon their Obsidian setup after two weeks.

The Surveyor reads broadly across a field to map the landscape. You're writing a systematic review, a dissertation introduction, or a grant background section. You need a system organized around themes and debates, not individual papers. Your vault's backbone will be concept notes that aggregate multiple sources, with literature notes as spokes.

The Deep-Diver works intensively within a narrow corpus — perhaps 40-80 papers that you know intimately. You're building a theoretical argument or tracing an intellectual genealogy. You need a system organized around authors, texts, and argument chains. Your vault's backbone will be author nodes and close-reading notes with dense interlinking.

The Multi-Project Juggler is running 2-4 projects simultaneously — a dissertation chapter, a co-authored paper, a grant proposal — each drawing on partially overlapping literatures. You need a system organized around project contexts with shared concept infrastructure underneath. Your vault's backbone will be a project layer sitting above a shared literature layer, with clear tagging to prevent cross-contamination.

You may be a hybrid, but one archetype dominates. Identify it now — it determines your folder structure, your note templates, and your linking strategy in every subsequent chapter.

---

Real-World Example

Amara Chen is a third-year sociology PhD student studying housing policy and urban displacement. She has 247 papers in Zotero, organized into four folders: "Housing," "Policy," "Methods," and "Other." She has annotations in Zotero's PDF reader for about 60 papers, a Google Doc called "Reading Notes 2023" with summaries of 30 more, and a physical notebook with observations from a conference she attended in October.

When her advisor asked her to draft the theoretical framework section of her dissertation in January, Amara spent the first three days re-reading papers she'd already annotated — because she couldn't remember which papers contained which arguments, and her Zotero annotations weren't searchable by concept. She calculated her Re-Reading Tax at 11 hours per month.

Amara is a Surveyor. Her dissertation spans urban sociology, public policy, and critical geography — three literatures she needs to synthesize, not master individually. The right vault structure for her is concept-first: notes on debates like "right to the city," "displacement as accumulation," and "policy feedback loops" that pull together multiple sources, rather than one note per paper.

Her 90-day milestone: a complete theoretical framework section (approximately 4,000 words) assembled from interconnected concept notes in Obsidian, submitted to her advisor by April 15th.

After running the Knowledge Leakage Audit™, Amara identified her three highest-impact leak points: fragmented annotation (Leak Point 4), zero connection infrastructure (Leak Point 5), and writing from scratch (Leak Point 7). Those three points became her implementation priorities — in that order.

---

Worksheet: The Knowledge Leakage Scorecard

Rate each statement from 1 (never true) to 5 (always true). Be brutally honest — optimistic scoring only delays the fix.

Section A: Retrieval Speed (max 20 points)

1. I can locate a specific argument from a paper I read 6 months ago in under 2 minutes. \_\_\_
2. My notes are stored in one system I check consistently. \_\_\_
3. I can search my notes by concept, not just by author or title. \_\_\_
4. When I need a citation for a specific claim, I find it without re-reading the paper. \_\_\_

Section B: Connection-Making (max 20 points)

5. I have a record of how Paper A's argument relates to Paper B's argument. \_\_\_
6. I regularly discover unexpected connections between papers I read months apart. \_\_\_
7. My notes link to other notes, not just to source PDFs. \_\_\_
8. I can trace a single concept (e.g., "legitimacy" or "measurement invariance") across all papers that address it. \_\_\_

Section C: Writing Readiness (max 20 points)

9. My notes are written in my own words, not copied passages. \_\_\_
10. I could draft a 500-word synthesis of a key debate in my field using only my existing notes. \_\_\_
11. My notes include my own critical commentary, not just summaries. \_\_\_
12. I know which papers support which specific claims in my argument. \_\_\_

Section D: Idea Capture (max 20 points)

13. When I have an original idea while reading, I record it in a retrievable place. \_\_\_
14. I revisit captured ideas regularly and develop them further. \_\_\_
15. I have a system for distinguishing my ideas from ideas I'm attributing to sources. \_\_\_
16. Original research questions have emerged from noticing patterns across my notes. \_\_\_

Section E: System Consistency (max 20 points)

17. I use the same note format every time I process a paper. \_\_\_
18. My system has not changed significantly in the last 6 months. \_\_\_
19. I trust my system enough that I don't feel anxiety about "losing" an idea. \_\_\_
20. I spend less than 10 minutes per paper on administrative tasks (filing, tagging, linking). \_\_\_

---

Scoring Your Results

| Score | Diagnosis |
|-------|-----------|
| 80–100 | Optimized — you're here for refinement, not rescue |
| 60–79 | Functional but leaking — 2-3 targeted fixes will unlock significant gains |
| 40–59 | Fragmented — your system is costing you 8+ hours per month |
| 20–39 | Critical leakage — you are re-doing intellectual work constantly |
| Under 20 | No system — everything you've read is functionally inaccessible |
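The banding is mechanical, so a small helper can turn a total into its diagnosis; a minimal sketch (the function name is illustrative):

```python
def leakage_diagnosis(total_score: int) -> str:
    """Map a 20-item scorecard total (each item rated 1-5) to its diagnosis band."""
    if total_score >= 80:
        return "Optimized"
    if total_score >= 60:
        return "Functional but leaking"
    if total_score >= 40:
        return "Fragmented"
    if total_score >= 20:
        return "Critical leakage"
    return "No system"

print(leakage_diagnosis(67))  # → Functional but leaking
```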

Your three lowest-scoring sections (identify them now):

1. Lowest section: \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ Score: \_\_\_\_
2. Second lowest: \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ Score: \_\_\_\_
3. Third lowest: \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ Score: \_\_\_\_

Chapter 2: Vault Architecture: Building Your Research Operating System from Day One

You've already identified where your knowledge is leaking. Now it's time to build the container that stops the leak permanently — and unlike the scattered folder systems you've tried before, this one is designed specifically around how academic thinking actually works.

---

The ATLAS Vault Architecture™

Most researchers who open Obsidian for the first time make the same mistake: they replicate their existing folder chaos inside a new app. ATLAS is different. It's built on a single governing principle — every note has exactly one home, decided by its function, not its topic. This eliminates the decision fatigue that kills PKM systems before they reach 50 notes.

ATLAS stands for five root folders: Archives, Thinking, Literature, Active Projects, Sources.

---

A — Archives

Completed project materials, old course notes, finished manuscripts, and anything you're keeping for reference but no longer actively developing. Archive is not a trash bin — it's a cold storage unit. Notes here are fully processed and linked. You'll visit them when a new project reaches back for prior work.

T — Thinking

Your intellectual engine room. This is where two note types live: your Daily Research Journal (DRJ) entries and your Permanent Notes (also called Evergreen Notes in some systems). The distinction matters enormously in academic work:

- Daily Research Journal entries are time-stamped, low-friction captures. You write them the way you'd talk to a trusted colleague — messy, speculative, incomplete. Use them to log what you read today, what confused you, what half-formed idea you don't want to lose. Format: `DRJ-YYYY-MM-DD`
- Permanent Notes are distilled, standalone claims — one idea per note, written in your own words, linked to at least two other notes. These are the notes that eventually become sentences in your papers. Format: `PN-[concept-slug]`

Never mix these two types. A DRJ entry that contains a genuinely good idea gets extracted into a Permanent Note — it doesn't become one by sitting there.

L — Literature

One note per source. Every paper, book chapter, report, or preprint you've engaged with gets a Literature Note here. These are generated automatically through your Zotero pipeline (detailed below) and follow a strict template. Format: `LIT-[citekey]`

A — Active Projects

One subfolder per active research project, dissertation chapter, grant application, or paper under review. Notes here are working documents: outlines, argument maps, draft sections, reviewer response letters. When a project is submitted or shelved, the entire subfolder moves to Archives intact.

S — Sources

Raw material that hasn't been processed yet: imported PDFs (if you store them in Obsidian), images of whiteboards, voice memo transcripts, screenshots of key figures. Think of Sources as your inbox — material enters here and gets processed into Literature or Thinking notes. Nothing stays in Sources permanently.

---

The Plugin Stack: Installation and Configuration

Install these four plugins in this order. Each one depends on the previous.

1. Zotero Integration (by mgmeyers)

In Obsidian: Settings → Community Plugins → Browse → search "Zotero Integration" → Install → Enable. In the plugin settings, set your PDF attachment path to match your Zotero storage location. Set the note import folder to `L — Literature`. This plugin is the bridge.

2. Templater (by SilentVoid)

Install and enable. Set your templates folder to a subfolder called `_Templates` inside your vault root (not inside any ATLAS folder — it sits outside the five). Templater will inject dynamic content — citekeys, dates, author names — into your note stubs automatically.

3. Dataview (by blacksmithgu)

Install and enable. No configuration needed at setup. You'll use Dataview queries later to generate dynamic literature review tables, track reading status across your L folder, and surface unlinked permanent notes. Leave default settings.

4. Citations (by hans)

Point this plugin to your Zotero Better BibTeX `.bib` export file. In Zotero: File → Export Library → Better BibTeX → check "Keep Updated" → save to your vault root as `references.bib`. This keeps your citation database live without manual exports.
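For orientation, a single entry in that `references.bib` export looks roughly like this — a sketch using the Desmond book discussed later in this chapter; the exact citekey format depends on your Better BibTeX settings:

```bibtex
@book{desmond2016evicted,
  author    = {Desmond, Matthew},
  title     = {Evicted: Poverty and Profit in the American City},
  publisher = {Crown},
  year      = {2016}
}
```

The citekey (`desmond2016evicted` here) is what the Citations plugin autocompletes in Obsidian and what your `LIT-[citekey]` note names are built from.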

---

The Zotero-to-Obsidian Pipeline

Here's how a paper moves from Zotero into your vault as a clean, linked Literature Note:

1. You finish annotating a paper in Zotero (highlights + comments).
2. In Obsidian, open the Command Palette (Cmd/Ctrl+P) → type "Zotero Integration: Import Notes."
3. Select your paper by author or title. The plugin pulls: citekey, title, authors, year, journal, abstract, and all your annotations with their page numbers.
4. Templater fires automatically, wrapping everything in your Literature Note template.
5. The note lands in `L — Literature` named `LIT-AuthorYear` and is immediately ready for linking.

Your Literature Note template should contain these fields at minimum:

```
---
citekey: {{citekey}}
authors: {{authors}}
year: {{year}}
journal: {{journal}}
status: unprocessed
tags: []
---

## Core Argument

## Methodology

## Key Evidence

## Tensions / Limitations

## My Response

## Links to Permanent Notes
```

The `status` field (unprocessed → reading → processed) becomes a Dataview filter later. The "My Response" section is where your thinking enters — this is what separates a literature note from a glorified abstract.
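As a sketch, a Dataview query like this surfaces everything still awaiting processing — folder and field names follow the ATLAS setup above; adjust them to your vault:

```dataview
TABLE authors, year, status
FROM "L — Literature"
WHERE status != "processed"
SORT year DESC
```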

---

Tag Taxonomy Design for Academic Work

Generic tags like `#interesting` or `#toread` are useless at scale. Design your taxonomy across three axes:

Methodological tags: `#method/ethnography`, `#method/RCT`, `#method/discourse-analysis`, `#method/computational`

Theoretical framework tags: `#theory/institutionalism`, `#theory/feminist`, `#theory/STS`, `#theory/behaviorist` — use whatever frameworks anchor your field

Evidence-strength tags: `#evidence/strong`, `#evidence/suggestive`, `#evidence/contested`, `#evidence/anecdotal`

Apply all three axes to every Literature Note. A Dataview query can then pull every `#method/ethnography` + `#evidence/strong` paper in your vault in seconds — which is exactly what you need when a reviewer asks you to justify your methodological choices.
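Such a query might look like this — a sketch; the exact `contains` target depends on whether your tags live in frontmatter or inline:

```dataview
LIST
FROM "L — Literature"
WHERE contains(file.tags, "#method/ethnography")
  AND contains(file.tags, "#evidence/strong")
```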

---

Real-World Example

Priya Nair is a third-year sociology PhD student studying housing precarity. Before ATLAS, she had 340 papers in Zotero, annotations in three different apps, and a folder on her desktop called "FINAL READING NOTES v3" containing 12 Word documents.

She sets up ATLAS in an afternoon. Her first pipeline test: she imports Desmond's Evicted (2016), a key theoretical text she'd annotated six months ago but couldn't remember in detail. Zotero Integration pulls 23 annotations with page numbers. Templater formats them into `LIT-Desmond2016`. She spends 20 minutes writing her "My Response" section and links it to two existing Permanent Notes: `PN-housing-as-welfare` and `PN-eviction-feedback-loops`.

Three weeks later, drafting her literature review chapter, she runs a Dataview query: `WHERE contains(tags, "#theory/institutionalism") AND status = "processed"`. Fourteen notes surface. She opens each "My Response" section, copies her own sentences into a draft document, and has a 600-word literature review skeleton in 90 minutes — built entirely from her own prior thinking, not re-reading.
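Written out as a full Dataview block, her query might look like this — a sketch that assumes the `status` field and `tags` list from the Literature Note template above:

```dataview
LIST
FROM "L — Literature"
WHERE contains(file.tags, "#theory/institutionalism") AND status = "processed"
```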

---

Worksheet: The Vault Launch Checklist

Complete these 35 steps sequentially. Check each box only when fully verified, not just attempted.

Phase 1: Obsidian Installation (Steps 1–7)

[ ] 1. Download Obsidian from obsidian.md — confirm version 1.5 or later
[ ] 2. Create a new vault named `[YourName]-Research` in a location synced to cloud backup (iCloud, Dropbox, or OneDrive)
[ ] 3. Create five root folders exactly as named: `A — Archives`, `T — Thinking`, `L — Literature`, `A — Active Projects`, `S — Sources`
[ ] 4. Create `_Templates` folder at vault root (outside the five ATLAS folders)
[ ] 5. Enable Settings → Editor → "Strict line breaks" OFF, "Readable line length" ON
[ ] 6. Enable Settings → Files & Links → "New link format: Relative path to file"
[ ] 7. Enable Settings → Files & Links → "Default location for new notes: In the folder specified below" → set to `S — Sources`

Phase 2: Plugin Installation (Steps 8–18)

[ ] 8. Settings → Community Plugins → turn off Safe Mode
[ ] 9. Install and enable Templater
[ ] 10. In Templater settings: set template folder to `_Templates`; enable "Trigger Templater on new file creation"
[ ] 11. Install and enable Dataview
[ ] 12. In Dataview settings: enable "Enable JavaScript queries" and "Inline queries"
[ ] 13. Install and enable Zotero Integration
[ ] 14. In Zotero: install Better BibTeX plugin from retorque.re/zotero-better-bibtex
[ ] 15. In Zotero Better BibTeX: right-click your library → Export → Better BibTeX → check "Keep Updated" → save as `references.bib` to vault root
[ ] 16. In Zotero Integration settings: set import folder to `L — Literature`; link to your Zotero PDF storage path
[ ] 17. Install and enable Citations
[ ] 18. In Citations settings: point to `references.bib` in vault root; verify citation count loads correctly

Phase 3: Template Creation (Steps 19–25)

[ ] 19. Create `_Templates/Literature Note Template.md` — paste the template structure from the pipeline section above
[ ] 20. Create `_Templates/Daily Research Journal.md` with fields: Date, What I read today, Confusions, Half-formed ideas, Extractions needed
[ ] 21. Create `_Templates/Permanent Note.md` with

Chapter 3: The Literature Processing Pipeline: From PDF to Permanent Knowledge in 25 Minutes

You've done the Knowledge Leakage Audit and you know exactly where your ideas are disappearing. Now comes the fix — a reading protocol that transforms a PDF from a temporary experience into a permanent intellectual asset, in less time than it takes to find your old notes on the same paper.

---

The PRISM Reading Protocol™

Most researchers read in extraction mode: scanning for relevant quotes, highlighting sentences that feel important, and trusting future-you to remember why. The problem isn't laziness — it's that highlighting is cognitively cheap. It requires recognition, not analysis. You can highlight an entire paper and understand nothing about how it actually fits your argument.

The PRISM Reading Protocol™ forces analytical reading in a single pass by giving each minute of your attention a specific job. PRISM stands for:

P — Purpose: What question is this paper trying to answer, and why does that question matter to your project?
R — Results: What did they actually find, claim, or argue? (Not what the abstract says — what the paper delivers)
I — Implications: What does this mean for the field, and more specifically, for your research problem?
S — Strengths/Weaknesses: What makes this paper credible or limited? Sample size, methodology, theoretical assumptions, scope conditions?
M — Mesh Points: Where does this paper connect to sources you've already processed? What tensions, confirmations, or extensions does it create?

Each letter is a layer of thinking, not just a category of information. The sequence matters: you can't identify Mesh Points until you've assessed Strengths and Weaknesses, because a methodologically weak paper meshes differently with your argument than a robust one.

The 3-Layer Annotation Strategy

While reading, you're making three distinct types of annotations — and keeping them visually separate:

1. Factual Extraction (what they said): Direct quotes or close paraphrases with page numbers. These are your evidentiary anchors. Use a consistent marker — in PDF annotation tools, blue highlights work well.
2. Analytical Response (what you think): Your immediate reaction to a claim, method, or finding. This is where you write "this contradicts Smith 2019" or "their operationalization of X is too narrow for my context." Use a different color — orange or yellow — and write in first person. These notes are for you, not for citation.
3. Connection Flags (what this links to): Any moment where a concept, term, or argument triggers a connection to another paper, a theoretical framework, or an emerging idea in your vault. Mark these with a symbol — a double asterisk (\*\*) or a bracket — and write the linked concept name immediately. These become your Mesh Points.

If you're working in a PDF reader like Zotero's built-in viewer, Highlights.app, or PDF Expert, you can use color-coded highlights plus comments. If you're working with physical printouts, use three different colored pens. The medium matters less than the discipline of keeping the three layers distinct.

---

The 25-Minute Literature Note Session

This is a timed structure. Use a timer. The constraint is the point.

Minutes 1–15: Read and Annotate

Read with the PRISM categories in mind, but don't stop to write the note yet. Annotate using the 3-layer strategy. For empirical papers, spend extra time on the methods section — this is where most researchers skim and later regret it. For theoretical papers, slow down on the conceptual definitions; authors often bury their actual argument in a subordinate clause on page 8. For review articles, focus on how the authors are organizing the field — their taxonomy is itself an argument worth capturing. For methodology papers, your factual extraction should include specific procedural details you might need to replicate or adapt.

Minutes 16–22: Write the PRISM Note in Obsidian

Open your literature note template (see Worksheet below) and complete each section. Don't write prose — write in compressed, analytical sentences. Each PRISM section should be 2–4 sentences maximum. The goal is density, not coverage. If you're summarizing rather than analyzing, you're doing it wrong.

End every PRISM note with the "So What?" test: one sentence that answers the question How does this paper change, complicate, or confirm my current thinking? This is a forcing function. If you can't write that sentence, you haven't processed the paper — you've just described it. Go back to the Implications and Mesh Points sections and push harder.

Minutes 23–25: Link to Existing Vault Notes

Open your Obsidian graph or use the Quick Switcher (Cmd/Ctrl + O) to find the notes this paper connects to. Add `[[wikilinks]]` in the Mesh Points section. If a connection doesn't have a note yet, create a stub — a note with just a title and a one-sentence placeholder. These stubs become your intellectual to-do list. Three minutes is enough to add 3–5 meaningful links. If you're finding zero connections, that's diagnostic information: either this paper is genuinely isolated (rare) or your vault is still too sparse (common in early weeks, which is why you start with your most-cited papers first).

---

Handling Different Paper Types

The PRISM protocol adapts slightly depending on what you're reading:

Empirical Studies: Weight the S (Strengths/Weaknesses) section heavily. Note sample characteristics, measurement validity, and generalizability limits explicitly. Your future self writing a methods critique will thank you.

Theoretical Papers: The R (Results) section becomes "core claims" rather than findings. Capture the paper's central conceptual move — what distinction it draws, what it reframes, what it argues is wrong about existing theory.

Review Articles: Treat the paper's organizational logic as a finding. In your Mesh Points, note which papers the review authors cite that you haven't read yet — these are priority additions to your reading list.

Methodology Papers: Add a sixth field to your template: Applicability — one sentence on whether and how you could use this method in your own work.

---

Real-World Example

Amara Chen is a third-year sociology PhD student studying organizational responses to climate disclosure mandates. She has 240 papers in Zotero, 60 of which have annotations — scattered across the Zotero comment field, a Google Doc titled "notes misc," and three physical notebooks.

She opens a 2022 paper on institutional isomorphism and ESG reporting. Old method: she'd highlight 15 passages, write "important — connects to DiMaggio?" in a comment, and close the PDF. Six months later, writing her literature review, she'd re-read the entire paper because she couldn't reconstruct her thinking from those highlights.

Using PRISM, her 25-minute session produces this note:

P: Investigates whether regulatory pressure or peer mimicry drives ESG disclosure adoption in Fortune 500 firms. Directly relevant to my Chapter 2 argument about coercive vs. mimetic isomorphism.

R: Finds that mimetic pressure (peer adoption rates) predicts disclosure quality more strongly than regulatory mandates, particularly in sectors with high analyst coverage. Effect size is moderate (β = .34) but consistent across three industry subsamples.

I: Complicates my assumption that regulatory frameworks are the primary driver. May need to revise my theoretical model to weight peer effects more heavily in high-visibility industries.

S: Strong: longitudinal design (2015–2021), large N (487 firms). Weak: operationalizes "disclosure quality" using a proprietary index that isn't publicly available — limits replicability and my ability to use their measure.

M: Connects to [[DiMaggio & Powell 1983]] (foundational isomorphism framework), [[Marquis & Lounsbury 2007]] (field-level variation), and my own emerging note [[Regulatory Salience vs. Peer Salience]] which I haven't fully developed yet.

So What?: This paper forces me to treat mimetic and coercive isomorphism as empirically separable rather than theoretically equivalent, which changes how I need to operationalize "institutional pressure" in my survey instrument.

That last sentence took her four minutes to write. It also just saved her three hours of confusion during her dissertation proposal defense.

---

Worksheet: The PRISM Practice Lab

Instructions: Select 5 papers from your current project — ideally a mix of paper types. For each paper, run the 25-minute timed session and complete the template below. After all five, complete the reflection section.

---

PRISM Literature Note Template (Copy into Obsidian for each paper)

```
---
title: [Author Last Name, Year — Short Title]
authors:
year:
journal/source:
paper-type: [empirical / theoretical / review / methodology]
date-processed:
tags: [topic-tags] [method-tags]
---

P — Purpose
What question does this paper answer? Why does it matter to my project?
[Your 2–3 sentences here]

R — Results / Core Claims
What did they actually find or argue?
[Your 2–3 sentences here]

I — Implications
What does this mean for the field and for my research problem specifically?
[Your 2–3 sentences here]

S — Strengths & Weaknesses
What makes this credible? What limits it?
Strengths:
Weaknesses:

M — Mesh Points
Where does this connect in my vault?
[[Note 1]] — [one-sentence description of connection]
[[Note 2]] — [one-sentence description of connection]
[[Stub: Concept to develop]] — [why this needs its own note]

So What?
One sentence: How does this paper change, complicate, or confirm my current thinking?
[Your sentence here]

---

Processing time: ___ minutes
Annotation layers used: Factual / Analytical / Connection
```

---

Reflection Section (Complete after processing all 5 papers)

| Paper | Processing Time | # Mesh Points Found | Could you write a paragraph from this note alone? (Y/N) |
|---|---|---|---|
| Paper 1 | | | |
| Paper 2 | | | |
| Paper 3 | | | |
| Paper 4 | | | |
| Paper 5 | | | |

Comparison questions (write 2–3 sentences each):

1. Pick one paper you previously annotated using your old method. Compare what you captured then versus what the PRISM note captures.

Chapter 4: Atomic Notes and the Art of Thinking in Concepts, Not Papers

You've done the hard work of reading carefully and capturing literature notes. Now those notes are sitting in your vault, organized by author and year, and you're starting to suspect they're just a slightly better version of your old PDF folder. That suspicion is correct — and this chapter fixes it.

The Concept Extraction Engine™

The fundamental problem with source-organized notes isn't that they're wrong — it's that they mirror the structure of your library, not the structure of your thinking. When you write a literature review, you don't think "first I'll discuss Smith (2018), then Jones (2020)." You think "the debate about X has three positions." The Concept Extraction Engine™ is the process of rebuilding your notes around that second structure — the one that actually produces arguments.

Here's how it works across five stages:

Stage 1: Concept Scan (15 minutes per batch of 10 literature notes)

Open your existing PRISM literature notes and read only the "Results / Core Claims" and "So What?" fields. Don't re-read the papers. As you scan, write down every noun phrase that appears more than once across different notes. These recurring phrases — "institutional isomorphism," "measurement invariance," "narrative identity," "settler colonialism" — are your concept candidates. You're not inventing categories; you're noticing what your reading has already been circling.

Stage 2: Atomic Note Creation

For each concept candidate, create a new note. The anatomy of a valid atomic concept note has exactly five components:

Title: The concept stated as a claim, not a label. Not "Institutional Isomorphism" but "Organizations adopt similar structures not for efficiency but to gain legitimacy." A claim-title forces you to have a position.
Body: 3-5 sentences in your own words. No block quotes. If you can't explain it without quoting, you don't understand it yet — mark it as a seed and move on.
Source Backlinks: `[[Smith2018]]`, `[[Jones2020]]` — every literature note that informed this concept note, linked inline where relevant.
Status Tag: `#seed` (one source, still fuzzy), `#developing` (two sources, position forming), `#evergreen` (three or more sources, you can defend this in a seminar).
Agreement Markers: A brief explicit statement — "I find this convincing because..." or "I'm skeptical because..." — not optional. Unmarked notes are intellectual dead weight.
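Assembled, a minimal atomic note might look like this (the title, sources, and wording are illustrative, stitched from the examples above rather than a real vault):

```
Organizations adopt similar structures not for efficiency but to gain legitimacy

Firms converge on common structures because conformity signals
legitimacy to regulators, funders, and peers, even when the adopted
structure is technically suboptimal. Convergence is therefore a
survival strategy rather than an optimization outcome.

Sources: [[Smith2018]] (defines the mechanism), [[Jones2020]] (challenges its scope)

#developing

I find this convincing because it explains adoption patterns that
efficiency accounts cannot.
```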

Stage 3: The Three-Source Rule

A concept note earns `#evergreen` status only when you can articulate how three different sources relate to it — and "relate" means something specific. Source A might define the concept, Source B might challenge its scope, and Source C might apply it in a context that reveals a hidden assumption. That triangulation is what transforms a note from a summary into a scholarly position. Until you have three sources, the note stays `#developing`. This isn't arbitrary gatekeeping — it's the minimum threshold for a claim you could defend in print.

Stage 4: Splitting and Lumping

As your concept notes accumulate, two failure modes appear. The first is notes that are too broad — if your note titled "Power operates through discourse" contains four distinct mechanisms, each mechanism deserves its own note. A reliable test: if the note has more than two `[[source]]` backlinks doing completely different argumentative work, it probably needs to be split. The second failure mode is notes that are secretly identical — you might have "Foucauldian surveillance" and "panopticism as social control" as separate notes when they're the same idea approached from different angles. When two notes would always appear together in an argument, merge them and keep both titles as aliases.

Stage 5: Building Argument Chains

Once you have 15-20 atomic notes, you can start linking them sequentially to represent a line of reasoning. In Obsidian, create a "chain note" — a note whose entire purpose is to link atomic notes in logical order with one-sentence transitions. An argument chain might read: `[[Institutions seek legitimacy]] → [[Legitimacy requires conformity to field norms]] → [[Conformity suppresses technical efficiency]] → [[This explains isomorphism in nonprofits]]`. Each arrow is a link. Each linked note is a fully developed atomic note with sources. You've just built the skeleton of a literature review section in under an hour.
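The chain note itself needs nothing more than links plus one-sentence transitions; a sketch using the chain above (the transition sentences are illustrative):

```
[[Institutions seek legitimacy]]
→ because legitimacy is judged against shared expectations:
[[Legitimacy requires conformity to field norms]]
→ and conformity constrains design choices:
[[Conformity suppresses technical efficiency]]
→ which is most visible where efficiency pressure is weakest:
[[This explains isomorphism in nonprofits]]
```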

---

Real-World Example

Imagine a fourth-year sociology PhD student, Priya, studying how remote work policies reproduce gender inequality. She has 47 PRISM literature notes covering organizational theory, feminist labor studies, and COVID-era workplace research. Her vault is organized by author. She can't see the argument she wants to make.

She runs Stage 1 and finds "flexibility stigma" appearing in notes on Williams (2010), Munsch (2016), and a 2021 policy brief. She creates an atomic note titled: "Flexibility stigma penalizes workers who use accommodations designed for them, regardless of productivity." She writes three sentences in her own words, links all three sources, marks it `#developing` because she only has two academic sources so far.

Two weeks later, while processing a new paper by Chung (2022) on remote work uptake, she finds a third source that reframes flexibility stigma as operating differently by gender in remote versus in-person contexts. She updates the note, adds the backlink, upgrades it to `#evergreen`, and adds an agreement marker: "I find Chung's gender-asymmetry argument more precise than Williams's original framing — it explains the cases Williams's model can't."

She then builds an argument chain: `[[Remote work increases schedule autonomy]] → [[Schedule autonomy triggers flexibility stigma for women but not men]] → [[Flexibility stigma suppresses women's promotion rates]] → [[Remote work policies reproduce inequality despite neutral framing]]`. That chain is the core argument of her third dissertation chapter, assembled from notes she already had.

---

Worksheet: The Concept Mining Worksheet

Use this template after completing at least 10 PRISM literature notes. Work through each section in order.

---

PART 1: Concept Scan

List the 10 literature notes you're mining (use note titles):

1. `_______________`
2. `_______________`
3. `_______________`
4. `_______________`
5. `_______________`
6. `_______________`
7. `_______________`
8. `_______________`
9. `_______________`
10. `_______________`

Recurring concept candidates (noun phrases appearing in 2+ notes):

`_______________`
`_______________`
`_______________`
`_______________`
`_______________`

---

PART 2: Atomic Note Template (copy for each concept)

Concept Note Title (stated as a claim):

`_______________________________________________`

Body (3-5 sentences, your own words, no quotes):

`_______________________________________________`

`_______________________________________________`

`_______________________________________________`

Source Backlinks:

`[[ ]]` — role in this note: `_______________`

`[[ ]]` — role in this note: `_______________`

`[[ ]]` — role in this note: `_______________`

Status Tag: ☐ `#seed` ☐ `#developing` ☐ `#evergreen`

My Position (circle one): Agree / Disagree / Partially agree / Undecided

Agreement/Disagreement Marker:

`I find this [convincing/problematic] because: _______________`

---

PART 3: Split or Merge Check

For each atomic note, answer:

Does this note contain more than one distinct mechanism? ☐ Yes → Split ☐ No → Keep
Does another note in your vault make the same claim differently? ☐ Yes → Merge ☐ No → Keep

Notes to split: `_______________`

Notes to merge: `_______________`

---

PART 4: Argument Chain Builder

Arrange 4-6 atomic notes in logical sequence to form one argument:

`[[ ]]` → `[[ ]]` → `[[ ]]` → `[[ ]]`

Transition logic (one sentence between each link):

Step 1→2: `_______________`

Step 2→3: `_______________`

Step 3→4: `_______________`

---

PART 5: Graph View Annotation

After linking your atomic notes in Obsidian, open Graph View and take a screenshot. Print or paste it here. Annotate with:

Unexpected clusters (nodes that connected in ways you didn't predict): `_______________`
Isolated nodes (concepts with no links — these are research gaps or underread areas): `_______________`
Hub nodes (concepts linked to 5+ other notes — these are likely central to your argument): `_______________`
One research question this graph suggests that you hadn't considered before: `_______________`

---

Quick Checklist

[ ] Every atomic note title is a claim, not a label (test: does it contain a verb?)
[ ] Every note is written entirely in your own words — no block quotes in the body
[ ] Every `#evergreen` note has exactly three or more source backlinks with distinct argumentative roles noted
[ ] Every note has an explicit agreement or disagreement marker filled in
[ ] You've run the split/merge check on all notes created in this session
[ ] At least one argument chain exists connecting four or more atomic notes
[ ] You've opened Graph View and identified at least one unexpected connection and one gap
[ ] All new atomic notes are backlinked from their source literature notes (bidirectional linking)

---

Common Mistakes

1. Titling notes with concepts instead of claims — This happens because it feels more "neutral" and academically safe to write "Social Capital" rather than take a position. The fix: every time you write a concept-label title, ask "what does this concept do or mean in the literature?" and rewrite the title as the answer. "Social Capital" becomes "Social capital converts network ties into resource access asymmetrically by class." Now the note has argumentative direction.
2. Rushing notes to `#evergreen` status with superficial third sources — Researchers under deadline pressure grab a third citation just to upgrade the tag, without checking whether the third source actually adds a new angle. The fix: when adding a third source, you must write a new sentence in the note body that specifically explains what the third source contributes that the first two didn't. If you can't write that sentence, the source doesn't count toward `#evergreen` status.

Chapter 5: Research Question Incubation: Using Your Vault to Generate Original Ideas

You've built the vault, processed the papers, and extracted the concepts. Now comes the part most researchers never systematize: the moment when two ideas collide and something genuinely new emerges. That moment is not random — it's engineerable.

---

The Collision Protocol™

Most researchers generate research questions the same way they always have: by staring at a blank page after reading a pile of papers, hoping inspiration strikes. The Collision Protocol™ replaces that passive hope with a structured weekly practice that forces your vault to surface contradictions, gaps, and cross-disciplinary opportunities you would otherwise miss entirely.

There are three types of productive collisions your vault can generate:

Type 1: Contradiction Collisions

Source A makes a claim. Source B, which you've also read and noted, directly contradicts it. These contradictions are gold — they signal either a genuine empirical dispute, a methodological incompatibility, or a theoretical boundary condition that hasn't been named yet. When your concept notes are properly linked (as built in Chapter 4), these contradictions become visible as competing claims on the same concept node.

Type 2: Gap Collisions

This is the subtler type. Every field has foundational assumptions that everyone cites but nobody tests. A Gap Collision occurs when you notice that a cluster of papers all assume a relationship without ever directly measuring it, or when a concept appears repeatedly in your vault but has no empirical literature note attached to it — only theoretical ones. These are the "everyone knows this, but does anyone actually know this?" moments.

Type 3: Transfer Collisions

A method, framework, or analytical tool from one field lands on a problem in a completely different field. Behavioral economics concepts applied to science communication. Network analysis methods applied to historical correspondence. Ecological resilience theory applied to organizational change. Your vault, spanning multiple literatures if you've been reading broadly, is uniquely positioned to surface these.

---

The Weekly Collision Session: A 30-Minute Ritual

Run this session once per week, ideally on the same day. Block it in your calendar as non-negotiable research time.

Step 1 — Random Note Activation (5 minutes)

Open Obsidian and trigger the Random Note feature three times. Don't choose the notes — let the vault choose. Open all three in split panes. Read only your own synthesis sentences, not the full note.

Step 2 — Local Graph Scan (5 minutes)

For each of the three random notes, open its Local Graph (right-click → Open Local Graph). Set depth to 2. Look for nodes that appear in two or more of the three graphs simultaneously. These shared nodes are your collision candidates.

Step 3 — Dataview Surfacing (10 minutes)

Run the following queries in a dedicated `Collision Session` note. These are designed to surface structurally weak areas of your vault — the places where your knowledge is thin, disconnected, or over-reliant on a single source.

To find orphan concept notes (concepts you extracted but never linked to literature):

```dataview
LIST
FROM "Concepts"
WHERE length(file.inlinks) = 0
SORT file.mtime ASC
```

To find under-connected literature notes (papers you processed but haven't linked to concepts):

```dataview
LIST
FROM "Literature"
WHERE length(file.outlinks) < 3
SORT file.ctime DESC
LIMIT 20
```

To find over-cited sources (potential blind spots where you're over-relying on one voice):

```dataview
TABLE length(file.inlinks) AS "Cited By"
FROM "Literature"
SORT length(file.inlinks) DESC
LIMIT 10
```

Step 4 — Question Generation (10 minutes)

Using what the random notes and Dataview results surfaced, write at least two candidate research questions in your `Questions Queue` MOC. Don't filter yet — volume matters at this stage.

---

The Research Question Maturity Ladder

Not every question that emerges from a Collision Session is ready for a grant proposal. Use this four-rung ladder to track and develop each question over time:

Rung 1 — Vague Curiosity: "There's something interesting about how X relates to Y."

Template: `I notice that [concept/phenomenon] seems connected to [concept/phenomenon] but I don't know how.`

Rung 2 — Specific Question: "Does X influence Y under condition Z?"

Template: `What is the relationship between [variable/concept A] and [variable/concept B] in the context of [population/setting/condition]?`

Rung 3 — Testable Hypothesis: "I predict that X increases Y when Z is present, because of mechanism M."

Template: `[Variable A] will [increase/decrease/moderate] [Variable B] when [condition Z] because [theoretical mechanism].`

Rung 4 — Positioned Contribution: "This study addresses gap G in literature L by testing H using method M, which advances theory T."

Template: `While prior work has established [what we know], no study has examined [specific gap]. This study tests [hypothesis] using [method], contributing to [theoretical framework] by [specific advancement].`

Tag each question in your Questions Queue with its current rung: `#rung1`, `#rung2`, etc. Your goal each week is to move at least one question up one rung.
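If each question in the Queue is kept as a list item tagged with its rung, a Dataview query can pull every question sitting at a given rung. A sketch, assuming the Queue lives in your `MOCs` folder and the tag conventions above:

```dataview
TABLE WITHOUT ID L.text AS "Question"
FROM "MOCs"
FLATTEN file.lists AS L
WHERE contains(L.tags, "#rung2")
```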

---

The Questions Queue MOC

Create a note titled `Questions Queue` in your `MOCs` folder. This is a living document — not a graveyard of abandoned ideas, but an active pipeline. Structure it as follows:

Each entry gets three ratings on a 1–5 scale:

Novelty (1 = incremental, 5 = genuinely new)
Feasibility (1 = requires 10 years and $2M, 5 = doable with current resources)
Field Demand (1 = nobody is asking this, 5 = multiple recent calls for papers on this topic)

A question scoring 12 or above across all three dimensions moves to your `Active Projects` folder immediately.
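One way to lay out the Queue is a table; the rows below are placeholders showing the scoring rule in action (question texts and scores are illustrative):

```
| Question | Rung | Novelty | Feasibility | Field Demand | Total | Status |
|---|---|---|---|---|---|---|
| [question text] | #rung2 | 4 | 3 | 4 | 11 | Developing |
| [question text] | #rung3 | 5 | 4 | 4 | 13 | → Active Projects |
```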

---

Real-World Example

Yemi Adeyinka is a third-year sociology PhD student studying urban food access. Her vault has 140 literature notes built using the PRISM Protocol from Chapter 3. During a Tuesday Collision Session, her Random Note feature surfaces three notes: one on cognitive load theory (from a psychology paper), one on participatory mapping methods (from a geography paper), and one on food desert measurement critiques (her core literature).

Her Local Graph scan reveals that the concept node `spatial perception` appears in both the cognitive load note and the food desert note — but there's no literature note connecting them. Her Dataview orphan query surfaces `mental maps` as a concept she extracted but never linked to any empirical study.

She writes this Rung 1 question: "There's something interesting about how residents cognitively represent food access versus how researchers spatially measure it."

By the end of the session, she's moved it to Rung 2: "Do residents' mental maps of food access diverge systematically from GIS-measured food desert boundaries, and does that divergence predict actual shopping behavior?"

She scores it: Novelty 4, Feasibility 4, Field Demand 3. Total: 11. Close — she flags it for one more week of development. Two weeks later, after finding two papers on mental mapping in transportation research (a Transfer Collision), it scores 13 and moves to Active Projects. That question becomes her dissertation's third chapter.

---

Worksheet: The Collision Session Playbook

Use this template for your first three weekly sessions. Copy it into a new note each week titled `Collision Session — [Date]`.

---

SESSION DATE: _______________

SESSION NUMBER (circle): 1 / 2 / 3

RANDOM NOTES PULLED:

1. Note title: _______________ | Core argument in one sentence: _______________
2. Note title: _______________ | Core argument in one sentence: _______________
3. Note title: _______________ | Core argument in one sentence: _______________

SHARED NODES FROM LOCAL GRAPH SCAN:

Concept/node appearing in 2+ graphs: _______________

Why is this overlap interesting? _______________

DATAVIEW RESULTS:

Orphan concepts surfaced (list up to 3): _______________
Under-connected literature notes (list up to 3): _______________
Most over-cited source: _______________ (cited by ___ notes)

COLLISION TYPE IDENTIFIED (circle): Contradiction / Gap / Transfer

RAW QUESTIONS GENERATED (write at least 2, don't filter):

1. _______________
2. _______________
3. _______________

TOP QUESTION THIS SESSION:

Question text: _______________

Current Maturity Rung (circle): 1 / 2 / 3 / 4

Novelty score (1–5): ___ | Feasibility score (1–5): ___ | Field Demand score (1–5): ___

Total: ___ / 15

If total ≥ 12: Move to Active Projects folder → ✓ / Not yet

Next development action for this question: _______________

---

PASTE THESE QUERIES INTO YOUR VAULT NOW:

Orphan concepts:

```dataview
LIST
FROM "Concepts"
WHERE length(file.inlinks) = 0
SORT file.mtime ASC
```

Under-connected literature:

```dataview
LIST
FROM "Literature"
WHERE length(file.outlinks) < 3
SORT file.ctime DESC
LIMIT 20
```

Over-cited sources:

```dataview
TABLE length(file.inlinks) AS "Cited By"
FROM "Literature"
SORT length(file.inlinks) DESC
LIMIT 10
```

---

Quick Checklist

[ ] `Questions Queue` MOC created in your MOCs folder with Novelty/Feasibility/Field Demand rating columns
[ ] Collision Session recurring calendar block set (30 minutes, weekly, same day)
[ ] Three Dataview queries installed and tested in a `Collision Session Template` note
[ ] Random Note feature confirmed working in Obsidian (Core Plugins → Random Note → toggle on)
[ ] Local Graph depth set to 2 as default (Settings → Graph → Local Graph depth)
[ ] Each existing concept note checked for the `#rung` tag system and at least one question already entered
[ ] Scoring rubric (Novelty + Feasibility + Field Demand, with the ≥ 12 threshold) applied to at least one question in the Queue

Chapter 6: The Literature Review Assembly Line: Writing from Your Vault, Not from Scratch

You've done the hard work — you've run the PRISM protocol on dozens of papers, extracted atomic concept notes using the Concept Extraction Engine, and built a vault that actually reflects your thinking. Now comes the moment every PhD student dreads: opening a blank document and trying to write a literature review section from memory and scattered notes. This chapter eliminates that experience entirely.

The Mosaic Drafting Method™

The central insight here is uncomfortable but liberating: writing a literature review is not a writing problem — it's a curation and sequencing problem. If you've built your vault correctly through Chapters 2–4, the intellectual content of your literature review already exists. Your concept notes contain the claims. Your literature notes contain the evidence. Your contradiction notes capture the counterpoints. Your synthesis annotations hold your analytical contribution. The draft is already there — it just hasn't been assembled yet.

The Mosaic Drafting Method™ treats your vault like a physical mosaic: the tiles already exist, your job is to arrange them into a coherent image, then grout the seams.

Step 1: Build a Literature Review MOC (Map of Content)

Create a new note titled `LitReview_MOC_[YourSection]` — for example, `LitReview_MOC_InstitutionalTrust`. This becomes your staging area. Open it alongside Obsidian's graph view filtered to your relevant tags. Now drag-link every concept note, literature note, and atomic idea that belongs in this section. Don't filter aggressively yet — include anything adjacent. You're casting wide before you cut.

Group these linked notes into thematic clusters by adding second-level headers inside your MOC. These clusters become your section headings. If you have eight concept notes that all orbit around "measurement validity in survey instruments," that's a subsection. Let the density of your notes reveal the structure — your vault is telling you what matters.
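At this stage the MOC is only headers and links. A sketch using this chapter's note names (the cluster groupings are illustrative):

```
# LitReview_MOC_InstitutionalTrust

## Measurement validity in survey instruments
- [[InstitutionalTrust_ConceptNote]]
- [[Smith2019_LitNote]]
- [[Contradiction_TrustMeasurement]]

## Trust in low-income contexts
- [[Jones2021_LitNote]]
- [[Synthesis_TrustInLowIncomeContexts]]
```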

Step 2: Sequence the Argument

Within each thematic cluster, order your notes into a logical argumentative sequence. Ask: what does the reader need to understand first? What claim sets up the next? This is where the intellectual work happens — not in writing sentences, but in deciding which tile goes next to which.

Step 3: Apply the 4-Move Paragraph Structure

Each paragraph in an academic literature review performs four moves. Structure your note sequence to execute all four:

Move 1 — Claim: Your concept note states the central assertion. This is the topic sentence, already written in your own words from your extraction work.
Move 2 — Evidence: Your backlinked literature notes provide the sourced support. Two to four sources, already annotated with page numbers and quotes.
Move 3 — Counterpoint: Your contradiction notes (flagged with `#tension` or `#contradicts` during PRISM reading) introduce the complication or dissenting view.
Move 4 — Synthesis: Your analytical annotations — the "so what" layer you added during concept extraction — deliver your original contribution to the conversation.

This structure prevents the most common lit review failure mode: the annotated bibliography disguised as a literature review, where you summarize source after source without ever making an argument.

Step 4: Assemble the Transclusion Draft

In a new note titled `DRAFT_[SectionName]_v1`, use Obsidian's embedded transclusion syntax to pull your notes directly into the draft:

```
![[InstitutionalTrust_ConceptNote]]
![[Smith2019_LitNote]]
![[Jones2021_LitNote]]
![[Contradiction_TrustMeasurement]]
![[Synthesis_TrustInLowIncomeContexts]]
```

Reading mode will render these as a continuous document. You're looking at a rough draft assembled in minutes from notes that took weeks to build. The prose is rough — it reads like stitched-together notes, because it is. That's intentional.

Step 5: Rewrite for Flow

Copy the rendered text into a new note or directly into Word. Now you're not writing — you're editing. You're smoothing transitions, adjusting register, and adding connective tissue between ideas that are already there. This is cognitively easier by an order of magnitude. Most researchers report cutting their drafting time by 50–70% at this stage, not because they're faster typists, but because they're no longer doing two jobs simultaneously: thinking and writing.

Step 6: Run the Citation Integrity Check

Before exporting, run this Dataview query in your vault to verify every claim in your draft links to a properly cited source note with full bibliographic metadata:

```dataview
TABLE file.name, citekey, author, year, title
FROM #literature-note
WHERE !citekey OR !author OR !year
SORT file.name ASC
```

Any note surfaced by this query has incomplete metadata. Fix it in Zotero first, re-sync via the Zotero Integration plugin, and verify the updated fields appear in Obsidian before you export. A draft with 40 citations and three missing citekeys will create hours of cleanup in Word — catch it here.
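For reference, the frontmatter of a literature note that passes the check looks like this (field names match the query; the values are illustrative):

```
---
citekey: smith2019
author: Smith, J.
year: 2019
title: Institutional trust and survey measurement
tags: [literature-note]
---
```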

Step 7: Export via Pandoc + Better BibTeX

With your draft note open, export using Pandoc from the terminal:

```bash
pandoc DRAFT_SectionName_v1.md \
  --bibliography=/path/to/your/library.bib \
  --csl=/path/to/apa7.csl \
  -o SectionName_draft.docx
```

Your Better BibTeX plugin (configured in Chapter 2) has already generated the `.bib` file from your Zotero library. Every `[@smith2019]` citation key in your Obsidian notes becomes a formatted citation in the output document. For LaTeX users, swap `docx` for `pdf` and add `--pdf-engine=xelatex`. The citations are not placeholders — they're real, formatted, and linked to your full bibliography.

---

Real-World Example

Scenario: Priya Mehta is a third-year PhD student in public health writing the theoretical framework section of her dissertation on community health worker retention in rural India. She has 67 literature notes in her vault tagged `#retention`, `#CHW`, and `#motivation-theory`.

She creates `LitReview_MOC_RetentionFrameworks` and links all 67 notes. Grouping by concept density, three clusters emerge naturally: intrinsic motivation theories, structural/systemic barriers, and community embeddedness factors. These become her three subsections — she didn't invent this structure, her vault revealed it.

For the intrinsic motivation subsection, she sequences eight concept notes and applies the 4-move structure: her concept note on self-determination theory as the Claim, four literature notes citing Deci, Ryan, and field adaptations as Evidence, a contradiction note flagging Bhattacharyya's critique of Western motivation models in LMIC contexts as Counterpoint, and her own synthesis note arguing for a hybrid framework as the Synthesis.

She assembles the transclusion draft in 22 minutes. Rewriting for flow takes 45 minutes. Total: 67 minutes for a 900-word subsection she estimated would take a full day. The Dataview citation check surfaces two notes missing publication years — she fixes them in Zotero before exporting. The Pandoc export produces a clean Word document with APA 7 citations intact. She sends it to her supervisor that afternoon.

---

Worksheet: The Mosaic Assembly Template

Use this template to assemble one section of your current writing project. Time yourself honestly.

---

Section I: Project Identification

```
Writing project: _______________________________________________
Target section/chapter: ________________________________________
Estimated word count for this section: _________________________
Your estimate: how long would this take writing from scratch? ___
```

Section II: MOC Construction

```
MOC note title: LitReview_MOC_[________________]

Relevant tags to filter in your vault:
Tag 1: ________________________________________________________
Tag 2: ________________________________________________________
Tag 3: ________________________________________________________

Total notes linked to MOC: _____________________________________
```

Section III: Thematic Clustering

```
Cluster 1 (Subsection heading): ________________________________
Notes included: ____________________________________________
Note count: ___

Cluster 2 (Subsection heading): ________________________________
Notes included: ____________________________________________
Note count: ___

Cluster 3 (Subsection heading): ________________________________
Notes included: ____________________________________________
Note count: ___
```

Section IV: 4-Move Structure Map (per subsection)

```
Subsection: ___________________________________________________
Move 1 — Claim note: __________________________________________
Move 2 — Evidence notes (list citekeys): _______________________
Move 3 — Counterpoint note: ____________________________________
Move 4 — Synthesis note: _______________________________________
```

Section V: Assembly Log

```
Time to build MOC and cluster: _____________ minutes
Time to sequence argument: _____________ minutes
Time to assemble transclusion draft: _____________ minutes
Time to rewrite for flow: _____________ minutes
TOTAL actual time: _____________ minutes
Compared to your scratch estimate: saved _____________ minutes

Citation integrity issues found: _______________________________
Export format used: [ ] Word [ ] LaTeX [ ] PDF
Export successful: [ ] Yes [ ] No — issue: ____________________
```

---

Quick Checklist

[ ] Literature Review MOC created with all relevant concept and literature notes linked
[ ] Notes grouped into thematic clusters that map to subsection headings
[ ] Each subsection sequenced using the 4-move structure (Claim → Evidence → Counterpoint → Synthesis)
[ ] Transclusion draft assembled in a dedicated `DRAFT_` note using `![[]]` syntax
[ ] Transclusion draft rendered in Reading Mode and reviewed for logical gaps
[ ] Dataview citation integrity query run — all notes have complete bibliographic metadata
[ ] Pandoc export executed with correct `.bib` file and CSL style
[ ] Exported document spot-checked: three random citations verified against Zotero library
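As a reference point, here is a minimal sketch of what one subsection of a `DRAFT_` note looks like when assembled with the 4-move structure. The note titles are hypothetical placeholders, not part of the method:

```markdown
## 2.1 [Subsection heading]

![[Claim — social capital is network-contingent]]
![[Evidence — @portes1998 on bounded solidarity]]
![[Counterpoint — @ryan2011 on the limits of network ties]]
![[Synthesis — toward a relational account]]
```

Switch to Reading Mode and the embedded notes render in sequence as a continuous draft.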

---

Common Mistakes

1. Building the MOC from scratch instead of from existing notes — Researchers open the MOC note and start typing new content instead of linking notes they've already written. This defeats the entire method and turns the MOC into a second draft document. → Fix: Your MOC should contain only links and headers — no prose. If you're writing sentences in the MOC, stop and ask whether that content belongs in an atomic note you can link instead.

Chapter 7: Multi-Project Management: Running Parallel Research Streams Without Losing Your Mind

You've built your vault, you're processing literature with PRISM, and your concept notes are multiplying — but now you're staring at four simultaneous deadlines: a dissertation chapter draft, a co-authored methods paper, a grant proposal, and next week's seminar prep. The anxiety isn't about not knowing the material. It's about not knowing which material belongs where and whether anything critical is slipping through the cracks.

The Research Portfolio Dashboard™

The Research Portfolio Dashboard™ is a project management layer built inside your vault — not alongside it in Notion, not in a separate spreadsheet, not in a second Obsidian vault. Everything lives in one place because that's where the leverage is. Here's the five-step architecture.

Step 1: The One Vault Principle

Resist the urge to create a separate vault for your dissertation and another for your teaching prep. This feels organized but destroys the single most valuable feature of your Second Brain: cross-pollination. When your note on "methodological triangulation" lives in one vault, it can be linked from your dissertation chapter MOC, your co-authored methods paper MOC, and your grant proposal MOC simultaneously. Split the vault and you're copying notes, maintaining duplicates, and — inevitably — creating the same fragmentation problem you had before Obsidian.

Step 2: Project MOC Architecture

Every active project gets a dedicated Map of Content (MOC) note. This is the hub note you return to every time you touch that project. Create it in your `Projects/Active/` folder using this standard structure:

Status line: `Status:: Active | Stalled | Under Review | Submitted`
Core question block: 2–4 sentences defining what this project is trying to answer or produce
Key deadlines: hardcoded dates, not vague references
Source list: links to the literature notes most central to this project
Draft links: links to any in-progress writing notes or exported drafts
Concept connections: links to atomic concept notes that feed this project

The MOC is not where you write — it's where you navigate. Keep it lean and functional.

Step 3: Dataview-Powered Project Dashboards

If you installed the Dataview plugin in Chapter 2, this is where it pays off. Add the following queries to a master `Research Portfolio Dashboard` note in your vault root.

Notes per project (tagged by project):

```dataview
TABLE length(rows) AS "Note Count"
FROM #project/dissertation-ch3 OR #project/methods-paper
GROUP BY tags
```

Recently modified project notes:

```dataview
TABLE file.mtime AS "Last Modified", status AS "Status"
FROM "Projects/Active"
SORT file.mtime DESC
LIMIT 15
```

Orphan notes needing integration (no outgoing links):

```dataview
LIST
FROM "Literature Notes"
WHERE length(file.outlinks) = 0
SORT file.ctime ASC
```

Approaching deadlines:

```dataview
TABLE deadline AS "Due", project AS "Project"
FROM "Projects/Active"
WHERE deadline <= date(today) + dur(30 days)
SORT deadline ASC
```

Tag every literature note and concept note with the project(s) it serves — `#project/dissertation-ch3`, `#project/grant-nsf-2025` — and these queries populate automatically. Your dashboard becomes a live status board that updates every time you work in the vault.
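For concreteness, a literature note that feeds two projects only needs its tags declared once. A minimal frontmatter sketch (the title is hypothetical):

```markdown
---
title: "Migrant Labor Networks in Lagos"
tags: [literature-note, project/dissertation-ch3, project/grant-nsf-2025]
---
```

The note-count query above will now count this note toward both projects.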

Step 4: The Shared Concept Advantage

This is the compounding return that justifies the entire system. When you wrote your atomic note on "methodological triangulation" back in Chapter 4, you probably wrote it in the context of one paper. Now tag it with every project it serves. A single well-developed concept note — with your synthesis, the key citations, and your critical commentary — can be linked from three different Project MOCs without any duplication. When you update the concept note with a new source, all three projects benefit instantly. This is the difference between a filing system and a thinking system.

Step 5: Academic Calendar Integration

Your vault doesn't exist in a vacuum. In each Project MOC, add a `deadline` property in YAML frontmatter (`deadline: 2025-03-15`). Then build a separate `Academic Calendar` note that aggregates conference submission windows, journal special issue deadlines, grant cycles, and chapter due dates. Cross-reference this with your Dataview deadline query. Review it every Friday.
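One way to drive that `Academic Calendar` note, sketched under the assumption that every deadline-bearing note lives in `Projects/Active` and carries the `deadline` property:

```dataview
TABLE deadline AS "Due", project AS "Project", status AS "Status"
FROM "Projects/Active"
WHERE deadline
SORT deadline ASC
```

Unlike the 30-day dashboard query, this one lists every dated commitment, so conference windows and grant cycles months away stay visible.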

The Weekly Research Review Ritual

Every Friday, block 45 minutes. This is non-negotiable protected time. The session follows this exact sequence:

Minutes 1–5: Open your Research Portfolio Dashboard. Scan the Dataview tables. Note anything flagged.
Minutes 6–15: Visit each active Project MOC. Update the status line. Add any new sources processed this week. Note any concept notes created that should be linked here.
Minutes 16–25: Review your orphan notes query. Pick 3–5 unlinked literature notes and connect them to at least one concept note or Project MOC.
Minutes 26–35: Check the 30-day deadline view. Identify the single most important task for each active project next week. Write it directly into the Project MOC under a `## Next Action` header.
Minutes 36–45: Scan your concept notes created this week. Ask: which other projects does this concept serve? Add the relevant project tags.

This ritual prevents the vault from becoming a beautiful archive that never gets used. It keeps every project in active working memory without requiring you to hold it all in your head.

---

Real-World Example

Priya Nair is a third-year sociology PhD student running four simultaneous workstreams: her dissertation Chapter 3 on immigrant labor networks, a co-authored paper on survey methodology with her advisor, a fellowship application due in six weeks, and weekly seminar reading for a graduate course she's TAing.

Before the Research Portfolio Dashboard, Priya kept a Notion board for her dissertation, a shared Google Doc for the co-authored paper, and a sticky note system for seminar prep. Her fellowship application existed only in her head and a half-finished Word document.

After implementing the system: Priya has four Project MOCs in her `Projects/Active/` folder. Her Dataview dashboard shows her that Chapter 3 has 34 linked literature notes, the methods paper has 12, the fellowship has 6 (flagged as underdeveloped), and seminar prep has 8. The orphan query surfaces 11 literature notes she processed two months ago that were never linked to anything — three of them turn out to be directly relevant to her fellowship application's theoretical framing.

Her concept note on "social capital in migrant communities" — originally written for Chapter 3 — is now tagged `#project/dissertation-ch3`, `#project/fellowship-2025`, and `#project/seminar-week9`. When she adds a new source to that note on a Tuesday afternoon, it enriches all three workstreams simultaneously. Her Friday review takes 40 minutes and leaves her with a clear, prioritized task list for each project. The fellowship application, previously a source of background dread, now has a visible note count, a deadline in the Dataview query, and a next action: "Draft theoretical framework section using social capital note + 3 linked sources."

---

Worksheet: The Portfolio Setup Sprint

Complete this in one focused 90-minute session. Do not skip steps.

Part 1: Project Inventory

List every active or upcoming project. Be exhaustive — include things you've been avoiding.

| Project Name | Type (dissertation/paper/grant/teaching) | Deadline | Current Status |
|---|---|---|---|
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |

Part 2: Project MOC Creation

For each project above, create a MOC note in `Projects/Active/` using this template:

```markdown
---
project: [Project Name]
status: Active
deadline: YYYY-MM-DD
tags: [project/tag-name]
---

## Core Question

[What is this project trying to answer or produce? 2–4 sentences.]

## Key Deadlines

- [ ] [Milestone 1] — [Date]
- [ ] [Milestone 2] — [Date]
- [ ] Final submission — [Date]

## Central Sources

- [[Literature Note 1]]
- [[Literature Note 2]]
- [[Literature Note 3]]

## Concept Connections

- [[Concept Note 1]]
- [[Concept Note 2]]

## Draft Links

- [[Draft: Introduction]]

## Next Action

[One specific task for this project this week]
```

Part 3: Shared Concept Identification

List 5 concept notes already in your vault (or that should exist) that serve multiple projects. For each, list which projects benefit.

| Concept Note Title | Project 1 | Project 2 | Project 3 |
|---|---|---|---|
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |
| __________________ | __________________ | __________________ | __________________ |

Part 4: Dashboard Installation

[ ] Created `Research Portfolio Dashboard` note in vault root
[ ] Installed all four Dataview queries from Step 3 above
[ ] Verified queries return results (if empty, check tag syntax)
[ ] Added `deadline` property to all active Project MOCs

Part 5: First Weekly Research Review

Complete your first 45-minute Friday review session using the Weekly Research Review sequence above. After completing it, answer:

Which project is most underdeveloped in the vault right now? ______________________
How many orphan notes did you find and link? ______________________
What is your single most important vault task next week? ______________________

---

Quick Checklist

[ ] Every active project has a dedicated MOC note in `Projects/Active/`
[ ] Every Project MOC includes a `deadline` YAML property readable by Dataview
[ ] The master Research Portfolio Dashboard note contains all four Dataview queries from Step 3

Chapter 8: Vault Maintenance and Long-Game Compounding: Your System 6 Months from Now

You've built the architecture, populated it with literature notes, extracted concepts, and started seeing connections you never saw before. The question now isn't whether the system works — it's whether you will keep it working when a grant deadline hits, a committee meeting derails your week, and your vault sits untouched for three weeks.

This chapter is about making sure that never becomes a crisis.

---

The Compound Knowledge System™

Every knowledge system obeys a law that no one warns you about: Vault Entropy. Left unattended, even a beautifully structured vault degrades. Orphan notes accumulate. Tags drift. Templates get modified inconsistently. Projects stall mid-capture. The connections that made your system feel alive start to feel like noise.

The Compound Knowledge System™ is a four-tier maintenance and growth protocol designed specifically for the rhythms of academic life — where you might read intensively for two weeks, then go dark during conference travel, then resurface during a writing sprint. It doesn't require daily discipline. It requires strategic intervals.

Tier 1: The Weekly 20-Minute Tidy (Every Sunday or Friday)

Set a recurring calendar block. No longer than 20 minutes. The goal is not deep work — it's preventing accumulation.

1. Open your `_INBOX` folder. Process every note that landed there during the week: file it, tag it, link it to at least one existing note.
2. Check your `Projects/Active` folder. For any project you haven't touched in 7+ days, add a one-line status note to the project file: what's stalled, what's next.
3. Scan your daily notes (if you use them) for any flagged ideas — anything marked `#toprocess` or `#seedidea` — and either promote them to concept notes or delete them deliberately.

That's it. Twenty minutes, three actions.

Tier 2: The Monthly Vault Health Audit (First Weekend of Each Month)

This is where you catch systemic drift before it becomes structural damage. Run these four Dataview queries — paste them into a dedicated `Vault Health Dashboard` note:

```dataview
LIST
FROM ""
WHERE length(file.inlinks) = 0 AND length(file.outlinks) = 0
SORT file.mtime ASC
```

This surfaces orphan notes — notes with no connections in either direction. These are your biggest entropy risk.

```dataview
LIST
FROM #literature
WHERE !contains(file.tags, "#status/processed")
SORT file.ctime ASC
```

This finds literature notes you captured but never fully processed through the PRISM protocol from Chapter 3.

```dataview
LIST
FROM #concept
WHERE file.mtime < date(today) - dur(90 days)
SORT file.mtime ASC
```

Concept notes untouched for 90+ days are candidates for either promotion to evergreen status or archiving.

```dataview
TABLE file.tags AS "Tags", file.mtime AS "Last Modified"
FROM #project/active
SORT file.mtime ASC
```

Active projects that haven't been modified recently are stalling — surface them before they disappear entirely.

Spend 45-60 minutes acting on what these queries surface. Link three orphans. Process two stale literature notes. Promote one concept note.

Tier 3: The Maturation Pipeline (Ongoing, Tracked Monthly)

Your notes exist on a developmental spectrum. The Compound Knowledge System™ formalizes this as three stages, each requiring a different kind of attention:

Seed Notes (`status/seed`): Raw captures — a quote, a half-formed reaction, a term you need to look up. These should never stay seeds for more than 30 days.
Developing Notes (`status/developing`): Notes with at least 3 outbound links, your own synthesis paragraph, and a clear question the note is trying to answer. These are your intellectual workhorses.
Evergreen Notes (`status/evergreen`): Stable, densely linked, written in your own voice as a durable claim or argument. These are the notes that generate papers, talks, and grant proposals.

During your monthly audit, promote at least two notes up the pipeline. The goal by month six: 40+ evergreen notes. That's the threshold where literature reviews start assembling themselves.
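To see where the pipeline stands during the monthly audit, a per-stage count can be generated with a sketch like this (it assumes the `status/...` markers above are applied as tags rather than frontmatter fields):

```dataview
TABLE length(rows) AS "Count"
FROM #status
FLATTEN file.tags AS stage
WHERE startswith(stage, "#status/")
GROUP BY stage AS "Stage"
```

Watching the evergreen count climb month over month is the clearest signal that the system is compounding.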

Tier 4: The Quarterly Deep Restructure (4x Per Year)

Once per quarter, spend 2-3 hours on structural maintenance: rename tags that have drifted, consolidate duplicate concept notes, archive completed projects, and review your ATLAS folder structure (from Chapter 2) to see if it still reflects how you actually think. Your intellectual architecture should evolve with your research.

---

Real-World Example

Amara Osei is a third-year sociology PhD student studying urban informality in West African cities. By month two of using her vault, she had 87 literature notes and 34 concept notes. By month four, she hit a wall: her `_INBOX` had 23 unprocessed notes, she had four different tags for the same concept ("informal economy," "informality," "urban-informality," "#informal"), and three project files she hadn't opened in six weeks.

She ran the orphan query and found 19 notes with zero connections — mostly PDFs she'd imported from Zotero but never processed. She spent one Sunday afternoon linking them, and in doing so, noticed that three papers she'd considered tangentially related were actually making the same methodological argument from different disciplinary angles. That observation became the framing for her second dissertation chapter.

At month six, Amara had 312 notes, 61 evergreen notes, and had converted four of them into a conference paper abstract that was accepted on first submission. The compounding had started.

---

Preparing Your Vault for Career Transitions

Your vault will outlive your current institutional affiliation. Plan for it explicitly.

Qualifying Exams: Six weeks before your exam date, run a full audit of every concept note tagged with your exam fields. Identify gaps — concepts you've encountered but never synthesized. Use your evergreen notes as the basis for practice essays. Your vault is essentially a closed-book exam simulator.

Dissertation Defense: Create a `Dissertation/` project folder that maps your chapters to specific clusters of evergreen notes. Every claim in your dissertation should trace back to at least one evergreen note. This makes revision surgical rather than catastrophic.

Job Market: Your vault contains your intellectual identity. Before writing job letters, spend two hours mapping your evergreen notes into thematic clusters — these become your research agenda statements. Your teaching philosophy can draw directly from concept notes on pedagogy and methodology.

New Institution: Export your vault as a portable folder. When you arrive somewhere new, your intellectual infrastructure arrives with you. Archive your old institution's administrative notes; keep everything intellectual. Your vault is the one thing that doesn't get lost in the move.

---

Building Your Vault into a Public-Facing Asset

At 40+ evergreen notes, you have enough intellectual material to begin publishing in public. This isn't a distraction from your research — it is your research, distributed.

Blog posts: Take one evergreen note with 5+ outbound links. Expand each linked concept into a paragraph. Add an introduction and conclusion. You have a 1,200-word post.

Twitter/X threads: Take the central claim of an evergreen note. Each supporting link becomes one tweet. A 7-link evergreen note is a 9-tweet thread.

Conference presentations: A cluster of 8-10 evergreen notes on a shared theme is a 20-minute conference paper. The structure is already there — you're just adding transitions.

Book proposals: A dissertation chapter mapped to 15+ evergreen notes, with clear connections to 3-4 adjacent clusters, is the intellectual architecture of a book chapter. Do this for five chapters and you have a proposal.

The 1,000-note milestone changes something concrete: you stop searching for ideas and start selecting among them. Writing speed increases not because you type faster but because the thinking is already done. Researchers who hit this milestone consistently report that their biggest problem shifts from "I don't know what to write" to "I have too many directions to pursue." That is a good problem. You get there by following the Tier 1 and Tier 2 maintenance rhythms without exception and processing every paper you read — even partially — into at least a seed note the same day you read it.

---

Worksheet: The Vault Sustainability Plan

Section 1: Recurring Calendar Events

Set these up before you close this chapter. Literally open your calendar now.

| Session Type | Frequency | Duration | Day/Time I'll Use | Calendar Event Created? |
|---|---|---|---|---|
| Weekly Tidy | Weekly | 20 min | _________________ | ☐ |
| Monthly Audit | Monthly | 60 min | _________________ | ☐ |
| Quarterly Restructure | Quarterly | 2-3 hrs | _________________ | ☐ |

Section 2: Dataview Audit Setup

Create a note titled `Vault Health Dashboard` in your `_META` or `_SYSTEM` folder. Paste all four queries from Tier 2 above. Then answer:

Date I created this note: _________________
First audit scheduled for: _________________
Number of orphan notes found on first run: _________________
Number of unprocessed literature notes found: _________________

Section 3: My 6-Month Vault Growth Roadmap

Fill in your current state and targets tied to your actual academic calendar:

| Month | Note-Count Target | Evergreen Target | Key Academic Event This Month |
|---|---|---|---|
| Month 1 (now) | Current: _______ | Current: _______ | _________________ |
| Month 2 | _______ | _______ | _________________ |
| Month 3 | _______ | _______ | _________________ |
| Month 4 | _______ | _______ | _________________ |
| Month 5 | _______ | _______ | _________________ |
| Month 6 | _______ | _______ | _________________ |

Section 4: Public-Facing Content Pipeline

Identify 3 evergreen notes you already have (or will have within 30 days) that are ready to be repurposed:

| Evergreen Note Title | Target Format | Target Platform/Venue | Target Date |
|---|---|---|---|
| _________________ | Blog post / Thread / Talk / Proposal | _________________ | _________________ |
| _________________ | Blog post / Thread / Talk / Proposal | _________________ | _________________ |
| _________________ | Blog post / Thread / Talk / Proposal | _________________ | _________________ |

---

Bonus Materials

---

🎁 Bonus #1: The Academic Obsidian Starter Vault

A pre-built, download-and-go vault so you never stare at a blank screen again

---

Ready-to-Use Templates

---

#### Template 1: PRISM Literature Note

For every paper you read — structured to capture what matters for research, not just what the paper says

```markdown
---
title: "{{title}}"
authors: [{{authors}}]
year: {{year}}
citekey: {{citekey}}
journal: "{{journal}}"
doi: "{{doi}}"
tags: [literature-note, {{discipline}}, {{status}}]
status: "unread | skimming | read | processed | cited"
priority: "high | medium | low"
date-added: {{date}}
date-processed:
project: "[[{{project}}]]"
theoretical-framework:
methodology:
---

## P — Premise

What is the central claim or argument of this paper? (1–2 sentences max)

## R — Relevance

Why does this paper matter to MY research specifically? What gap does it address?

## I — Insights

The 3–5 ideas I want to remember and use. Write in my own words.

1.
2.
3.
4.
5.

## S — Surprises & Tensions

What did this paper claim that I didn't expect, disagree with, or that contradicts another source?

- Tension with [[]] because:
- Tension with [[]] because:

## M — My Response

My original thoughts, questions, and reactions. What does this make me want to investigate next?

---

## Key Quotes (with page numbers)

- "..." (p. )
- "..." (p. )

## Methodology Notes

How did they do it? Sample size, methods, limitations I should cite or critique?

## Cited By / Cites

- This paper cites: [[]], [[]], [[]]
- This paper is cited by: [[]], [[]], [[]]

## Connections

Link to concept notes, other papers, research questions, and project MOCs

- Related papers: [[]], [[]]
- Concept notes this feeds: [[]], [[]]
- Research questions this informs: [[RQ — ]]
- Project MOC: [[MOC — ]]
```

---

#### Template 2: Atomic Concept Note

For theoretical constructs, recurring terms, and ideas that appear across multiple papers — the building blocks of your intellectual network

````markdown
---
title: "{{concept-name}}"
aliases: [{{alternative-terms}}]
tags: [concept-note, {{discipline}}, {{theoretical-framework}}]
date-created: {{date}}
date-updated:
maturity: "seedling | budding | evergreen"
---

## Definition

My working definition in my own words — not copy-pasted from a source

## Origin & Intellectual History

Who coined this? How has the definition evolved? Where is it contested?

- First introduced by [[]] in [[]]
- Developed further by [[]] in [[]]
- Contested by [[]] who argues...

## How I Use This Concept

In the context of my specific research, what does this concept help me explain or analyze?

## Distinctions & Confusions

What does this concept get confused with? How is it different from [[related-concept]]?

| This Concept | [[Related Concept]] |
|---|---|
| | |
| | |

## Sources That Use This Concept

```dataview
LIST
FROM [[]]
WHERE contains(theoretical-framework, this.file.name)
SORT year ASC
```

## My Developing Argument About This Concept

What do I actually think about this? Where do I agree/disagree with the literature?

## Open Questions

- [ ]
- [ ]

## Links

- Parent concept: [[]]
- Child concepts: [[]], [[]]
- Opposing concepts: [[]], [[]]
- Papers: [[]], [[]], [[]]
````

---

#### Template 3: Project MOC (Map of Content)

The command center for each research project — links every relevant note, tracks progress, and assembles your literature review scaffolding

````markdown
---
title: "MOC — {{project-name}}"
tags: [MOC, project, {{status}}]
status: "planning | active | writing | submitted | published"
deadline: {{deadline}}
target-journal: "{{journal}}"
word-count-target:
date-created: {{date}}
collaborators: []
---

## Project Overview

One paragraph: What is this paper/chapter/dissertation arguing? What is the intervention?

## Core Research Question

[[RQ — {{question}}]]

## Central Argument (Working Thesis)

## ATLAS Structure for This Project

### 📚 Literature Base

All papers directly relevant to this project

```dataview
TABLE authors, year, status
FROM "ATLAS/Literature"
WHERE contains(project, this.file.name)
SORT year ASC
```

### 🧠 Concept Network

Theoretical concepts this project depends on

- [[Concept — ]]
- [[Concept — ]]
- [[Concept — ]]

### 🔗 Argument Chain

[[ArgChain — {{project-name}}]]

### 📊 Methodology

[[Method — {{methodology}}]]

---

## Literature Review Sections

### Section 1: {{section-title}}

Argument this section makes:

Key sources: [[]], [[]], [[]]

Gaps I'm addressing:

### Section 2: {{section-title}}

Argument this section makes:

Key sources: [[]], [[]], [[]]

Gaps I'm addressing:

### Section 3: {{section-title}}

Argument this section makes:

Key sources: [[]], [[]], [[]]

Gaps I'm addressing:

---

## Writing Progress

| Section | Status | Word Count | Notes |
|---|---|---|---|
| Introduction | | | |
| Lit Review | | | |
| Methods | | | |
| Results/Analysis | | | |
| Discussion | | | |
| Conclusion | | | |

## Submission Checklist

- [ ] Abstract submitted
- [ ] IRB approval (if applicable)
- [ ] Co-author sign-off
- [ ] Reference list formatted in {{style}}
- [ ] Supplementary materials attached

## Open Questions & Blockers

- [ ]
- [ ]

## Idea Parking Lot

Ideas that emerged during this project that don't fit here but shouldn't be lost
````

---

#### Template 4: Methodology Comparison Note

For systematically comparing how different papers approach the same research problem — essential for methods sections and for identifying gaps

````markdown
---
title: "MethodComp — {{topic}}"
tags: [methodology, comparison, {{discipline}}]
date-created: {{date}}
project: "[[MOC — ]]"
---

## What I'm Comparing

The specific methodological question or problem I'm mapping across the literature

## Comparison Matrix

| Paper | Method | Sample/Data | Key Strength | Key Limitation | Relevance to My Work |
|---|---|---|---|---|---|
| [[]] | | | | | |
| [[]] | | | | | |
| [[]] | | | | | |
| [[]] | | | | | |
| [[]] | | | | | |

## Dominant Approaches in This Literature

What methods does this field default to and why?

## Methodological Debates

Where do scholars disagree about the right approach?

- Debate 1: [[]] argues ___ vs. [[]] who argues ___
- Debate 2:

## Gaps & Opportunities

What methodological approach has NOT been tried that could yield new insights?

## My Methodological Choices

Given the above, what am I doing and how do I justify it against this landscape?

## Key Methodological Sources to Cite

```dataview
LIST
FROM "ATLAS/Literature"
WHERE contains(tags, "methodology") AND contains(project, [[]])
SORT year ASC
```
````

---

#### Template 5: Research Question Incubator

Where half-formed hunches become rigorous research questions — the most important template most researchers never build

```markdown
---
title: "RQ — {{question-shorthand}}"
tags: [research-question, {{status}}, {{discipline}}]
status: "raw-hunch | developing | viable | active | published | abandoned"
date-created: {{date}}
date-last-reviewed:
sparked-by: "[[]]"
project:
---

## The Raw Hunch

Write the idea exactly as it occurred to you — messy, incomplete, that's fine

## The Refined Question

After reflection: What exactly am I asking? Is it empirical, theoretical, or normative?

## Why This Question Matters

So what? Who cares? What would change in the field if this were answered?

## What We Already Know

Brief survey of existing answers — and why they're incomplete

- [[]] argues:
- [[]] argues:
- The gap is:

## What I Would Need to Answer This

Data, methods, theoretical framework, access, time

- [ ]
- [ ]
- [ ]

## Feasibility Assessment

Honest evaluation: Can I actually do this? In what timeframe?

| Factor | Assessment |
|---|---|
| Data availability | |
| Methodological expertise | |
| Time required | |
| Novelty/contribution | |
| Fit with my dissertation/agenda | |

## Connected Questions

What other questions does this generate or depend on?

- [[RQ — ]]
- [[RQ — ]]

## Notes from Conversations

What did my advisor/peers say when I mentioned this?

## Decision Log

Why I'm pursuing, pausing, or abandoning this question

{{date}}:
```

---

Quick-Start Scripts

---

#### Script 1: The "Cold Vault" First-Day Setup Email to Yourself

---

About This Product

The definitive system for academic researchers to build a networked knowledge base in Obsidian that turns years of scattered papers, notes, and ideas into a living intellectual engine that accelerates literature reviews, surfaces novel connections, and produces publishable insights faster.

This product was designed for: PhD students (years 2-5) and early-career researchers (postdocs, assistant professors) in humanities, social sciences, or STEM fields who are drowning in 200+ saved PDFs they've half-read, scattered annotations across Zotero/Mendeley/Google Docs/physical notebooks, and feel constant anxiety that they're forgetting critical arguments or missing connections between papers. They've heard about Obsidian and maybe even installed it, but stare at an empty vault with no idea how to structure it for academic work specifically. Their desired outcome: a trusted, searchable, interconnected system where every paper they read compounds into deeper understanding and where literature review sections practically write themselves.

Your transformation: FROM: Spending 3-4 hours re-reading papers they've already annotated because they can't find or remember their notes, feeling paralyzed during literature reviews, and losing original ideas that emerge between readings → TO: A fully operational Obsidian vault with 100+ interconnected literature notes where any concept can be traced across sources in under 60 seconds, literature review drafts assembled from existing atomic notes in a single afternoon, and a personal idea incubator that surfaces novel research questions they would never have seen otherwise.

AI Cover Image

Print-Ready in Seconds

Generated with DALL-E 3. No design tools needed.

Pinterest Pins

5 Pins, Ready to Publish

1200×1800 optimized images generated with Puppeteer HTML rendering.

Pin 1: Free: 25-Minute Literature Pipeline
Pin 2: Stop Losing Research Ideas to Scattered Notes
Pin 3: 12 Academic Templates + 25 Dataview Queries
Pin 4: Literature Reviews: Days → Weeks Saved
Pin 5: Obsidian Vault → Interconnected Knowledge Base
Sales Copy

Marketplace-Ready Copy

Sales page preview

Your literature review shouldn't take 6 weeks. Here's the system that makes it take 6 days.

Primary hook

Every PhD student loses brilliant ideas to scattered notes and broken workflows. This vault makes sure you never lose another one.

Built by researchers, for researchers — not recycled from a productivity influencer's morning routine.

Description

You know that sinking feeling — three browser tabs of half-read papers, a Zotero library that's basically a graveyard, and a blank document where your literature review should be. You're not disorganized. You're using tools built for the wrong person. The Academic Obsidian Blueprint was designed for the specific, relentless demands of research life: tracking arguments across dozens of papers, spotting theoretical gaps, managing a dissertation while a conference deadline looms. When your knowledge base actually reflects how academic thinking works — connected, layered, cumulative — literature reviews stop feeling like excavation and start feeling like conversation. Your ideas stop disappearing. Your writing accelerates. And for the first time, your notes work as hard as you do.

What's Included
  • Process any PDF into a permanent, linked knowledge asset in 25 minutes flat — using a repeatable pipeline that compounds in value the longer you use it
  • Run literature reviews in days instead of weeks with pre-built Dataview queries that surface papers by framework, flag orphan notes, and expose citation gaps automatically
  • Never lose a research idea again — the Research Question Incubation method systematically mines your existing vault to generate original connections you'd never find manually
  • Manage your dissertation, conference papers, and grant applications in parallel without losing context — using multi-project strategies built for real academic chaos
  • Eliminate Zotero frustration forever with a full integration masterclass covering Better BibTeX setup, plugin configuration, and fixes for the 8 most common import errors
  • Hit the ground running with a Pre-built Academic Starter Vault — ATLAS folder structure, configured queries, and 15 interconnected sample notes showing exactly how the system works
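
As a taste of the queries described above, the "flag orphan notes" check can be sketched in Dataview roughly as follows (the folder name "Literature Notes" is an assumption; point it at wherever your vault keeps literature notes):

```dataview
LIST
FROM "Literature Notes"
WHERE length(file.inlinks) = 0 AND length(file.outlinks) = 0
```

Pasted into a dataview code block in any note, this lists every literature note with no incoming or outgoing links, i.e. reading that has not yet been connected to the rest of the vault.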
$47
One-time · Instant delivery
Create Yours Free

This entire product — 76 chapters, 14,000+ words, cover image, sales copy, and Pinterest pins — was created by AI in minutes.

Not days. Not weeks. Minutes.

Try Kupkaike Free — 20 Credits →
🧁

Your Turn to Bake.

Everything on this page was generated from a single niche idea. No design skills. No copywriting. No code. Just your idea — and Kupkaike does the rest.

Free account includes 20 cupcakes · No credit card required