The Problem:
Teams make dozens of decisions every week — architecture choices, vendor selections, hiring calls, product bets — but almost never go back to ask: "Were we right?" That institutional knowledge silently ages in Notion pages, never closing the feedback loop.
The Solution:
I built the Notion Decision Intelligence Engine — an AI agent that transforms your Notion workspace from a passive wiki into a self-auditing organizational memory. It doesn't just record decisions. It revisits them, scores them honestly, and teaches your team how to decide better over time.
The entire system runs through Notion MCP. Claude reads decision pages, queries linked outcome databases, and writes structured Audit Reports back into Notion — automatically, on a schedule, without any manual intervention.
Show us the code
GitHub: https://github.com/sushilkulkarni1389/notion-decision-engine.git
How It Works
The Core Loop
Decision Logged → Structured in Notion DB → Review Date Set
↓
Outcomes Tracked (manually in Outcome Tracker)
↓
Agent wakes up at 8am on Review Date (via node-cron)
↓
Reads Decision + Outcomes from Notion via MCP
↓
Claude generates Audit (process score, outcome score, insights)
↓
Audit page written back to Notion via MCP
↓
Monthly Pattern Report aggregates all audits on 1st of month
The agent runs as a persistent background process (via PM2). Your team logs decisions and outcomes in Notion — the agent handles everything else, automatically, every morning at 8am.
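The scheduling described above can be sketched in a few lines. This is a hedged sketch, not the repo's actual code: the cron expressions follow from the stated schedule (8am daily, 1st of the month), and `isDueForAudit` is a hypothetical helper showing how the daily sweep might decide which decisions to audit.

```javascript
// Cron expressions implied by the schedule described above.
const DAILY_AUDIT_CRON = '0 8 * * *';    // every day at 08:00
const MONTHLY_REPORT_CRON = '0 8 1 * *'; // 08:00 on the 1st of each month

// Hypothetical helper: is a decision's review date today (UTC)?
function isDueForAudit(reviewDateISO, now = new Date()) {
  const review = new Date(reviewDateISO);
  return (
    review.getUTCFullYear() === now.getUTCFullYear() &&
    review.getUTCMonth() === now.getUTCMonth() &&
    review.getUTCDate() === now.getUTCDate()
  );
}

// With node-cron, the wiring would look roughly like:
//   const cron = require('node-cron');
//   cron.schedule(DAILY_AUDIT_CRON, runDailyAudits);
//   cron.schedule(MONTHLY_REPORT_CRON, runMonthlyReport);
// PM2 keeps the process alive so these schedules fire unattended.
```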
Structured Decision Capture
Log a decision in plain text from the terminal:
node src/index.js capture "We decided to switch from Jenkins to GitHub Actions.
Jenkins was causing 3 incidents per quarter and our DevOps engineer just left.
We considered CircleCI and GitLab CI but the team already uses GitHub.
Assuming migration takes 2 weeks and costs under $200/month.
Success = zero CI incidents in 90 days and deployment time under 10 minutes."
Claude extracts the structure — decision, context, alternatives, key assumptions, expected outcome, domain, confidence level — and creates a fully populated page in the Notion Decision Log database. Review date is auto-calculated (30/60/90 days).
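A minimal sketch of the last step, turning Claude's extracted fields into a Notion page payload with the auto-calculated review date. The property names here (`Decision`, `Domain`, `Confidence`, `Review Date`, `Status`) are assumptions about the Decision Log schema, not confirmed from the repo:

```javascript
// Hypothetical payload builder; property names are assumed, so match them
// to your actual Decision Log database schema.
function buildDecisionPage(databaseId, extracted, reviewDays = 90, now = new Date()) {
  // Auto-calculate the review date (30/60/90 days out).
  const review = new Date(now.getTime() + reviewDays * 24 * 60 * 60 * 1000);
  return {
    parent: { database_id: databaseId },
    properties: {
      Decision: { title: [{ text: { content: extracted.decision } }] },
      Domain: { select: { name: extracted.domain } },
      Confidence: { select: { name: extracted.confidence } },
      'Review Date': { date: { start: review.toISOString().slice(0, 10) } },
      Status: { select: { name: 'Awaiting Review' } },
    },
  };
}

// With @notionhq/client, the agent would then call:
//   await notion.pages.create(buildDecisionPage(DECISION_DB_ID, extracted, 30));
```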
Outcome Tracking
As results emerge, team members log outcomes in the Notion Outcome Tracker database. Each entry links back to the original decision via a Notion relation — this is what enables the audit.
No special tooling required. It's just a Notion database row.
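Teammates normally add these rows by hand in Notion, but for anyone wiring up programmatic logging, an outcome row reduces to a small API payload. Property names (`Outcome`, `Decision`) are assumptions about the Outcome Tracker schema; the `Decision` relation is the link the audit later joins on:

```javascript
// Hypothetical sketch of an Outcome Tracker row as a Notion API payload.
function buildOutcomeRow(outcomeDbId, decisionPageId, summary) {
  return {
    parent: { database_id: outcomeDbId },
    properties: {
      Outcome: { title: [{ text: { content: summary } }] },
      // The relation back to the original decision is what enables the audit.
      Decision: { relation: [{ id: decisionPageId }] },
    },
  };
}
```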
AI Decision Audit — The Key Insight
On the review date, the agent reads the decision and all linked outcomes through Notion MCP, then asks Claude to evaluate two separate things:
Process Score (1–10): Was the decision-making process sound at the time?
Were the right alternatives considered?
Were the assumptions reasonable given available information?
Was the expected outcome clearly defined?
Outcome Score (1–10): How good was the actual result?
Did outcomes match expectations?
What was the net impact?
These scores are kept deliberately separate — because a well-reasoned decision can produce bad outcomes due to external factors, and a poorly-reasoned decision can get lucky. The audit identifies which happened. That distinction is the most important insight the system produces.
The audit page Claude writes back to Notion includes:
Process Score and Outcome Score
Verdict: Right call / Wrong call / Mixed / Right call, wrong reasons
Failed assumptions (which beliefs proved incorrect)
Key insight (single most important learning)
Recommendation (what to do if this decision comes up again)
Full narrative retrospective (3–5 paragraphs, plain language)
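One practical detail worth making explicit: before writing the audit back to Notion, the agent needs Claude's response to match a fixed shape. The field names below are a guess at that shape based on the list above, with a small validator as a sketch of the guard an agent like this typically runs:

```javascript
// Assumed audit shape, derived from the fields listed above.
const AUDIT_FIELDS = [
  'process_score', 'outcome_score', 'verdict',
  'failed_assumptions', 'key_insight', 'recommendation', 'retrospective',
];

// Hypothetical guard: reject malformed model output before touching Notion.
function validateAudit(audit) {
  const missing = AUDIT_FIELDS.filter((f) => !(f in audit));
  if (missing.length) {
    throw new Error(`Audit missing fields: ${missing.join(', ')}`);
  }
  for (const score of [audit.process_score, audit.outcome_score]) {
    if (typeof score !== 'number' || score < 1 || score > 10) {
      throw new Error('Scores must be numbers between 1 and 10');
    }
  }
  return audit;
}
```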
Monthly Pattern Intelligence Report
On the 1st of each month, the agent aggregates all audits from the last 90 days, runs them through Claude, and generates a Monthly Pattern Report page in Notion. This isn't just averages — Claude looks for systematic biases across all decisions:
"Your team consistently underestimates human learning curves — every decision involving a technology migration assumed 2-week onboarding but reality was 4–6 weeks."
"Engineering decisions score significantly higher on both process and outcome than product decisions. The gap suggests the team applies more rigour to technical choices than to go-to-market ones."
This is where compounding value kicks in. One audit tells you about one decision. Twelve months of audits tell you how your team actually thinks.
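The numeric half of that aggregation (the part that surfaces the engineering-vs-product gap) can be sketched as a pure function over the last 90 days of audits. Field names (`domain`, `process_score`, `outcome_score`) are assumptions carried over from the audit structure described earlier; the narrative pattern-finding is Claude's job on top of this:

```javascript
// Hypothetical aggregation: per-domain averages that feed the pattern prompt.
function summarizeAudits(audits) {
  const byDomain = {};
  for (const a of audits) {
    const d = (byDomain[a.domain] ??= { n: 0, process: 0, outcome: 0 });
    d.n += 1;
    d.process += a.process_score;
    d.outcome += a.outcome_score;
  }
  return Object.fromEntries(
    Object.entries(byDomain).map(([domain, v]) => [
      domain,
      { count: v.n, avgProcess: v.process / v.n, avgOutcome: v.outcome / v.n },
    ])
  );
}
```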
How I Used Notion MCP
Notion MCP is the backbone of the entire system — not a convenience layer, but the reason this architecture is possible.
Reading structured context across linked databases
The audit agent does something that would be painful to build with raw REST calls: it reads a decision page and simultaneously queries a related database filtered by that page's ID, all in a few lines using the Notion client. This cross-database join is what gives Claude the full picture it needs.
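A sketch of that join, assuming `@notionhq/client` and that the Outcome Tracker's relation property is named "Decision" (both assumptions; the repo may differ). The filter uses the Notion API's standard `relation.contains` query:

```javascript
// Hypothetical filter builder for the Outcome Tracker query.
function outcomeFilterFor(decisionPageId) {
  return { property: 'Decision', relation: { contains: decisionPageId } };
}

// The cross-database join: fetch the decision page and its linked outcomes
// in parallel. `notion` is an instance of @notionhq/client's Client.
async function loadAuditContext(notion, decisionPageId, outcomeDbId) {
  const [decision, outcomes] = await Promise.all([
    notion.pages.retrieve({ page_id: decisionPageId }),
    notion.databases.query({
      database_id: outcomeDbId,
      filter: outcomeFilterFor(decisionPageId),
    }),
  ]);
  return { decision, outcomes: outcomes.results };
}
```

Everything Claude needs for the audit (the original reasoning plus what actually happened) comes back from this one call pair.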