Personal Project  ·  React  ·  Anthropic Claude API  ·  Replit

PRINCE2
Master

Role
Self-Initiated — Product Owner, Builder & PM
Duration
1 week — concept to exam-ready
Stack
React · Node.js · Claude API · Replit
Outcome
PRINCE2 Foundation — Passed
01 — The Problem

Every prep tool
was the wrong tool

The PeopleCert PRINCE2 Foundation exam is not a memory test. It is a 60-question, 60-minute applied reasoning paper written to exploit the specific gaps between studying PRINCE2 and actually understanding it. Generic flashcard apps could not replicate that.

The existing tools — ILX simulator, static question banks — either recycled the same questions until they were memorised by pattern rather than understood, or drilled low-value recall without targeting the confusion pairs that cost candidates 10 to 15 marks in the real exam.

  • 01 No tool mapped questions to the 48 official PeopleCert v7 learning outcomes — blind spots went undetected until the exam
  • 02 Static question banks rewarded pattern recognition, not concept mastery — right answers for the wrong reasons
  • 03 No weighted readiness tracking — it was impossible to know which weak topics would cost the most marks on exam day
  • 04 No wrong-answer recycling — questions answered incorrectly simply returned to the pool, unaddressed
  • 05 No tool targeted the 30+ known PRINCE2 confusion pairs — the exact traps that separate a 46 from a 56
PRINCE2 Master home screen with practice modes
60
Questions in full exam simulation
under real timed conditions
11
PRINCE2 topics covered
across the complete v7 syllabus
48
Official PeopleCert learning outcomes
tracked in the Syllabus Coverage Map
7
Study modes built
for every preparation scenario
PM Competencies Demonstrated
Problem Identification

Identified the specific failure mode of existing prep tools — pattern recognition without understanding — and defined the requirement gap before building anything. A PM identifies what is missing before proposing a solution.

Requirements Engineering

Mapped the PeopleCert mark scheme, identified the 30+ known confusion pairs, and reverse-engineered the BL1/BL2 difficulty split to define the question generation requirements before writing a prompt or a line of code.

Scope & Iteration

Started with a working drill tool and expanded to seven modes through structured iteration — each addition triggered by a specific identified gap, not by feature creep. Every change was a controlled scope decision.

Risk Management

API cost exposure was a live project risk. Mitigated through a server-side proxy, a capped Wrong Bank, and conservative token budgets. Risk was identified, assessed, and controlled — not discovered after the fact.

Governance & Access Control

A password-protected entry screen and a server-side API key proxy were built before the app was shared as a portfolio asset. Governance was not an afterthought — it was a deployment gate.

Technical Delivery

Debugged multi-select validation logic, resolved Claude-specific API dependencies for Replit deployment, and built a full server-side proxy to replace client-side API calls — hands-on technical ownership from build through to production.

Quality Assurance

Every AI-generated question passed a self-check against PeopleCert's question-writing rules — no keyword mirroring, no obvious distractors, unambiguous correct answers. Quality criteria were defined before testing began, not during it.

Benefits Realisation

The tool was not the objective. Passing the exam was. The calibrated score prediction, the weighted readiness model, and the syllabus coverage map all existed to drive one measurable outcome — which was achieved.

02 — Build & Iteration
01

Define the Requirements

Before any code: mapped the PeopleCert v7 mark scheme by topic weight, catalogued 30+ known confusion pairs from ILX simulator results, and defined question quality criteria mirroring PeopleCert's own standards. The specification preceded the build.

02

Build the Core Engine

Built the AI question generation pipeline first — system prompt engineering to enforce British English, v7 terminology, BL1/BL2 balance, and PeopleCert-style four-option structure. Topic Drill mode was the initial delivery. Everything else was built on top of a working engine.
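The prompt constraints listed above can be sketched in code. The wording below is illustrative, not the project's actual system prompt, and `buildGenerationPrompt` is a hypothetical helper:

```javascript
// Illustrative system prompt encoding the generation rules described in
// the text; the exact wording used in the project is not reproduced here.
const SYSTEM_PROMPT = [
  "You write PRINCE2 Foundation practice questions.",
  "Use British English and PRINCE2 v7 terminology only.",
  "Each question has exactly four options with one unambiguously correct answer.",
  "Balance BL1 (recall) and BL2 (understanding) difficulty levels.",
  "Never mirror keywords from the question stem in the correct option.",
].join("\n");

// Hypothetical helper: attach a topic so Topic Drill can steer generation.
function buildGenerationPrompt(topic) {
  return `${SYSTEM_PROMPT}\nTopic: ${topic}`;
}
```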

03

Iterate on Identified Gaps

Each iteration was triggered by a specific failure: wrong answers not being tracked led to the Wrong Bank and Priority Review; score optimism led to the 12% calibration deflation; blind spots in untested topics led to the 48-LO Syllabus Coverage Map.

04

Debug & Harden

Fixed faulty multi-select validation logic — including a Quality Assurance question with a contradictory answer key and an overconstrained multi-select that made correct answers impossible. Hardening was not cosmetic; it protected the integrity of the exam simulation.

05

Deploy with Governance Controls

Migrated from Claude-specific API dependencies to a Replit-compatible server-side proxy, protecting the API key from client exposure. Added a password screen for portfolio access. Deployed to production on a flat Replit hosting fee with zero ongoing API cost leakage.

Topic Drill question with explanation panel
Exam Simulation results — 54/60 raw, 48 calibrated
Exam Simulation & Mastery Engine

A mock exam that
refuses to flatter you

The Full Exam Simulation runs 60 questions in 60 minutes under real exam conditions — no mid-session answer reveals, no hints. At the end it reports a raw score, a calibrated exam prediction (raw score deflated by 12% to account for the gap between practice and real conditions), and flags any question that took over 90 seconds as a pacing risk.
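The calibration and pacing rules amount to a few lines of arithmetic. A minimal sketch, assuming the 12% deflation and 90-second threshold described above; the function names are illustrative:

```javascript
const DEFLATION = 0.12;    // gap between practice and real exam conditions
const PACING_LIMIT = 90;   // seconds before a question is flagged as a pacing risk

// Calibrated exam prediction: raw score deflated by 12%.
function calibratedPrediction(rawScore) {
  return Math.round(rawScore * (1 - DEFLATION));
}

// Return the 0-based indices of questions that breached the pacing limit.
function pacingRisks(secondsPerQuestion) {
  return secondsPerQuestion
    .map((s, i) => (s > PACING_LIMIT ? i : -1))
    .filter((i) => i !== -1);
}
```

On these rules, a 54/60 raw run predicts 48, matching the screenshot caption.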

The mastery engine weighs both accuracy and volume — 25 correct from 30 attempts is stronger evidence of mastery than 5 from 5, and the model reflects that. Topic mastery scores feed a readiness prediction weighted by the actual PeopleCert mark scheme: People and Processes at 15% each, Principles at 8%, down to Key Concepts at the lowest weight.
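The text does not name the exact formula, but a confidence-adjusted score such as a Wilson lower bound produces the behaviour described — 25/30 outranks 5/5 — so the following is a sketch under that assumption, with readiness weighted by the mark scheme:

```javascript
// Confidence-adjusted mastery via a Wilson lower bound: one standard way
// to make 25/30 stronger evidence than 5/5. The project's exact model is
// not specified, so treat this as an assumption.
function masteryScore(correct, attempts, z = 1.96) {
  if (attempts === 0) return 0;
  const p = correct / attempts;
  const z2 = z * z;
  const centre = p + z2 / (2 * attempts);
  const spread =
    z * Math.sqrt((p * (1 - p)) / attempts + z2 / (4 * attempts * attempts));
  return (centre - spread) / (1 + z2 / attempts);
}

// Readiness weighted by the PeopleCert mark scheme. Only the weights named
// in the text (e.g. People at 0.15) are grounded; others would be added
// per topic.
function readiness(masteryByTopic, weightByTopic) {
  let total = 0;
  for (const [topic, weight] of Object.entries(weightByTopic)) {
    total += (masteryByTopic[topic] ?? 0) * weight;
  }
  return total;
}
```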

Syllabus Coverage Map
Syllabus Coverage & Spaced Repetition

No blind spot
goes undetected

The Syllabus Coverage Map tracks every answered question against all 48 official PeopleCert v7 Foundation learning outcomes using keyword matching against concept tags. For each of the 11 topics it shows exactly which LOs have been encountered and which remain untested — one direct Drill button per gap to close it.
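The keyword-matching mechanism can be sketched as a set intersection. The LO ids and keywords below are illustrative placeholders, not the official PeopleCert wording:

```javascript
// Hypothetical keyword map: each learning outcome is matched by a few
// concept keywords. The real map would cover all 48 LOs.
const LO_KEYWORDS = {
  "1.1": ["project", "definition"],
  "2.3": ["business case", "justification"],
};

// An LO counts as covered when any of its keywords appears in an answered
// question's concept tags.
function coveredLOs(conceptTags, loKeywords = LO_KEYWORDS) {
  const tags = conceptTags.map((t) => t.toLowerCase());
  return Object.entries(loKeywords)
    .filter(([, kws]) => kws.some((kw) => tags.some((t) => t.includes(kw))))
    .map(([lo]) => lo);
}

// The remaining LOs are the untested gaps, each paired with a Drill button.
function untestedLOs(conceptTags, loKeywords = LO_KEYWORDS) {
  const covered = new Set(coveredLOs(conceptTags, loKeywords));
  return Object.keys(loKeywords).filter((lo) => !covered.has(lo));
}
```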

Wrong answers are stored in a capped Wrong Bank and recycled through Priority Review — surfaced in order of exam mark weight, then rewritten by the AI on the second attempt so the concept cannot be answered from memory. The system is deliberately unforgiving.
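The Wrong Bank's two behaviours — a hard cap and mark-weight ordering — can be sketched as follows; the cap of 50 comes from the governance section, while the entry shape and eviction policy (drop oldest) are assumptions:

```javascript
// Capped store for incorrectly answered questions. Oldest-first eviction
// is an assumption; the text only specifies the cap.
class WrongBank {
  constructor(cap = 50) {
    this.cap = cap;
    this.entries = [];
  }

  add(question) {
    this.entries.push(question);
    if (this.entries.length > this.cap) this.entries.shift(); // drop oldest
  }

  // Priority Review: surface wrong answers in order of exam mark weight,
  // heaviest topics first.
  priorityReview(weightByTopic) {
    return [...this.entries].sort(
      (a, b) => (weightByTopic[b.topic] ?? 0) - (weightByTopic[a.topic] ?? 0)
    );
  }
}
```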

PRINCE2 Master password screen
Governance & API Cost Control

Built for a portfolio,
governed like a product

Sharing an AI-powered app as a portfolio asset without access controls is an API cost exposure — every unauthorised visit burns tokens. A password screen was added as a deployment gate before the app was made publicly linkable. Access control was a risk decision, not an afterthought.

The server-side proxy was built to replace direct client-side API calls, removing the API key from the browser entirely. The Wrong Bank was capped at 50 entries. Token budgets were set conservatively across all question generation calls. Cost governance was treated as a project constraint.
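The proxy's core job is to build the upstream request on the server so the key never reaches the browser. A minimal sketch, assuming the standard Anthropic Messages API endpoint and headers; the helper name and the 1024-token budget are illustrative:

```javascript
const ANTHROPIC_URL = "https://api.anthropic.com/v1/messages";
const MAX_TOKENS = 1024; // illustrative conservative per-call token budget

// Build the upstream request server-side: the API key is attached here and
// never appears in client code, and the token budget is clamped.
function buildUpstreamRequest(apiKey, userPayload) {
  return {
    url: ANTHROPIC_URL,
    method: "POST",
    headers: {
      "x-api-key": apiKey,               // stays server-side only
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: userPayload.model,
      max_tokens: Math.min(userPayload.max_tokens ?? MAX_TOKENS, MAX_TOKENS),
      messages: userPayload.messages,
    }),
  };
}
```

A route handler would forward this request and relay the response, so the browser only ever talks to the app's own origin.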

03 — Outcome

Built the tool.
Passed the exam.

The objective was never to build an app. It was to close the gap between knowing PRINCE2 and being able to apply it under exam conditions — a specific, measurable outcome with a clear deadline.

The calibrated score prediction tracked within 1–2 marks of real ILX exam simulator results throughout preparation. The weighted readiness model consistently directed effort to the topics that mattered most. On exam day, that precision translated directly into a pass.

Dimension: Before → After
Prep tool: Generic (ILX simulator, static banks) → Purpose-built for PeopleCert v7
LO coverage: Unknown, no tracking → All 48 LOs mapped and monitored
Readiness: Raw score only, unweighted → Weighted by PeopleCert mark scheme
Wrong answers: Returned to pool unchanged → Recycled, reworded, prioritised
Confusion pairs: None → 30+ pairs in question generation
API key exposure: Uncontrolled, client-side → Server-side proxy, never exposed
Exam outcome: Uncertain → PRINCE2 Foundation — Passed
04 — What This Demonstrates

Self-Initiated Delivery

No brief, no team, no instruction. I identified the gap in available tools, defined what the right tool would look like, and built and deployed it within a week. That instinct — to identify a problem and own its resolution — is the core PM reflex.

Requirements Before Build

The question generation specification — BL1/BL2 split, confusion pairs, British English, no keyword mirroring — was written before the first API call. Good requirements engineering is not about documentation. It is about understanding what you are building and why before you build it.

Controlled Scope Expansion

The app grew from one mode to seven through deliberate, problem-led iteration — not feature creep. Each addition was triggered by a specific identified failure. Scope discipline under self-direction is harder than scope discipline under a sponsor. This project required both.

Technical Ownership

I debugged validation logic, rewrote API dependencies for Replit compatibility, and built a backend proxy from scratch. Technical fluency is not the same as being a developer — it is the ability to understand, diagnose, and resolve the technical constraints a project faces.

Governance as Practice

The password screen and server-side proxy were governance decisions — risks identified, assessed, and mitigated before deployment. Treating a personal project with the same governance rigour as a commercial one is not over-engineering. It is professional habit.

One Measurable Outcome

Every feature in the app — the calibrated deflation, the weighted readiness model, the Coverage Map — existed to drive a single defined outcome. That outcome was achieved. This is what benefits realisation looks like in practice: not a projected figure, but a result.

The Bigger Picture

This project wasn't about building an app.
It was about identifying a gap, defining the solution, managing the build through structured iteration, governing the deployment, and realising a single measurable outcome — in one week, alone.

That sequence — problem identification, requirements definition, scope control, iterative delivery, risk management, governance, benefits realisation — is the PM lifecycle. Most candidates study it for an exam. This project ran it in practice, from day one to a passing grade. The certification confirms the framework. This confirms the application.