The PeopleCert PRINCE2 Foundation exam is not a memory test. It is a 60-question, 60-minute applied reasoning paper written to exploit the specific gaps between studying PRINCE2 and actually understanding it. Generic flashcard apps cannot replicate that.
The existing tools — ILX simulator, static question banks — either recycled the same questions until they were memorised by pattern rather than understood, or drilled low-value recall without targeting the confusion pairs that cost candidates 10 to 15 marks in the real exam.
Identified the specific failure mode of existing prep tools — pattern recognition without understanding — and defined the requirement gap before building anything. A PM identifies what is missing before proposing a solution.
Mapped the PeopleCert mark scheme, identified the 30 known confusion pairs, and reverse-engineered the BL1/BL2 difficulty split to define the question generation requirements before writing a prompt or a line of code.
Started with a working drill tool and expanded to seven modes through structured iteration — each addition triggered by a specific identified gap, not by feature creep. Every change was a controlled scope decision.
API cost exposure was a live project risk. Mitigated through a server-side proxy, a capped Wrong Bank, and conservative token budgets. Risk was identified, assessed, and controlled — not discovered after the fact.
A password-protected entry screen and a server-side API key proxy were built before the app was shared as a portfolio asset. Governance was not an afterthought — it was a deployment gate.
Debugged multi-select validation logic, resolved Claude-specific API dependencies for Replit deployment, and built a full server-side proxy to replace client-side API calls — hands-on technical ownership from build through to production.
Every AI-generated question passed a self-check against PeopleCert's question-writing rules — no keyword mirroring, no obvious distractors, unambiguous correct answers. Quality criteria were defined before testing began, not during it.
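One of those rules, the ban on keyword mirroring, lends itself to an automated check: flag any answer option that shares too many content words with the question stem. The sketch below is illustrative only; the function names, stop-word list, and 0.5 threshold are assumptions, not the app's actual implementation.

```javascript
// Flag answer options that "mirror" the question stem -- sharing so
// many content words that a candidate could guess the correct option
// without understanding the concept. Threshold and names are illustrative.
const STOP_WORDS = new Set(["the", "a", "an", "of", "to", "in", "is", "which", "what", "for"]);

function contentWords(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z\s]/g, " ")
    .split(/\s+/)
    .filter((w) => w.length > 2 && !STOP_WORDS.has(w));
}

function mirrorsStem(stem, option, threshold = 0.5) {
  const stemWords = new Set(contentWords(stem));
  const optWords = contentWords(option);
  if (optWords.length === 0) return false;
  const shared = optWords.filter((w) => stemWords.has(w)).length;
  return shared / optWords.length >= threshold;
}
```

A self-check of this shape can reject a generated question before it ever reaches the candidate, forcing a regeneration instead.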
The tool was not the objective. Passing the exam was. The calibrated score prediction, the weighted readiness model, and the syllabus coverage map all existed to drive one measurable outcome — which was achieved.
Before any code: mapped the PeopleCert v7 mark scheme by topic weight, catalogued 30 known confusion pairs from ILX simulator results, and defined question quality criteria mirroring PeopleCert's own standards. The specification preceded the build.
Built the AI question generation pipeline first — system prompt engineering to enforce British English, v7 terminology, BL1/BL2 balance, and PeopleCert-style four-option structure. Topic Drill mode was the initial delivery. Everything else was built on top of a working engine.
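A system prompt of that shape might be assembled like the sketch below. The wording and structure are assumptions for illustration; the app's actual prompt is not reproduced here.

```javascript
// Illustrative sketch of encoding the generation constraints from the
// specification into a single system prompt. The rule wording below is
// an assumption, not the app's actual prompt text.
function buildSystemPrompt({ topic, level }) {
  return [
    "You write PRINCE2 7 Foundation practice questions.",
    "Rules:",
    "- Use British English spelling throughout.",
    "- Use PRINCE2 v7 terminology only.",
    `- Write at Bloom's level ${level} (BL1 recall or BL2 application).`,
    "- Four options (A-D) with exactly one unambiguously correct answer.",
    "- No keyword mirroring between the stem and the correct option.",
    `- Topic: ${topic}.`,
  ].join("\n");
}
```

Keeping the constraints in one declarative list makes them easy to audit against the written specification.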
Each iteration was triggered by a specific failure: wrong answers not being tracked led to the Wrong Bank and Priority Review; score optimism led to the 12% calibration deflation; blind spots in untested topics led to the 48-LO Syllabus Coverage Map.
Fixed faulty multi-select validation logic — including a Quality Assurance question with a contradictory answer key and an overconstrained multi-select that made correct answers impossible. Hardening was not cosmetic; it protected the integrity of the exam simulation.
Migrated from Claude-specific API dependencies to a Replit-compatible server-side proxy, protecting the API key from client exposure. Added a password screen for portfolio access. Deployed to production on a flat Replit hosting fee with zero ongoing API cost leakage.
The Full Exam Simulation runs 60 questions in 60 minutes under real exam conditions — no mid-session answer reveals, no hints. At the end it reports a raw score, a calibrated exam prediction (raw score deflated by 12% to account for the gap between practice and real conditions), and flags any question that took over 90 seconds as a pacing risk.
The mastery engine weights both accuracy and volume — 25 correct from 30 attempts is stronger evidence of mastery than 5 from 5, and the model reflects that. Topic mastery scores feed a readiness prediction weighted by the actual PeopleCert mark scheme: People and Processes at 15% each, Principles at 8%, down to Key Concepts.
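One standard way to make 25/30 outrank 5/5 is the Wilson score lower bound, which penalises small samples. This is an illustrative choice, not the app's documented formula, and the weights shown are only the two named in the text.

```javascript
// Wilson score lower bound: a conservative estimate of true accuracy
// that rises with sample size. 25/30 scores higher than 5/5 because
// five attempts are weak evidence, however perfect.
// Illustrative model choice -- the app's actual formula is not documented here.
function masteryScore(correct, attempts, z = 1.96) {
  if (attempts === 0) return 0;
  const p = correct / attempts;
  const z2 = z * z;
  const centre = p + z2 / (2 * attempts);
  const margin = z * Math.sqrt((p * (1 - p)) / attempts + z2 / (4 * attempts * attempts));
  return (centre - margin) / (1 + z2 / attempts);
}

// Readiness: topic mastery weighted by the mark scheme.
function readiness(masteryByTopic, weights) {
  let total = 0;
  for (const [topic, weight] of Object.entries(weights)) {
    total += weight * (masteryByTopic[topic] ?? 0);
  }
  return total;
}
```

Under this model an untested topic contributes zero, so low readiness automatically points at the heaviest-weighted gaps.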
The Syllabus Coverage Map tracks every answered question against all 48 official PeopleCert v7 Foundation learning outcomes using keyword matching against concept tags. For each of the 11 topics it shows exactly which LOs have been encountered and which remain untested — one direct Drill button per gap to close it.
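The keyword-matching step can be sketched as below. Data shapes and tag text are assumptions; the real LO descriptions and concept tags are PeopleCert's and the app's respectively.

```javascript
// Mark each learning outcome as covered if any answered question
// carries a concept tag that appears in the LO's description text.
// Field names and matching rule are illustrative assumptions.
function coverageMap(learningOutcomes, answeredQuestions) {
  return learningOutcomes.map((lo) => {
    const loText = lo.text.toLowerCase();
    const covered = answeredQuestions.some((q) =>
      q.conceptTags.some((tag) => loText.includes(tag.toLowerCase()))
    );
    return { id: lo.id, covered };
  });
}
```

Every entry with `covered: false` maps directly to one of the "Drill this gap" buttons described above.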
Wrong answers are stored in a capped Wrong Bank and recycled through Priority Review — surfaced in order of exam mark weight, then rewritten by the AI on the second attempt so the question cannot be answered from memory of its wording. The system is deliberately unforgiving.
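The cap and the ordering are simple to sketch. The cap of 50 is stated later in the write-up; the eviction policy (drop oldest) and field names are assumptions.

```javascript
// Capped Wrong Bank: keep at most 50 entries, evicting the oldest
// when full (eviction policy is an illustrative assumption).
const WRONG_BANK_CAP = 50;

function addToWrongBank(bank, entry) {
  const next = [...bank, entry];
  return next.length > WRONG_BANK_CAP ? next.slice(next.length - WRONG_BANK_CAP) : next;
}

// Priority Review: surface wrong answers in descending exam mark weight,
// so the mistakes that cost the most marks are retried first.
function priorityReview(bank) {
  return [...bank].sort((a, b) => b.markWeight - a.markWeight);
}
```

Both functions return new arrays rather than mutating the bank, which keeps the review queue easy to re-derive after every session.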
Sharing an AI-powered app as a portfolio asset without access controls is an API cost exposure — every unauthorised visit burns tokens. A password screen was added as a deployment gate before the app was made publicly linkable. Access control was a risk decision, not an afterthought.
The server-side proxy was built to replace direct client-side API calls, removing the API key from the browser entirely. The Wrong Bank was capped at 50 entries. Token budgets were set conservatively across all question generation calls. Cost governance was treated as a project constraint.
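The proxy pattern reduces to one idea: the browser posts to the app's own backend, and only the backend attaches the API key from a server-side environment variable. The sketch below shows that request-building step in isolation; the model name and 1024-token budget are illustrative assumptions.

```javascript
// Build the upstream request on the server. The API key comes from
// server-side configuration and is never included in any response to
// the browser. Token budget and endpoint details are illustrative.
function buildUpstreamRequest(clientBody, env) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": env.ANTHROPIC_API_KEY, // never shipped to the client
    },
    body: JSON.stringify({
      model: clientBody.model,
      // Conservative token budget: clamp whatever the client asks for.
      max_tokens: Math.min(clientBody.max_tokens ?? 1024, 1024),
      messages: clientBody.messages,
    }),
  };
}
```

Clamping `max_tokens` on the server means even a tampered client request cannot exceed the cost ceiling.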
The objective was never to build an app. It was to close the gap between knowing PRINCE2 and being able to apply it under exam conditions — a specific, measurable outcome with a clear deadline.
The calibrated score prediction tracked within 1–2 marks of real ILX exam simulator results throughout preparation. The weighted readiness model consistently directed effort to the topics that mattered most. On exam day, that precision translated directly into a pass.
| Dimension | Outcome |
|---|---|
| Prep tool | Purpose-built for PeopleCert v7 |
| LO coverage | All 48 LOs mapped and monitored |
| Readiness | Weighted by PeopleCert mark scheme |
| Wrong answers | Recycled, reworded, prioritised |
| Confusion pairs | 30+ pairs in question generation |
| API key exposure | Server-side proxy — never exposed |
| Exam outcome | PRINCE2 Foundation — Passed |
No brief, no team, no instruction. I identified the gap in available tools, defined what the right tool would look like, and built and deployed it within a week. That instinct — to identify a problem and own its resolution — is the core PM reflex.
The question generation specification — BL1/BL2 split, confusion pairs, British English, no keyword mirroring — was written before the first API call. Good requirements engineering is not about documentation. It is about understanding what you are building and why before you build it.
The app grew from one mode to seven through deliberate, problem-led iteration — not feature creep. Each addition was triggered by a specific identified failure. Scope discipline under self-direction is harder than scope discipline under a sponsor, and this project had only the former.
I debugged validation logic, rewrote API dependencies for Replit compatibility, and built a backend proxy from scratch. Technical fluency is not the same as being a developer — it is the ability to understand, diagnose, and resolve the technical constraints a project faces.
The password screen and server-side proxy were governance decisions — risks identified, assessed, and mitigated before deployment. Treating a personal project with the same governance rigour as a commercial one is not over-engineering. It is professional habit.
Every feature in the app — the calibrated deflation, the weighted readiness model, the Coverage Map — existed to drive a single defined outcome. That outcome was achieved. This is what benefits realisation looks like in practice: not a projected figure, but a result.
This project wasn't about building an app.
It was about identifying a gap, defining the solution, managing the build through structured iteration, governing the deployment, and realising a single measurable outcome — in one week, alone.
That sequence — problem identification, requirements definition, scope control, iterative delivery, risk management, governance, benefits realisation — is the PM lifecycle. Most candidates study it for an exam. This project ran it in practice, from day one to a passing grade. The certification confirms the framework. This confirms the application.