From idea to live site in one evening
I set out to build a different kind of election guide.
Not another one that asks "should taxes go up or down?", but something that actually helps you understand what you believe. Start with values, move through trade-offs, then into concrete policy.
But there's a problem with building something like that.
If I write the questions, pick the dimensions, and define the scoring, I'm shaping the outcome. Not necessarily on purpose, but it happens anyway. In wording, in what I choose to include, and in how things are weighted.
So instead of trying to be "careful", I took a different approach.
I used AI not just to build the app, but to design the system itself, with explicit rules for fairness, traceability, and a clean separation between data, interpretation, and scoring.
ChatGPT helped define the system.
Codex built it.
The setup
I split the work deliberately.
ChatGPT handled thinking, framing, and methodology.
Codex handled execution, structure, and implementation.
ChatGPT was used to:
- define the product clearly
- structure the layers (values → trade-offs → policy → match)
- identify where bias could creep in
Then Codex was given strict constraints:
- work repo-first
- do methodology before implementation
- build something inspectable
Those constraints shaped everything that followed.
Codex starts with the project, not the code
It didn't begin with controllers or UI.
It began by constructing the project itself.
Repo structure
src/
  Web/
  Application/
  Domain/
  Infrastructure/
tests/
This maps directly to the problem space:
- Domain contains questions, values, and party positions
- Application contains scoring and consistency logic
- Infrastructure handles data
- Web is the Razor Pages UI
This is not generic scaffolding. It reflects the model.
The repo is the interface
This is where Codex behaves like what it is.
It doesn't generate files in isolation. It works inside the repository and structures changes through git.
Typical flow:
- initial solution setup
- methodology documents
- domain model
- scoring and consistency logic
- web layer
- data and seeding
- deployment
The changes are grouped logically and applied incrementally.
The repo becomes the interface. That is where Codex operates.
Data is modeled explicitly
Nothing is hardcoded, and nothing is implied.
The core structure:
PartyPosition {
  issueId
  stance (0 to 1)
  sourceType (vote, program, statement)
  confidence (0 to 1)
}
Everything in the system ties back to:
- a normalized position
- a source
- a confidence level
This keeps data, interpretation, and scoring separate.
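The record above can be sketched outside the app to make the constraints concrete. This is an illustrative Python sketch, not the actual C# domain model; the range checks are my assumption about what "normalized" implies:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartyPosition:
    """Sketch of the PartyPosition record: every value is explicit and bounded."""
    issue_id: str
    stance: float        # 0 to 1, normalized position on the issue
    source_type: str     # "vote", "program", or "statement"
    confidence: float    # 0 to 1, how much to trust this source

    def __post_init__(self):
        # Reject anything outside the declared ranges instead of silently clamping.
        if not 0.0 <= self.stance <= 1.0:
            raise ValueError("stance must be in [0, 1]")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0, 1]")
        if self.source_type not in {"vote", "program", "statement"}:
            raise ValueError("unknown sourceType")

p = PartyPosition("public_spending", 0.3, "vote", 0.9)
```

Failing loudly on out-of-range data is the point: an assumption that cannot be represented cannot sneak into scoring.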
Where the data comes from
The system is built around real Danish sources.
Voting data
From Folketinget:
https://oda.ft.dk
Used for:
- voting records
- cases
- documents
Mapped like this:
{
  "issueId": "public_spending",
  "stance": 0.3,
  "sourceType": "vote",
  "confidence": 0.9
}
This represents actual behavior.
Party programs
From official party sites:
- socialdemokratiet.dk
- liberalalliance.dk
- venstre.dk
- sf.dk
Mapped like this:
{
  "issueId": "taxation_level",
  "stance": 0.2,
  "sourceType": "program",
  "confidence": 0.7
}
This represents declared positions.
Public statements
Used where formal data is not available:
- speeches
- interviews
- press
Mapped with lower confidence:
{
  "issueId": "immigration_restriction",
  "stance": 0.8,
  "sourceType": "statement",
  "confidence": 0.5
}
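When one issue has positions from several source types, the confidence values let them be combined without pretending they are equally reliable. Here is one plausible way to do it, sketched in Python as a confidence-weighted average; the actual C# scoring logic may combine sources differently:

```python
def combined_stance(positions):
    """Confidence-weighted average of stances for one issue.

    positions: list of dicts shaped like the JSON examples above.
    Returns None if no source carries any confidence.
    """
    total_weight = sum(p["confidence"] for p in positions)
    if total_weight == 0:
        return None
    return sum(p["stance"] * p["confidence"] for p in positions) / total_weight

positions = [
    {"issueId": "taxation_level", "stance": 0.2, "sourceType": "program", "confidence": 0.7},
    {"issueId": "taxation_level", "stance": 0.4, "sourceType": "statement", "confidence": 0.5},
]
result = combined_stance(positions)
```

A vote at confidence 0.9 pulls the combined stance harder than a statement at 0.5, which matches the intuition that actual behavior outranks talk.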
Questions are generated within constraints
The questions are generated, but not arbitrarily.
They are anchored in defined dimensions:
- state vs individual
- freedom vs regulation
- equality vs growth
- openness vs cohesion
Example:
{
  "id": "value_state_responsibility",
  "layer": "value",
  "dimension": "state_vs_individual"
}
Trade-off example:
{
  "id": "tradeoff_immigration_economy",
  "layer": "tradeoff",
  "dimension": "openness_vs_cohesion"
}
The structure comes first. Wording follows.
Data is seeded, not embedded
/Data/Seed/
questions.json
parties.json
partyPositions.json
EF Core and SQLite are present, but they are not the core of the system.
- JSON is the source of truth
- data is version-controlled
- assumptions are explicit
The database exists, but the system does not depend on it.
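Treating JSON as the source of truth also means the seed files can be validated at load time, before anything touches the database. A sketch of such a loader — the directory name follows the tree above, but the validation rules here are my assumption:

```python
import json
from pathlib import Path

def load_seed(seed_dir="Data/Seed"):
    """Load the three seed files and fail loudly on out-of-range values."""
    seed = {
        name: json.loads(Path(seed_dir, f"{name}.json").read_text())
        for name in ("questions", "parties", "partyPositions")
    }
    # Explicit assumptions: stances and confidences are normalized to [0, 1].
    for pos in seed["partyPositions"]:
        assert 0.0 <= pos["stance"] <= 1.0, pos
        assert 0.0 <= pos["confidence"] <= 1.0, pos
    return seed
```

Because the files are version-controlled, a bad position fails review and CI, not production.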
Consistency is modeled, not enforced
Instead of detecting "errors", the system models tension.
IF freedom > 0.7 AND regulation_support > 0.7
→ flag tension
The result is surfaced as:
"your answers point in two directions here"
Not a correction. A signal.
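The rule above is small enough to sketch directly. In Python, with the 0.7 threshold taken from the pseudocode (the function name and answer keys are illustrative):

```python
TENSION_THRESHOLD = 0.7  # from the rule above

def detect_tension(answers):
    """Flag answers that pull in two directions; never rewrite them."""
    flags = []
    if (answers.get("freedom", 0) > TENSION_THRESHOLD
            and answers.get("regulation_support", 0) > TENSION_THRESHOLD):
        flags.append("your answers point in two directions here")
    return flags

flags = detect_tension({"freedom": 0.9, "regulation_support": 0.8})
```

The function only ever appends signals; it has no code path that changes an answer, which is the whole design.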
Scoring is transparent
score =
value_match * 0.4 +
tradeoff_match * 0.3 +
policy_match * 0.3
Combined with:
- domain-level breakdown
- value vs policy alignment
- confidence weighting
The output is not just a number. It is an explanation.
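The formula is simple enough to verify by hand, which is the point of making it transparent. A Python sketch using the weights above (returning the per-layer breakdown alongside the total is my addition, mirroring the "domain-level breakdown" idea):

```python
WEIGHTS = {"value": 0.4, "tradeoff": 0.3, "policy": 0.3}  # from the formula above

def overall_score(value_match, tradeoff_match, policy_match):
    """Weighted sum of layer matches, each in [0, 1], plus its breakdown."""
    parts = {
        "value": value_match * WEIGHTS["value"],
        "tradeoff": tradeoff_match * WEIGHTS["tradeoff"],
        "policy": policy_match * WEIGHTS["policy"],
    }
    return sum(parts.values()), parts

score, breakdown = overall_score(0.8, 0.6, 0.5)
```

Because the weights sum to 1, the score stays in [0, 1], and the breakdown shows exactly which layer drove it.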
The app
Stack:
- ASP.NET Core 8
- Razor Pages
- EF Core
- SQLite
No separate frontend. No unnecessary complexity.
Built for deployment, not trends.
Deployment
Handled end to end.
- publish
- FTP upload
- configuration
- verification
Result: a live, working site.
What made this work
Not AI alone.
Constraints:
- repo-first workflow
- methodology before implementation
- explicit data model
- structured git usage
- real deployment target
Without those, this does not hold together.
The takeaway
This is not AI building something on its own.
It is AI operating within a defined system.
ChatGPT defined the structure.
Codex executed within it.
The important shift is not that AI can write code.
It is that it can operate across methodology, data, architecture, and deployment when the constraints are clear.
Final line
The system does not hide its assumptions. It models them.