A serious city engine should not only give conclusions.
It should also show its work.
That is what the CitySim.150Y.CF Audit Pack Specification is for.
By the time CitySim reaches a public claim, a lot has already happened:
- variables were selected
- proxies were chosen
- datasets were adapted
- archetype packs were assigned
- transition rules were run
- backtests may have passed or failed
- calibration may have changed the engine
If all that stays hidden, then even a clever simulation will look suspicious. People will ask the right questions:
- what data did you use?
- what did you assume?
- what changed between versions?
- what exactly produced this verdict?
- can anyone rerun this?
The Audit Pack exists so those questions can be answered cleanly.
One-sentence answer
The CitySim.150Y.CF Audit Pack Specification is the canonical publication standard that defines which files, metadata, assumptions, datasets, outputs, and validation records must be released with every serious CitySim run so the model can be inspected, challenged, rerun, and trusted.
That is the page that turns CitySim from a black-box narrative into an examinable engine.
Why this page has to exist
A city engine can fail trust in two very different ways.
Failure type 1
The engine is weak.
That is a real model problem.
Failure type 2
The engine may be decent, but no one can tell what happened inside it.
That is a transparency problem.
The Audit Pack Specification is mainly about the second problem.
Even a good model will be distrusted if:
- the code is missing
- the data is hidden
- the assumptions are unclear
- the version is unknown
- the backtest status is omitted
- only the final chart is shown
- the output is impossible to reproduce
That is why CitySim needs an audit standard, not just a result page.
What the Audit Pack does
The Audit Pack does eight jobs.
1. It shows the exact run inputs
Every serious city run should disclose:
- what city was tested
- what years were covered
- what version of the engine was used
- what archetype pack was assigned
- what datasets were used
- what proxies were active
- what assumptions were declared
Without that, the run cannot really be checked.
2. It separates raw data from model assumptions
This is one of the most important trust rules in the whole system.
The audit pack should make it obvious which parts are:
- observed
- derived
- estimated
- assumed
- scenario-driven
That is how the engine avoids the accusation of mixing facts and imagination without a clear boundary between them.
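A minimal sketch of how that boundary could be tagged in practice. The field names (`provenance`, `source`) and the example values are assumptions for illustration, not the real CitySim schema:

```python
# Hypothetical sketch: each variable value carries a provenance tag so the
# audit pack never mixes observed facts with model assumptions.
from dataclasses import dataclass

PROVENANCE_LEVELS = ("observed", "derived", "estimated", "assumed", "scenario")

@dataclass
class AuditValue:
    variable_id: str
    value: float
    provenance: str          # one of PROVENANCE_LEVELS
    source: str = ""         # dataset or assumptions file that produced it

    def __post_init__(self):
        if self.provenance not in PROVENANCE_LEVELS:
            raise ValueError(f"unknown provenance: {self.provenance}")

# Illustrative entries: one observed from a dataset, one taken from the assumptions file.
population = AuditValue("population_total", 13_960_000, "observed", "census_2020")
cohesion = AuditValue("social_cohesion_index", 0.62, "assumed", "assumptions.yaml")
```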
3. It preserves reproducibility
A future reviewer, researcher, or AI system should be able to rerun the same model and see the same logic.
That does not require perfect software packaging on day one.
But it does require declared files and a stable structure.
4. It records version history
CitySim will evolve.
That is normal.
But if the engine changes over time, every run needs to say:
- which version it used
- whether the run was before or after recalibration
- what changed from the prior version
Otherwise older outputs and newer outputs become impossible to compare honestly.
5. It shows the full path, not just the final result
A city run should not only publish the conclusion.
It should also publish:
- the trajectory through time
- variable movement
- warnings
- threshold crossings
- score changes
- backtest deltas where relevant
That matters because many misleading models only show the final dramatic claim.
6. It reveals calibration status
A forward run should not pretend to be forecast-grade if the backtest status is weak.
The audit pack should clearly state:
- whether the model was backtested
- what class it received
- whether recalibration has occurred
- whether the run is exploratory, scenario-grade, or forecast-grade
That is how forward overclaiming is controlled.
7. It standardizes proof across cities
Tokyo, Singapore, London, or any future city should all publish comparable audit packs.
That way CitySim stays one engine with one proof grammar.
8. It lowers the cost of criticism and improvement
A good audit pack does not only protect the model.
It also makes the model easier to improve.
Critics can now say:
- this proxy is weak
- this variable is misweighted
- this transition seems too aggressive
- this dataset is too national and not city-local
- this backtest class is too generous
That is exactly the kind of criticism a real engine should be able to handle.
What counts as a serious CitySim run
Not every exploratory sketch needs a full audit pack.
But any run that makes a meaningful public claim should have one.
That includes:
- city comparison runs
- 150-year future scenarios
- backtests
- recalibrated reruns
- policy corridor tests
- claims that one city design outperforms another
- claims that one ministry, organ, or repair model is stronger
If the output is public and makes a claim, the audit layer should come with it.
The minimum Audit Pack structure
Every serious CitySim run should publish a pack with at least these components.
1. Run Manifest
This is the front sheet.
It should state:
- run title
- city
- geography definition
- start and end years
- engine version
- archetype pack
- modifier pack
- runtime mode
- scenario type
- run date
- author / operator
- declared purpose
This is the identity card of the run.
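A minimal sketch of what that identity card could look like on disk, assuming the hypothetical filename run_manifest.json and illustrative field values only:

```python
# Hypothetical sketch: write the run manifest as machine-readable JSON.
import json

run_manifest = {
    "run_title": "Tokyo 150-year baseline scenario",   # illustrative values only
    "city": "Tokyo",
    "geography_definition": "Tokyo Metropolis (23 wards + Tama area)",
    "start_year": 2025,
    "end_year": 2175,
    "engine_version": "CitySim.150Y.CF v0.x",
    "archetype_pack": "dense_transit_megacity",
    "modifier_pack": "default",
    "runtime_mode": "deterministic",
    "scenario_type": "baseline",
    "run_date": "2025-01-01",
    "operator": "citysim-team",
    "declared_purpose": "exploratory comparison run",
}

with open("run_manifest.json", "w") as f:
    json.dump(run_manifest, f, indent=2)
```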
2. Dataset Register
This is the raw evidence list.
It should contain:
- source names
- source levels
- years covered
- download or reference locations
- variable mappings
- update dates
- licensing / usage notes if relevant
This is where readers see what was actually used.
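One way to express that register, sketched as a flat CSV write with a single illustrative row; the source name, URL, and column set here are assumptions, not prescribed values:

```python
# Hypothetical sketch: dataset register as a flat CSV, one row per source.
import csv

rows = [
    {"source_name": "national_census", "source_level": "national",
     "years_covered": "1950-2020", "reference_location": "https://example.org/census",
     "variable_mapping": "population_total, age_structure",
     "update_date": "2024-06-01",
     "notes": "decennial; interpolated between census years"},
]

with open("dataset_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```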
3. Proxy Register
This shows which simulation variables were linked to which real-world indicators.
It should include:
- variable ID
- proxy path
- proxy quality
- conversion type
- overclaim boundary
This is how readers know whether a variable was strongly observed or weakly inferred.
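A sketch of a single proxy register entry, with hypothetical variable IDs and quality labels:

```python
# Hypothetical sketch: one proxy register entry linking a simulation variable
# to the real-world indicator that stands in for it.
proxy_entry = {
    "variable_id": "housing_stress",                     # simulation variable
    "proxy_path": "rent_to_income_ratio (city level)",   # real-world indicator used
    "proxy_quality": "medium",                           # e.g. strong / medium / weak
    "conversion_type": "linear rescale to 0-1",
    "overclaim_boundary": "do not interpret as household-level hardship",
}
```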
4. Data Adapter Log
This shows how raw data was transformed.
It should include:
- raw unit
- target unit
- time-frequency conversion
- geographic fallback
- interpolation use
- missing-data handling
- series-break notes
This is where the bridge from source to model becomes inspectable.
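A sketch of one adapter-log record, with hypothetical field values; the point is that every transformation from source to model gets written down:

```python
# Hypothetical sketch: one data adapter log record for a single variable.
adapter_record = {
    "variable_id": "gdp_per_capita",
    "raw_unit": "JPY, nominal, annual",
    "target_unit": "index (base year = 1.0)",
    "time_conversion": "annual -> 5-year simulation step (mean)",
    "geography_fallback": "national series used; no city-level series available",
    "interpolation_used": True,
    "missing_data_rule": "carry forward last observation, flag as estimated",
    "series_break_flag": "methodology change in source after 2008",
}
```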
5. Assumptions File
This is extremely important.
It should show:
- model assumptions
- scenario assumptions
- non-observed coefficients
- default thresholds
- lag structure choices
- shock settings
- policy intervention settings
This file should make it obvious where the run stops being direct measurement and becomes model structure.
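A sketch of how the assumptions file might be emitted, assuming PyYAML is available and using illustrative parameter names that are not part of any declared CitySim spec:

```python
# Hypothetical sketch: assumptions file, kept separate from observed data.
import yaml  # assumes PyYAML is installed

assumptions = {
    "model_assumptions": {
        "infrastructure_decay_rate": 0.015,    # per year, not directly observed
        "governance_response_lag_years": 5,
    },
    "scenario_assumptions": {
        "migration_policy": "status quo",
        "climate_shock_frequency": "historical baseline",
    },
    "default_thresholds": {"fiscal_stress_warning": 0.35},
    "shock_settings": {"earthquake_major": "disabled in baseline"},
    "intervention_settings": {"policy_corridor": "none"},
}

with open("assumptions.yaml", "w") as f:
    yaml.safe_dump(assumptions, f, sort_keys=False)
```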
6. Engine / Code File
This is the executable or pseudo-executable logic.
Depending on maturity, this may be:
- Python code
- structured pseudo-code
- YAML / JSON model spec
- formal notebook
- reproducible script pack
The main point is that the transformation and simulation logic are visible.
7. Output File
The final result should be published in machine-readable form.
Usually:
- JSON
- structured YAML
- comparable schema export
This should show:
- final state
- variable values
- route-state output
- pass/fail checks
- declared verdict
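A sketch of that machine-readable output, with hypothetical keys and a made-up verdict string:

```python
# Hypothetical sketch: final run output in machine-readable JSON.
import json

output = {
    "final_state": {"year": 2175, "route_state": "stable-adaptive"},
    "variable_values": {"population_total": 11_200_000, "fiscal_stress": 0.41},
    "pass_fail_checks": {"no_runaway_values": True, "thresholds_logged": True},
    "declared_verdict": "scenario-grade: city remains viable under baseline assumptions",
}

with open("output.json", "w") as f:
    json.dump(output, f, indent=2)
```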
8. Trajectory File
This is the time-path evidence.
Usually:
- CSV
- parquet
- time-series JSON
This should show how variables moved through the run, not just the endpoint.
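A sketch of that time-path export, one row per year and variable, with hypothetical columns and numbers:

```python
# Hypothetical sketch: trajectory file as long-format CSV (year, variable, value).
import csv

trajectory_rows = [
    {"year": 2025, "variable_id": "population_total", "value": 13_960_000,
     "state_flags": "", "threshold_events": ""},
    {"year": 2030, "variable_id": "population_total", "value": 13_700_000,
     "state_flags": "declining", "threshold_events": ""},
]

with open("trajectories.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(trajectory_rows[0].keys()))
    writer.writeheader()
    writer.writerows(trajectory_rows)
```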
9. Backtest Record
If the model claims any real predictive seriousness, the backtest status must be shown.
It should include:
- backtest window
- observed vs simulated comparison
- error table
- model-status class
- forward-use status
If no backtest exists, that absence should be stated openly.
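A minimal sketch of the observed-vs-simulated comparison behind the error table, using mean absolute percentage error as one possible metric; the series values are illustrative, not real Tokyo data:

```python
# Hypothetical sketch: build a simple error table from observed vs simulated series.
def error_table(observed: dict, simulated: dict) -> dict:
    """Mean absolute percentage error per variable over the backtest window."""
    table = {}
    for var, obs_series in observed.items():
        sim_series = simulated[var]
        errors = [abs(s - o) / abs(o) for o, s in zip(obs_series, sim_series) if o != 0]
        table[var] = sum(errors) / len(errors)
    return table

observed = {"population_total": [13.5, 13.7, 13.9]}    # millions, illustrative
simulated = {"population_total": [13.4, 13.9, 14.3]}
print(error_table(observed, simulated))                # ~{'population_total': 0.017}
```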
10. Calibration Record
If the engine was recalibrated, this should be disclosed.
It should include:
- prior version
- change summary
- evidence trigger
- validation result
- recalibration scope
This is what stops silent model drift.
11. Limitations Note
Every serious run should publish its limits.
For example:
- city-level data missing
- national proxies used
- latent variables low confidence
- shock library incomplete
- district variation not yet modeled
- not forecast-grade
This is not weakness.
This is intellectual hygiene.
12. Verdict Summary
This is the short human-facing conclusion.
It should say:
- what the run found
- how strong the claim is
- whether the run is exploratory, scenario-grade, or stronger
- what should happen next
This is the bridge back to readable publication.
The three audit layers
The audit pack should separate content into three layers.
Layer 1. Human-readable summary
For readers who want to understand:
- what was tested
- what happened
- how strong the claim is
This is usually the article layer.
Layer 2. Structured machine-readable pack
For AI systems, analysts, and technical readers.
This includes:
- manifests
- JSON outputs
- CSV trajectories
- parameter tables
- proxy maps
Layer 3. Reproducible runtime layer
For actual reruns.
This includes:
- code
- environment spec if available
- version label
- seed / deterministic settings if relevant
- file dependencies
This layered structure is important because not everyone needs the same depth.
What every audit pack entry should declare
Every audit pack should say clearly:
- what is included
- what is missing
- what version it belongs to
- what can be rerun
- what still requires interpretation
- what confidence level applies
A weak but honest audit pack is better than a polished but incomplete one.
Audit pack naming rules
CitySim should standardize filenames and structure as early as possible.
For example:
- run_manifest.json
- dataset_register.csv
- proxy_register.csv
- data_adapter_log.csv
- assumptions.yaml
- engine_spec.py
- output.json
- trajectories.csv
- backtest_record.json
- calibration_record.json
- limitations.md
- verdict_summary.md
The exact names can be refined later, but the principle is important:
consistent pack structure reduces confusion and increases trust.
Minimum proof levels
Not every run needs the same audit depth.
CitySim should define proof levels.
Proof Level 1 — exploratory
- summary
- basic assumptions
- limited output
- no strong public claim
Proof Level 2 — scenario-grade
- summary
- manifest
- assumptions
- code or logic file
- output JSON
- trajectory CSV
- limitations note
This should be the minimum for meaningful public scenario work.
Proof Level 3 — calibrated scenario
- all of the above
- plus backtest record
- plus calibration record
- plus stronger dataset/proxy disclosure
Proof Level 4 — high-trust run
- full audit pack
- reproducible runtime
- declared version lineage
- explicit validation layer
- strong source traceability
This should be rare and earned.
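One way to encode those levels, sketched as a mapping from proof level to required components, each level building on the one below it; the exact component sets here are assumptions drawn from the lists above:

```python
# Hypothetical sketch: minimum required components per proof level.
L1 = {"verdict_summary", "assumptions_file"}
L2 = L1 | {"run_manifest", "engine_or_code_file", "output_file",
           "trajectory_file", "limitations_note"}
L3 = L2 | {"backtest_record", "calibration_record",
           "dataset_register", "proxy_register"}
L4 = L3 | {"data_adapter_log", "reproducible_runtime", "version_lineage"}

PROOF_LEVELS = {
    "exploratory": L1,
    "scenario_grade": L2,
    "calibrated_scenario": L3,
    "high_trust_run": L4,
}
```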
Audit failure conditions
A run should be considered audit-weak if:
- no run manifest exists
- datasets are unnamed
- proxies are hidden
- assumptions are not separated from observations
- trajectories are missing
- only the final verdict is shown
- version is unspecified
- backtest status is omitted while making strong claims
- calibration changes were made silently
- limitations are missing
Any one of these weakens trust.
Several together mean the run should not be treated as serious proof.
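A minimal validity-check sketch combining the failure conditions above. The filenames follow the hypothetical naming list earlier; a real implementation would also inspect file contents and version labels, not just file presence:

```python
# Hypothetical sketch: flag an audit-weak run by checking which pack files exist.
from pathlib import Path

REQUIRED_FILES = [
    "run_manifest.json", "dataset_register.csv", "proxy_register.csv",
    "assumptions.yaml", "output.json", "trajectories.csv", "limitations.md",
]

def audit_check(pack_dir: str, makes_strong_claims: bool = False) -> dict:
    pack = Path(pack_dir)
    missing = [name for name in REQUIRED_FILES if not (pack / name).exists()]
    if makes_strong_claims and not (pack / "backtest_record.json").exists():
        missing.append("backtest_record.json")  # strong claims need a backtest status
    return {"audit_pack_validity": not missing, "missing_components": missing}

# Example: audit_check("runs/tokyo_150y_baseline", makes_strong_claims=True)
```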
Audit success conditions
A run is audit-strong when a reviewer can answer these questions without guessing:
- What city was modeled?
- What time window was used?
- What version of CitySim ran?
- What datasets were used?
- What proxies were used?
- What assumptions were declared?
- What transition logic was active?
- What did the model output?
- What path did the model take through time?
- What is the backtest status?
- What is the calibration status?
- What are the limitations?
If those answers are available, the engine becomes much harder to accuse of hand-waving.
Why this matters after Tokyo
Tokyo already exposed the exact problem this page is meant to solve.
The first run was interesting.
The audit push made it better.
The backtest then showed the limits.
That is actually good.
Why? Because it proved that CitySim becomes more trustworthy when:
- the code is visible
- the outputs are visible
- the trajectories are visible
- the model’s misses are visible
That is the right direction.
The Audit Pack Specification takes that one step further and makes the proof standard official.
So instead of asking each time:
- should we release the code?
- should we show the CSV?
- should we mention the backtest?
- should we explain the assumptions?
the engine will already know the answer.
Yes.
If the run is serious, the proof pack should come with it.
Final definition
The CitySim.150Y.CF Audit Pack Specification is the canonical proof standard that defines the minimum files, metadata, assumptions, outputs, validation records, and limitations that must accompany a CitySim run so the model can be examined, reproduced, criticized, and improved openly.
Without it, CitySim can still speak.
With it, CitySim can be checked.
Almost-Code
```text
CITYSIM_150Y_CF_AUDIT_PACK_SPECIFICATION_V1
PURPOSE:
Standardize what must be published with every serious CitySim run so the model is inspectable,
reproducible,
and criticizable.
CORE_LAW:
No serious public CitySim verdict should appear without a declared audit pack.
MINIMUM_AUDIT_PACK_COMPONENTS:
- run_manifest
- dataset_register
- proxy_register
- data_adapter_log
- assumptions_file
- engine_or_code_file
- output_file
- trajectory_file
- backtest_record
- calibration_record
- limitations_note
- verdict_summary
RUN_MANIFEST_FIELDS:
- run_title
- city
- geography_definition
- start_year
- end_year
- engine_version
- archetype_pack
- modifier_pack
- runtime_mode
- scenario_type
- run_date
- operator
- declared_purpose
DATASET_REGISTER_FIELDS:
- source_name
- source_level
- years_covered
- variable_mapping
- reference_location
- update_date
- notes
PROXY_REGISTER_FIELDS:
- variable_id
- proxy_path
- proxy_quality
- conversion_type
- overclaim_boundary
DATA_ADAPTER_LOG_FIELDS:
- raw_unit
- target_unit
- time_conversion
- geography_fallback
- interpolation_used
- missing_data_rule
- series_break_flag
ASSUMPTIONS_FILE_FIELDS:
- model_assumptions
- scenario_assumptions
- coefficients
- thresholds
- lag_structure
- shock_settings
- intervention_settings
OUTPUT_FILE_FIELDS:
- final_state
- variable_values
- route_state
- pass_fail_checks
- verdict
TRAJECTORY_FILE_FIELDS:
- year_or_time_slice
- variable_id
- variable_value
- state_flags
- threshold_events
- route_state
BACKTEST_RECORD_FIELDS:
- backtest_window
- observed_vs_simulated
- error_table
- model_status_class
- forward_use_status
CALIBRATION_RECORD_FIELDS:
- prior_version
- change_summary
- evidence_trigger
- validation_result
- recalibration_scope
LIMITATIONS_NOTE_FIELDS:
- missing_data_limits
- proxy_weakness
- city_boundary_limits
- latent_variable_limits
- shock_library_limits
- calibration_limits
- forecast_grade_boundary
PROOF_LEVELS:
L1 = exploratory
L2 = scenario_grade
L3 = calibrated_scenario
L4 = high_trust_run
FAIL_CONDITIONS:
- no manifest
- unnamed datasets
- hidden proxies
- assumptions mixed with observations
- no trajectory file
- no version label
- no backtest status while making strong claims
- silent calibration drift
- no limitations note
PASS_CONDITION:
A CitySim run is audit-valid only if
inputs,
assumptions,
logic,
outputs,
trajectory,
validation status,
and limitations are all declared.
OUTPUT:
audit_pack_validity = TRUE or FALSE
proof_level = L1 / L2 / L3 / L4
```