What the project is trying to do
Make training data feel inspectable
A lot of ML writing jumps straight into methods, benchmarks, or theory. That is useful, but it can leave people without a clean mental model for the underlying question: what changes when the training data changes?
This site uses the phrase *data counterfactuals* and the grid metaphor as teaching tools. They are meant to connect several research areas without pretending the literature already uses exactly this framing.
- Explorer: a toy environment for comparing training subsets, evaluation slices, and simple edits such as poisoning or noise injection.
- Memo: the longer argument for why this framing is useful and where it helps unify adjacent literatures.
- Reference pages: paper collections and glossary entries for when you want to move from intuition to sources.
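The experiment shape the Explorer is built around — train, edit the data, retrain, diff the metrics — can be sketched in a few lines. This is an illustrative toy, not the site's actual code: the nearest-centroid classifier, the synthetic 1-D clusters, and the one-sided label-flip "poisoning" edit are all assumptions made for this example.

```python
import random

def fit_centroids(xs, ys):
    # Per-class mean of 1-D points: the entire "model" in this toy.
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def accuracy(centroids, xs, ys):
    # Predict the class whose centroid is nearest to each point.
    predict = lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))
    return sum(predict(x) == y for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
# Two synthetic 1-D clusters: class 0 near 0.0, class 1 near 3.0.
train_x = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(3, 1) for _ in range(100)]
train_y = [0] * 100 + [1] * 100
test_x = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(3, 1) for _ in range(50)]
test_y = [0] * 50 + [1] * 50

# Factual run: train on the data as given.
base = accuracy(fit_centroids(train_x, train_y), test_x, test_y)

# Counterfactual run: same pipeline, edited data -- flip about half of
# class 0's training labels, a crude one-sided poisoning edit.
noisy_y = [1 if y == 0 and random.random() < 0.5 else y for y in train_y]
noisy = accuracy(fit_centroids(train_x, noisy_y), test_x, test_y)

print(f"accuracy on original data: {base:.2f}")
print(f"accuracy on poisoned copy: {noisy:.2f}")
```

The particular edit is interchangeable: dropping a training subset or swapping the evaluation slice slots into the same run–edit–rerun–diff loop, and the diff between the two accuracy numbers is the data counterfactual.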