Research
Time to Automaticity
Carvalho, forthcoming (publication pending).
Time to Automaticity is a system-level construct introduced to address a structural problem in elite team-sport governance: managerial evaluation often runs ahead of collective learning. Decisions to fire managers, reset tactics, or replace personnel are made before the system has had time to stabilise. The framework proposes that evaluation validity is time-dependent and constrained by the pace of collective learning.
The core idea
Expertise is built through repetition under consequence. A surgeon does not become expert by reading manuals; they become expert by making hundreds of real decisions, with real stakes, until the patterns become instinctive. At that point the right choice arrives before the deliberation does. That threshold is automaticity.
The same dynamic operates at the team level. A newly installed game model takes time to consolidate into shared mental models, low-communication coordination, and stable execution. Until then, results are noisy. Judging the system on early outcomes confuses learning noise with structural failure.
Three Evaluation Windows
The framework sequences judgment rather than applying it uniformly:
- Process Integrity Window (early). Judge whether the system is being delivered coherently. Is the game model articulated, stable, consistent in training and meetings? Do not judge results.
- Progress Direction Window (mid). Judge the trend. Is variability in execution decreasing? Are coordination errors becoming less frequent? Outcomes can begin to inform judgment, but only when read as trends, not as isolated results.
- Results Accountability Window (late). Outcome-based judgment is now defensible. The system has accumulated sufficient stable exposure that results reflect the system rather than the learning process.
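The sequencing rule above can be sketched as a simple lookup from accumulated stable exposure to the window whose judgments are defensible. This is an illustrative sketch only: the function name, the unit (weeks), and the threshold values are assumptions, not figures from the paper.

```python
def evaluation_window(stable_weeks: int,
                      mid_threshold: int = 12,
                      late_threshold: int = 30) -> str:
    """Map accumulated stable exposure (weeks since the last reset)
    to the evaluation window whose judgments are defensible.

    Thresholds are hypothetical placeholders, not values from the paper.
    """
    if stable_weeks < mid_threshold:
        return "process_integrity"    # judge delivery coherence, not results
    if stable_weeks < late_threshold:
        return "progress_direction"   # judge trends in execution variability
    return "results_accountability"   # outcome-based judgment is defensible
```

The point of the sketch is that the window is a function of *stable* exposure, not calendar time; any reset condition (below) sends the input back to zero.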
Reset conditions
Five conditions disrupt collective learning and reset the automaticity clock:
- Tactical churn. Frequent changes to the game model.
- Instructional inconsistency. Conflicting messages from staff.
- Non-representative training. Training tasks that don't reflect match conditions.
- Principle abandonment. Core principles dropped under pressure.
- Personnel instability. Turnover in staff, key players, or leadership.
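The reset behaviour can be sketched as a weekly clock update: any of the five conditions sends accumulated stable exposure back to zero. The identifier names and the weekly tick are illustrative assumptions.

```python
# Hypothetical identifiers for the five reset conditions.
RESET_CONDITIONS = {
    "tactical_churn",
    "instructional_inconsistency",
    "non_representative_training",
    "principle_abandonment",
    "personnel_instability",
}

def advance_clock(stable_weeks: int, events: set[str]) -> int:
    """Advance the automaticity clock by one week, resetting it to zero
    if any disrupting condition occurred during the week."""
    if events & RESET_CONDITIONS:
        return 0
    return stable_weeks + 1
```

Note the asymmetry this encodes: progress accrues one week at a time, but a single reset condition erases all of it, which is why the framework treats stability itself as the scarce resource.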
Symmetric inference errors
The framework warns equally against two failure modes:
- Premature termination. A viable system is abandoned before automaticity is reached. The owner who fires the manager in season one.
- Prolonged protection. An incoherent system is shielded from scrutiny under the label of "process," even after sufficient exposure has accumulated. The board that defends the broken system into season five.
Both errors are costly. The evaluation windows are sequencing rules to prevent both.
How SwipeManager uses it
Each card outcome carries TtA impact metadata. Major player decisions visibly shift the team's Readiness and Reset Risk scores, both displayed on the pause-menu badge. The DASH Index page in-game includes an interactive TtA Calculator that lets players score a real or hypothetical situation against the framework. Both failure modes are represented in card content (see the prolonged-protection card pack added in the most recent patch).
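For community card creators, the TtA impact metadata on a card outcome might be modelled as a small record like the following. This is a hypothetical sketch of the shape only; the field names, value ranges, and the `TtAImpact` type are assumptions, not the game's actual schema (see the modding guide for the real tagging format).

```python
from dataclasses import dataclass

@dataclass
class TtAImpact:
    """Hypothetical shape of a card outcome's TtA metadata."""
    readiness_delta: float        # shift applied to the team's Readiness score
    reset_risk_delta: float       # shift applied to the Reset Risk score
    triggers_reset: bool = False  # True if the outcome resets the clock

# Example: an outcome that erodes readiness and raises reset risk,
# without itself being a full reset condition.
card_outcome = TtAImpact(readiness_delta=-0.2,
                         reset_risk_delta=0.15,
                         triggers_reset=False)
```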
Citation
Full citation will be added when the paper is published. Working draft available on request: research@cinderpoint.com.
Related
- The D.A.S.H. Protocol (sister framework, also built into the game)
- Modding guide (TtA tagging for community card creators)