When Integration Fails: Modeling Dyslexia in Artificial Neural Networks
What if cognitive impairment isn’t about “broken parts” … but about broken coordination?
That’s the idea behind a new NeuroAI framework we’ve been
developing: modeling dyslexia not as random failure, not as noisy computation,
and not as a damaged representation — but as a disruption in information
integration between systems that normally work together.
And it turns out that shift in perspective changes
everything.
Reading Is Not a Single Skill
Reading feels effortless for fluent readers. But underneath
that smooth surface, the brain is orchestrating a remarkable coordination:
- Visual letter recognition
- Phonological sound mapping
- Sequential decoding
- Rapid temporal processing
- Semantic access
Dyslexia, affecting roughly 5–10% of people, is typically
characterized by difficulty with phonological processing and decoding —
especially with unfamiliar or pseudowords.
Neuroimaging consistently shows something important:
It’s not that one “reading area” is missing.
It’s that connectivity between regions — especially between
orthographic and phonological systems — is atypical.
So the real question becomes:
What happens if we reduce integration in an artificial
system that reads?
From Performance to Failure
Artificial neural networks are usually optimized for
success. They map graphemes to phonemes with near-perfect accuracy when trained
properly.
But biological systems don’t just succeed — they fail in
structured ways.
So instead of asking:
“How do we make a model read better?”
We asked:
“How do we make a model fail like dyslexia — in a principled way?”
Not through random noise.
Not by deleting half the model.
Not by sabotaging learning.
But by systematically reducing the strength of integration
between modules.
Integration as a Control Parameter
We built a modular reading architecture:
- Orthographic encoder
- Phonological encoder
- Cross-attention integration layer
- Sequential decoder
Then we introduced a simple but powerful variable:
Integration strength (α).
When α = 1 → full coordination.
When α decreases → reduced cross-representational binding.
When α approaches a critical threshold → decoding collapses.
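One way to picture how a single parameter can gate coordination: treat α as an interpolation between the cross-attended (integrated) representation and the unbound input stream. The sketch below is a minimal NumPy illustration under that assumption; the function name, dimensions, and interpolation scheme are ours, not the actual architecture.

```python
import numpy as np

def cross_attention(queries, keys, values, alpha=1.0):
    """Cross-attention between two modules, gated by integration strength alpha.

    alpha = 1.0 -> full cross-representational binding;
    alpha = 0.0 -> the query stream ignores the other module entirely.
    (Illustrative sketch only; not the actual model's integration layer.)
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # similarity between streams
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    attended = weights @ values                     # integrated representation
    return alpha * attended + (1.0 - alpha) * queries  # interpolate by alpha

# Orthographic features attending to phonological features (toy dimensions)
rng = np.random.default_rng(0)
orth = rng.standard_normal((4, 8))   # 4 letter positions, 8-dim features
phon = rng.standard_normal((6, 8))   # 6 phoneme slots, 8-dim features

full = cross_attention(orth, phon, phon, alpha=1.0)   # full coordination
weak = cross_attention(orth, phon, phon, alpha=0.2)   # weakened binding
```

At α = 0 the orthographic stream passes through unchanged, which is the "no binding" limit the text describes.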
And here’s the interesting part:
Performance doesn’t degrade linearly.
It drops sharply near a threshold — almost like a phase
transition.
Pseudoword decoding fails first.
Substitution errors rise.
Letter-position confusion increases.
Confidence destabilizes.
This pattern mirrors empirical dyslexia profiles.
Not because we “damaged” the model —
but because we weakened integration.
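The sharp, phase-transition-like drop can be caricatured with a logistic curve: flat for α well above a critical value, collapsing quickly near it. The functional form and the parameter values below are purely illustrative assumptions; the actual curve comes from simulation, not from this formula.

```python
import numpy as np

def toy_accuracy(alpha, alpha_c=0.45, steepness=25.0):
    """Toy logistic curve: decoding accuracy stays high for alpha >> alpha_c
    and collapses sharply near the critical value alpha_c.

    Purely illustrative; alpha_c and steepness are assumed, not measured.
    """
    return 1.0 / (1.0 + np.exp(-steepness * (alpha - alpha_c)))

alphas = np.linspace(0.0, 1.0, 11)
acc = toy_accuracy(alphas)  # near-flat at the ends, steep near alpha_c
```

The point of the caricature: most of the accuracy loss is concentrated in a narrow band around the threshold, unlike a linear degradation.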
Measuring Integration (Without Mysticism)
We borrowed from Integrated Information Theory (IIT), but in
a computationally practical way.
Instead of calculating full Φ (which is intractable in large
networks), we measured:
- Mutual information between modules
- Transfer entropy
- Effective connectivity strength
We defined an Integration Index that quantifies how much
coordinated information contributes to output accuracy.
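As a sketch of one ingredient of such an index, here is a histogram-based mutual-information estimate between two module signals, normalized against a shuffled baseline so that roughly zero means "no coordinated information." The function names and the shuffle-baseline normalization are assumptions for illustration, not the actual index definition.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate (in nats) between two 1-D signals.

    A crude stand-in for proper estimators; adequate for a sketch.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def integration_index(module_a, module_b, baseline):
    """Hypothetical Integration Index: MI between modules minus a shuffled
    baseline, floored at zero."""
    return max(0.0, mutual_information(module_a, module_b) - baseline)

rng = np.random.default_rng(1)
shared = rng.standard_normal(5000)            # coordinated component
a = shared + 0.3 * rng.standard_normal(5000)  # "orthographic" module activity
b = shared + 0.3 * rng.standard_normal(5000)  # "phonological" module activity
baseline = mutual_information(a, rng.permutation(b))  # shuffling destroys coupling
index = integration_index(a, b, baseline)     # high when modules coordinate
```

Shuffling one signal destroys the temporal coupling while preserving both marginals, so whatever MI survives the shuffle is estimation bias rather than coordination.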
The striking finding:
Integration metrics begin to drop before performance visibly
collapses.
In other words:
Loss of coordination precedes observable impairment.
That has implications far beyond reading.
Remediation as Reintegration
We didn’t stop at degradation.
We simulated structured literacy intervention:
- Curriculum-based grapheme–phoneme alignment
- Multi-task phonological training
- Gradual reintegration of cross-attention weights
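"Gradual reintegration" could be implemented as a simple ramp on α over the course of retraining. The schedule below is a hypothetical illustration with assumed start/end values and a linear ramp; the actual intervention schedule is not specified in the post.

```python
def reintegration_schedule(step, total_steps, alpha_start=0.3, alpha_end=1.0):
    """Linearly ramp integration strength alpha back up during retraining.

    Hypothetical schedule: starts at the degraded alpha_start and reaches
    alpha_end by total_steps. Values and linearity are assumptions.
    """
    frac = min(1.0, step / max(1, total_steps))
    return alpha_start + frac * (alpha_end - alpha_start)

# Example: alpha grows from 0.3 toward 1.0 across a 1000-step intervention
schedule = [reintegration_schedule(s, 1000) for s in range(0, 1001, 250)]
```

A gradual ramp rather than a jump back to α = 1 mirrors the intuition that coordination has to be relearned, not simply switched on.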
What we observed:
Integration increased before accuracy fully recovered.
Reintegration predicted recovery trajectory.
This reframes remediation not as “fixing errors” —
but as restoring coordination.
Why This Matters for NeuroAI
This work suggests a broader principle:
Cognitive impairment may be modeled as disruption of
integration across representations.
That’s not limited to dyslexia.
It could extend to:
- Aphasia
- Developmental language disorder
- Certain psychiatric conditions
- Even broader questions of cognitive coherence
In artificial systems, we often obsess over scaling.
But maybe understanding cognition requires studying how things break —
systematically.
NeuroAI shouldn’t just model intelligence.
It should model vulnerability.
A Bigger Theoretical Move
This framework reframes cognitive capacity as a dynamical
property:
Integration is a control parameter governing cognitive
coherence.
Too little integration → fragmentation.
Too much rigid integration → inflexibility.
Healthy cognition may sit in a critical regime between independence and
collapse.
That’s not just a dyslexia story.
That’s a systems neuroscience story.
Ethical Framing
This work does not model suffering.
It does not simulate lived experience.
It does not replace diagnosis.
It models mechanisms.
And mechanism is what allows:
- Hypothesis testing
- Intervention simulation
- Educational tool development
- AI-assisted screening support
The goal is not pathology replication.
It’s understanding coordination failure.
Where This Could Go
Next directions include:
- Developmental learning trajectories
- Cross-linguistic orthographic depth modeling
- Spiking neural implementations
- Phase-transition formalization
- Personalized AI reading tutors
But the deeper question remains:
If cognition depends on integration…
What other mental phenomena collapse when coordination
weakens?
And can AI help us see those thresholds more clearly than
biology alone?
This work wasn’t just about dyslexia.
It was about moving from modeling success to modeling
failure —
and discovering that failure, too, has structure.
And structure is where science lives.