A Cognitive Model of Reading

From letters on a page to words, meaning, and memory

Pretzel Dialectic
Age of Awareness

--

Image by Gerd Altmann from Pixabay

When someone reads your writing, the most complex object in the universe is performing one of its most challenging tasks. If everything works, it’s effortless, creating very little cognitive load. Your readers don’t consciously move their eyes, decipher letters, or sound out words. Instead they inhabit a world you created — hearing voices, seeing visions, feeling sorrow and joy and awe.

But if the reading is difficult, their consciousness gets dragged back to the work of decoding the message. The pure flow from page to mind is broken. They’re no longer learning, or solving a problem, or hearing a story. They’re deciding that the subject is boring, or difficult, or that one of you is an idiot. They start skimming, or they close the book.

Knowing how that flow works — how the brain turns letters on a page into meaning, experience, and memory — can transform our writing. It gives powerful insight into most of the readability tips you’ve ever heard, things like use short sentences, use headings, chunk your lists, or tell stories.

In this article I’ll discuss:

  • How you turn these letters into a meaningful input stream in your head, without thinking about each letter or word
  • How you turn that stream into understanding and a lasting memory

Based on this cognitive science, I provide nine suggestions for increasing readability in the article A Deeper Readability.

How We Read

The reading brain turns letters into words, sentences, meaning, and ultimately a mental structure that cognitive scientists call a schema. We usually don’t remember the actual sentences, just the gist and some bits and pieces — the schema.

It’s not a single, linear procedure. In computing we’d call it massively parallel, with many independent processes solving parts of the problem and converging on a result.

For instance, the brain has two methods for converting letters into words, and it tries both approaches at the same time.¹ It can:

  1. convert letters to sounds and mentally pronounce the word
  2. visually recognize the word based on long practice

To put it in computing terms, one is like an algorithm for decoding a word, and the other is like a neural network that recognizes the word from repeated training. Or in education terms, one method is phonics and the other is the whole-word approach.² The visual approach is fast, and succeeds instantly for familiar words like the. But the slower, phonetic approach saves us on longer or less-familiar words like sesquipedalian.
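
To stretch the computing analogy, here’s a toy Python sketch of the two routes: a fast whole-word lookup with a slower letter-by-letter fallback. The tiny lexicon and letter-to-sound table are invented for illustration, and the real brain runs both routes in parallel rather than trying one after the other.

```python
# Toy model of dual-route word recognition. The lexicon and the
# letter-to-sound table below are tiny, invented samples.
VISUAL_LEXICON = {"the", "and", "was", "of"}      # well-practiced whole words
LETTER_SOUNDS = {"c": "k", "q": "kw", "x": "ks"}  # simplified phonics rules

def recognize(word):
    """Try the fast visual route first; fall back to sounding the word out."""
    if word in VISUAL_LEXICON:
        return ("visual", word)
    # Phonetic route: map each letter to a sound and assemble them.
    sounds = [LETTER_SOUNDS.get(ch, ch) for ch in word.lower()]
    return ("phonetic", "-".join(sounds))

print(recognize("the"))             # instant whole-word hit
print(recognize("sesquipedalian"))  # decoded letter by letter
```

A closer model would race both routes and take whichever finishes first, which is roughly what the research describes.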

Either way, reading builds on our natural ability to process spoken language, which developed and was in use long before we learned to read. Researchers say that reading is an auditory process.³ Whether we read phonetically or visually, we use the sound of the word in our head, plus a little context, to get to its meaning. If you still sometimes move your lips while reading, this may be the reason. People who were deaf from birth, who map written words to sign language instead of spoken language, sometimes sign words to themselves while learning to read.⁴

I’m moving fast, but notice how much work it takes just to get this far. The eye makes small hops along the page that each capture a few letters. As children we had to figure out each letter individually, but now our well-trained visual cortex recognizes them instantly, in any color or size, even in an unfamiliar font or abstract art. The heatmap of this activity then flows into the two paths I mentioned earlier: 1) language and auditory centers that process the word phonetically, and 2) a visual cortex area that tries to recognize the entire word by sight, roughly like we do with letters. The two paths re-converge in the speech-intention area, which forms the mental sound of the word that lets us call up its possible meanings.

Choices

In many cases, a word’s meaning depends on other words around it, and on the larger context of the topic. I’ve tried to show that in this diagram, based on a similar diagram in the book The Reading Mind.² It shows a cognitive (high-level) view, not a neurological (low-level) view, of someone working their way up from the bottom — from letters to words, sentences, and overall meaning.

The eye keeps feeding in more information while other words are still being processed. Researchers say a word may be somewhere in this circuit, at the same time as other words, for 0.3 to 3 seconds.³ The connections are not one-way; each part shown here can influence the others, priming them to notice certain things or prefer one interpretation over another. For instance, when researchers tested a special font in which the lower-case c and e were identical, readers did not immediately notice the oddity. They automatically saw c or e based on which letter made sense in the surrounding word, such as brake or cow. Given an ambiguous word (e.g., a word that might be either car or ear), they automatically saw the word that made sense in the context.¹ Even high-level context, such as the topic of a paragraph or story, can change your brain’s reading of a letter on the page.

The brain’s interactive use of multiple simultaneous approaches — symbols, sounds, visual recognition, grammar, and context — is why we can easily decode messages like “PL34SE 4WRD 1F U CN R34D 7H1S”.
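
A pure letter-substitution decoder, sketched below, handles the lookalike digits, but notice what it can’t do: it never recovers “forward” from “4WRD”. That last step needs the sound and context machinery described above. The mapping table is just the obvious digit-to-letter lookalikes.

```python
# Decode lookalike digits back to letters by simple substitution.
LOOKALIKES = {"3": "E", "4": "A", "1": "I", "7": "T", "0": "O", "5": "S"}

def decode(message):
    return "".join(LOOKALIKES.get(ch, ch) for ch in message)

print(decode("PL34SE 4WRD 1F U CN R34D 7H1S"))
# PLEASE AWRD IF U CN READ THIS
```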

Because English has deep Germanic, French, Latin, and Greek roots, it has multiple spellings for the same sound (rung, young, tongue), and multiple sounds for the same spelling (food, good, blood). And many popular English words have dozens of meanings. So while you’re deciding that the written word is bat, rock, or read, you might have to decide whether the word is an animal or a piece of sporting equipment (bat), a noun, verb, or adjective (rock), and even whether it’s present or past tense (read). And although reading is primarily auditory, in some cases you can’t know the word’s sound until you know its meaning (read past or present). Your brain works like a heatmap, influenced by many possibilities at once. As soon as it picks a winner, it moves forward with that candidate’s word representation (i.e., its sound, meaning, spelling, and rules) and actively suppresses all the other candidates. That last part is why you can be confident of a word even as you misread it out loud in a room full of people.
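
In code, that winner-take-all step might look like the sketch below: each candidate reading starts with a visual score, context adds weight, and the highest-scoring candidate wins while the rest are dropped. All the scores here are made up for illustration.

```python
# Resolve an ambiguous word by combining visual evidence with context.
def resolve(candidates, context_boost):
    scored = {word: score + context_boost.get(word, 0.0)
              for word, score in candidates.items()}
    winner = max(scored, key=scored.get)
    return winner  # the losing candidates are suppressed from here on

# "car" and "ear" look equally plausible on the page...
candidates = {"car": 0.5, "ear": 0.5}
# ...but a sentence about driving primes "car".
print(resolve(candidates, {"car": 0.3}))  # car
```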

Clearly it’s a lot of work for the brain to coordinate and synchronize many concurrent processes across visual, auditory, language, and other areas of the brain. It’s amazing that our brain can do all this and simultaneously see the fight, smell the sweat, hear the shouts, and empathize with the combatants. To make it even more impressive, researchers say that reading is not natural in the way that it’s natural, for instance, to see the world in color. Children learn to speak, sing, walk, and do many other things without formal lessons or equipment. But reading requires formal instruction, hundreds of hours of it. Even so, most of us eventually learn to read quickly, automatically, and even unconsciously, at several hundred words per minute — with sufficient cognitive reserve to think about what we’re reading, or to read aloud musically, with vivid expressions and voices, to a delighted child.

How We Learn

What happens next in our brains is similar to that child’s experience of receiving a story or other information by listening to the reader’s voice. The stream of words we’re reading may describe facts or ideas, tell a story, or describe a visual image. We store the internal audio, and the thoughts and images it evokes, in short-term working memory, which is more-or-less our mental work area.⁵ There we convert the memory contents into schema (remember, that means an organized summary plus bits and pieces), and move the schema into long-term memory.

Although this model is gradually evolving new details, some version of it has been the dominant theory of learning since the 1970s.⁶ The diagram below is pretty standard, showing several types of temporary and long-term memory, plus the executive process that does the work. Have a look at the diagram, and I’ll explain below.

The working memory has specialized short-term memory areas including:

  • Phonological loop, or “audio trace”, roughly long enough to hold the sound of a phone number or a phrase. In computing this is called a “circular buffer”: as fresh data arrives, the oldest data fades off the back end. Experts say it holds about 3 seconds of audio. When you repeat a phone number over and over while searching for a pencil, you’re refreshing this buffer.
  • Episodic buffer that connects info from other buffers and tracks the story of what’s going on, for instance in real life or in a novel or news report. (It’s tempting to think this buffer provides part of the “context” from our earlier reading model, but I haven’t seen this confirmed.)
  • Visuospatial sketchpad, more-or-less the “mind’s eye”. Here we briefly store a real scene or our own construct of, say, three red roses on a porcelain plate.
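
The phonological loop’s circular-buffer behavior is easy to sketch. In the toy version below, a capacity of 7 syllables stands in for “about 3 seconds of audio”; the real limit is time-based, and the numbers are only illustrative.

```python
from collections import deque

# A fixed-size buffer: appending past capacity drops the oldest item.
phono_loop = deque(maxlen=7)

for syllable in ["five", "five", "five", "one", "two", "one", "two", "nine"]:
    phono_loop.append(syllable)  # the oldest syllable fades off the back

print(list(phono_loop))  # the first "five" is already gone
```

Rehearsing the number, i.e., re-appending its syllables, is what keeps it from fading.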

You can see how this works. You and 4 friends are hunting a woolly mammoth. You scan the scenery and imagine plans and outcomes, listing the best 2 or 3 options. The sounds around you stay in your memory long enough to interpret — a sentence, the crack of distant thunder, a wild animal’s cry. As you move through the prairie scrub, you can easily keep track of your 4 friends, but if there were more, you’d start to think of them in groups.

You may have heard of a famous 1956 psychology paper called The Magical Number Seven, Plus or Minus Two.⁷ In it, Harvard psychology professor George Miller estimated the working-memory limits for several types of information. How many items can you mentally juggle without effort? I bet you can guess Miller’s answer, but I’ll mention that recent estimates have been a little lower. In my own writing, I often prefer a magic number of 5. On each hand I have 5 fingers I can easily keep track of, but if it were 6, things might get tricky. If I see 3 to 5 pennies on a plate, I know the number without counting. But somewhere above 5, my working memory runs out of mental fingers, and I start grouping the pennies to keep track. George Miller called these groups chunks. Thinking in chunks helps us work with larger amounts of information in the same small working-memory buffers.
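
Chunking is simple to express in code. The sketch below regroups ten digits into a handful of three-digit chunks, turning a list that overflows working memory into one that fits.

```python
# Regroup a long sequence into chunks small enough to juggle mentally.
def chunk(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

phone_digits = list("5551234567")                   # 10 separate items
chunks = ["".join(c) for c in chunk(phone_digits, 3)]
print(chunks)  # ['555', '123', '456', '7']: 4 chunks instead of 10 digits
```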

A central executive does the work of compiling the contents of short-term memory into a summarized, structured schema. That’s what gets stored in long-term memory. The schema for a passage of text might boil down to this information: Ralph and Cheryl were siblings. There were 3 reasons they didn’t get along. The biggest was the Mustang, which had originally been his car, not hers. The schema contains this as a selected, condensed structure, not the complete imprint of every word on the page. We remember schema and a few details. We can reconstruct the rest from our other existing schema — my experience of sibling relationships, my mental image of a Mustang, etc. The result of reading, apart from the experience of the stream of words and ideas, is to create and store schema.

Schema is sometimes shown as a graph of nodes and relationships. Your writing can help readers create schema by showing clear structure and relationships in your document and your topic.
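
As a sketch, the Ralph-and-Cheryl schema from above might be stored as a small graph. The node and relationship names here are invented; the point is only that a schema is a few linked facts, not a transcript.

```python
# A schema as nodes plus labeled relationships.
schema = {
    "nodes": ["Ralph", "Cheryl", "Mustang"],
    "edges": [
        ("Ralph", "sibling_of", "Cheryl"),
        ("Mustang", "originally_owned_by", "Ralph"),
        ("Mustang", "biggest_source_of_conflict", "Ralph and Cheryl"),
    ],
}

def recall(schema, node):
    """Everything we remember that touches a node."""
    return [edge for edge in schema["edges"] if node in (edge[0], edge[2])]

print(recall(schema, "Mustang"))  # the two linked facts about the car
```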

Knowledge at Last

Schema is stored in long-term memory (LTM). Like working memory, LTM has some separate specialized areas: semantic (facts, concepts, meanings), episodic (stories, events), and procedural (skills, rituals, etc.). So perhaps schema has those types: facts and concepts, stories, and procedures. And as you probably know, we call it “long-term” but the reader may have to rehearse those schemas a little to make them permanent. We do this when memorizing facts, repeating a story, or practicing a skill.

As with reading, the work of schematizing information and moving it into long-term memory takes mental effort. So education theorists work to reduce cognitive load in training materials.⁵ One technique is to clearly organize information, which pre-schematizes it for the reader; for instance, we can create chunked, logical structures. And we can avoid extraneous distractions like irrelevant information or difficult words, which interrupt the central executive while it’s trying to schematize your input.

When text is written for low cognitive load, readers spend very little effort decoding or schematizing it. Their mind is free to think about its structure, meaning, and implications. The result is a richer, stronger schema that is easier to remember.

Applying Cognitive Theory to Readability

My article Surface Readability covers approaches to readability based on surface analysis of text, especially the simple but effective “short words and sentences” approach. The cognitive model of reading described above shows the main reasons those approaches work: the visual reading path and the small working-memory buffers. (Studies show you can fit more small words in the buffer, which makes understanding easier.)

But the model suggests a lot more ways to improve readability and help readers create elegant, highly-connected schemas. I explore these in the article A Deeper Readability. They include things like clear, visible structure, distinct units of meaning, small groupings, strong connections driven by connective language and story-like approaches, and protection of the reader’s cognitive flow.

Eventually our writing tools could learn to suggest some of these. But today, a writer who understands the cognitive model can invent and apply these techniques on their own.

Resources

Articles in This Series

  • A Cognitive Model of Reading - The science of how people read
  • Surface Readability - The value and limits of simple “short words & sentences” approaches to readability, and some interesting variations
  • A Deeper Readability - Techniques that go beyond surface-based approaches, based on cognitive science and other sources
  • Writing for Challenged Readers - About ESL & dyslexic readers, and what they said helps them most
  • Readability Equals Translatability - How the right approach to readability becomes a scalable approach to fast, consistent translation across multiple languages, how that works in a modular, single-source content management system, and whether language must be “dumbed-down” to achieve readability

Bibliography

Current theory of reading is quickly made clear in The Reading Mind and told in loving neurological detail in Proust and the Squid. The rest — working memory theory, schema theory, and cognitive-load learning theory — is easy to find on the Web. (E.g., on Wikipedia: Working Memory, Schema, and Cognitive Load.)

[1] The Reading Mind: A Cognitive Approach to Understanding How the Mind Reads, Daniel Willingham, 2017

[2] At a Loss for Words: How a flawed idea is teaching millions of kids to be poor readers, Emily Hanford, American Public Media, 2019

[3] Proust and the Squid: The Story and Science of the Reading Brain, Maryanne Wolf, 2008

[4] Can You Read a Language You Can’t Hear?, David Ludden, Psychology Today, 2015

[5] Applying Learning Theory to the Design of Web-based Instruction, in Content and Complexity, 2004

[6] The Episodic Buffer: A New Component of Working Memory?, Alan Baddeley, 2000

[7] The Magical Number Seven, Plus or Minus Two, George Miller, 1956
