The Digital Resolution of the Mind: Discrete Precision in Working Memory

Does the resolution or precision of human memory change with its available capacity? In other words, if you hold fewer items in memory, can you remember each of them with greater precision?

Contradicting intuition, a new paper in yesterday's issue of Nature shows that all items are stored in memory with equal resolution, regardless of the number of items stored. Authors Zhang & Luck first showed that subjects are equally accurate in reporting the color of a memorized item regardless of the number of other items being maintained in memory. Specifically, when subjects were asked to report the color of a square from memory by clicking on a color wheel (see image below), they were just as precise when the memorized array had included 3 squares as when it had included 6.

[Image: ZhangLuckNature2008.jpg]
An example trial from the first experiment, with a set size of 3 squares, one of which is later probed for its color. Subjects respond by clicking the corresponding color on a color wheel.

But there's an alternative explanation...

Alone, this result does not prove their point - maybe subjects were only capable of remembering 3 (or fewer) items, and the resolution of memory can be flexible only for those items below the absolute capacity. In other words, perhaps memory "resources" can be distributed only among a certain number of memory "slots" - and some of those slots might be given more resources than others, leading to greater resolution (they call this the slots+resources model). Or perhaps all slots get equal resources, but items can be stored redundantly in several slots - leading to greater resolution for these redundantly stored items (they call this the slots+averaging model).

These models were both supported by a second experiment showing that the resolution of memory was greater when subjects had to remember only 1 colored square relative to 2 or 3. Interestingly, this increase in precision followed a square-root function, as predicted by both models: the standard deviation of an average of n samples equals the standard deviation of a single sample divided by the square root of n, a fact that allowed Zhang & Luck to predict the variability in color recall as a function of the size of each memorized array.
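To make the square-root prediction concrete, here is a minimal simulation of the averaging idea (my own sketch, not the authors' code; the single-slot noise value is made up): each slot stores a color with independent Gaussian noise, and an item held in k slots is recalled as the average of its k noisy copies, so its recall SD should shrink by a factor of the square root of k.

import numpy as np

rng = np.random.default_rng(0)
sigma_slot = 20.0    # assumed single-slot noise SD, in degrees on the color wheel
n_trials = 100_000

for k in (1, 2, 3):  # number of slots holding redundant copies of one item
    # each copy is corrupted by independent Gaussian noise;
    # recall is the average across the k copies
    noise = rng.normal(0.0, sigma_slot, size=(n_trials, k))
    recall_error = noise.mean(axis=1)
    print(f"{k} slot(s): empirical SD = {recall_error.std():5.2f}, "
          f"predicted = {sigma_slot / np.sqrt(k):5.2f}")

With 3 slots and a single item to remember, all three slots can hold copies of that item, which is why precision peaks at set size 1 and flattens once set size reaches the slot limit.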

The Experimental Logic

However, only the slots+averaging model - the one that assumes each slot has a fixed resolution, and that items can be stored redundantly - predicts a limited range of imprecision in memory. In their words, the imprecision is "never worse than the [imprecision] of a single slot and is never better than the [imprecision] for a single slot divided by the square root of the number of slots." (The slots+resources model predicts that all resources could theoretically be assigned to just one slot, leading to very high or very low imprecision depending on which item was probed).
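In symbols (my notation, not the paper's): if a single slot stores a color with standard deviation \sigma_1 and an item occupies k of the K available slots, then averaging across its copies gives

\sigma_k = \frac{\sigma_1}{\sqrt{k}} \quad (1 \le k \le K)
\qquad\Longrightarrow\qquad
\frac{\sigma_1}{\sqrt{K}} \,\le\, \sigma_k \,\le\, \sigma_1

so imprecision is bounded on both sides - exactly the limited range the slots+averaging model predicts.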

Zhang & Luck therefore conducted another experiment in which a certain square in each to-be-remembered array was "cued" - that cued square was the one whose color would be subsequently tested, with 70% probability. (On 10% of trials the cue was invalid insofar as an uncued square was tested, and on the remaining 20% of trials, the cue was neutral insofar as all squares were cued).

A rational strategy under a slots+resources model would be to allocate most resources to the cued square, leading to the highest precision on validly cued trials, medium precision on neutral trials (when all squares were cued), and the lowest precision when the cue was invalid. Under a slots+averaging model, there should be no difference in memory precision between invalidly cued and neutral trials, since "a given item receives either 0 or 1 slots on both [types of] trials."
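As a concrete illustration of the diverging predictions (a sketch under slot allocations I am assuming for the probed item, not the authors' procedure): suppose K = 3 slots and a 3-item array, and restrict attention to non-guess responses. Under slots+averaging, the probed item holds 1 slot on both neutral and invalid trials, but might hold 2 slots on valid trials:

import numpy as np

rng = np.random.default_rng(1)
sigma_slot = 20.0   # assumed single-slot SD (degrees)
n = 100_000

def recall_sd(k):
    """SD of memory-based recall when the probed item occupies k slots."""
    return rng.normal(0.0, sigma_slot, size=(n, k)).mean(axis=1).std()

# hypothetical slot counts for the probed item (guess trials excluded)
conditions = {"valid (cued item, 2 slots)": 2,
              "neutral (1 slot)": 1,
              "invalid (uncued item, 1 slot)": 1}

for name, k in conditions.items():
    print(f"{name}: predicted recall SD = {recall_sd(k):5.2f}")

A slots+resources model, by contrast, could shift a larger continuous share of resources to the cued item, predicting worse precision on invalid than on neutral trials - the difference the experiment tests.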

The Critical Findings

Only the slots+averaging model accounted for these results, in that:

1) neutral and invalidly-cued trials showed equivalent precision, consistent with the idea that representations are stored discretely and with unvarying precision

2) validly cued trials showed higher precision, within a range consistent with redundant storage as allowed by the slots+averaging model, and

3) subjects remembered more squares on the validly cued than invalidly cued trials, again consistent with the redundant storage of cued colors.

Thus, short-term memory appears to consist of a limited number of "slots" into which representations can be stored, including redundantly. Each representation is stored with equivalent resolution or precision.

The Implications

Fluidity in mental resources. Zhang & Luck's findings contradict the idea that attention, or the "strength" of a memory, can affect its stored precision in a continuous, graded fashion. Attention may have its effects at encoding (in terms of how many "slots" are devoted to a particular to-be-remembered item), and memory strength may fall out of that (stronger memories reflect the commitment of more slots to a particular representation), but neither appears to influence precision in a graded fashion.

Digital or Analogue? Relatedly, these findings contradict a long-held belief about the cognitive system - that it is fundamentally analogue in nature. Instead, the prefrontal cortex (or wherever these items may be stored) may operate according to more digital principles: items are either "updated" in memory or not, storage is discrete, and resources cannot be fluidly deployed among items. The general picture is of a digital system, as previously argued by some computational modelers (see difference #1).

Individual Differences. Zhang & Luck showed that memory resources cannot be flexibly deployed among memorized items in a way that improves their precision. However, some individuals may have greater memory precision than others, and this factor should be related to short-term or working memory capacity. This yields an interesting question: is the well-known predictive power of working memory capacity due to capacity differences per se, or is it due to the precision of those memories which can be maintained? This question has never been directly addressed in individual differences research (my money's on precision).

Pointer Models vs. Direct Storage. An influential theory about short-term memory storage is that the prefrontal cortex may contain "pointers" to richer representations in posterior cortex; short-term memory exists because those pointers can be used to maintain information over time. Zhang & Luck's findings are largely compatible with this view, but do not dissociate it from the "direct storage" alternative in which prefrontal areas are thought to represent the to-be-remembered items themselves.

Caveats

Additional experiments showed that the resolution or precision of these representations does not gradually increase over time, that representations appear to be stored in an "all or nothing" fashion, that the results generalize to shape as well as color, and that color values are stored in memory as continuous rather than discrete values.

However, there are other problems. The first concerns the mixture-model decomposition that Zhang & Luck used to dissociate capacity from resolution: each response on the color wheel was treated as coming either from memory (clustered around the "true" color of the probed item) or from guessing (distributed randomly). Subjects probably guessed on several trials, and the mixture model would tend to use those trials in its estimate of capacity - but it may also assign a few "lucky guesses" that happened to land near the true color to the memory distribution, probably leading to an underestimate of precision. It's not clear how this might change the results, but one possibility is that the critical experiment might have shown differences between neutral and invalidly-cued trials if subjects were guessing less often. This could be determined by running that experiment with lower set sizes.
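For intuition, here is a minimal version of such a decomposition (my own sketch with made-up parameter values; Zhang & Luck modeled responses as a mixture of a uniform guessing distribution and a circular normal distribution centered on the true color): fit, by maximum likelihood, both the probability that a response came from memory (capacity) and the concentration of the memory distribution (precision).

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# simulate response errors (radians, relative to the true color)
p_mem, kappa_true = 0.6, 8.0   # made-up "capacity" and precision values
n = 2000
from_memory = rng.random(n) < p_mem
errors = np.where(from_memory,
                  stats.vonmises.rvs(kappa_true, size=n, random_state=rng),
                  rng.uniform(-np.pi, np.pi, size=n))

def neg_log_lik(params):
    # mixture likelihood: memory responses are von Mises around zero error,
    # guesses are uniform on the color wheel
    p, kappa = params
    like = p * stats.vonmises.pdf(errors, kappa) + (1 - p) / (2 * np.pi)
    return -np.log(like).sum()

fit = optimize.minimize(neg_log_lik, x0=[0.5, 5.0],
                        bounds=[(1e-3, 1 - 1e-3), (1e-2, 100.0)])
p_hat, kappa_hat = fit.x
print(f"estimated P(in memory) = {p_hat:.2f}, concentration = {kappa_hat:.1f}")

The worry raised above is visible here: guesses that happen to land near zero error get absorbed into the memory component, inflating its estimated spread and understating precision.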

A second caveat is that some of the most important findings here are actually null results: no differences in precision between neutral & invalidly cued trials (used to dissociate the two slots models), no differences in precision between set sizes of 3 and 6 squares (used to show that resources cannot be deployed past slot capacity), and memory precision not falling outside the range predicted by either slot model.

Related Posts:
Filtering Perception to Save Memory.
Developmental Dissociations in Prefrontal Cortex: Maintenance vs. Manipulation (of working memory).
