Tracing a Critical Path Through Human Memory

To enhance any system, one first needs to identify its capacity-limiting factor(s). Human cognition is a highly complex and multiply constrained system, consisting of both independent and interdependent capacity limitations. These "bottlenecks" in cognition are reviewed below as a coherent framework for understanding the plethora of cognitive training paradigms currently associated with enhancements of working memory, executive function, and fluid intelligence (1, 2, 3, 4, 5, 6, 7, 8, 9, 10; cf. 11, 12, 13).

By far, the most common complaint about limitations in cognition is something along the lines of "I have a bad memory." As described below, this is probably because what we call "memory" is the emergent result of multiple constraints operating in parallel. To improve memory, we need to understand what these parallel operations are, and how they may constrain the emergence of memory.

At the coarsest level, we can distinguish long-term from short-term memory. Evidence for this distinction can be observed pharmacologically (with midazolam, among other drugs), clinically (Patient H.M., for example), neurally (long-term memory seems to require the hippocampus, whereas short-term memory may not), and behaviorally (long-term memory has an essentially unlimited capacity, whereas short-term memory is limited to 7 items [or 4, or 1 - the jury is still out]). Thus, this basic distinction is supported by a variety of evidence as well as common wisdom (but see this).

Although long-term memory is basically unlimited in capacity (according to most theories, anyway), it is temporally limited: after a certain amount of time has elapsed, or a certain number of contextual changes have been encountered, stored information may become inaccessible. There are some reasons to believe that information stored in long-term memory is never truly lost, but rather becomes so difficult to retrieve into short-term memory via attention (what Unsworth & Engle have called "cue specification") that it is effectively inaccessible. Thus, at least for the moment, let's suppose that long-term memory has no inherent limitations, but rather inherits its limitations from those of short-term memory.

Some theorists have persuasively reconceptualized the human memory architecture such that short-term memory is simply the activated portion of long-term memory. I think they are probably right, but for our purposes the distinction is mostly irrelevant: all information entering long-term memory must still "pass through" short-term memory and is therefore constrained by it. Accordingly, these reconceptualizations of memory architecture tend to focus on the capacity limitations of short- rather than long-term memory.

Within short-term forms of memory, we can further distinguish between "working" and "iconic/echoic" memory. Iconic/echoic memory can be thought of as the "trace" of stimulation that reverberates for a short time in sensory cortex after the stimulus is no longer present in the environment. A subset of this sensory information can be placed into working memory and thereby preserved across time. Thus, while iconic/echoic memory is capacity-unlimited, it is temporally-limited; working memory, in contrast, is capacity-limited but (theoretically) temporally-unlimited.
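To make these complementary constraints concrete, here is a toy sketch (purely illustrative; the lifetime and slot-count values are placeholders of my own, not estimates from the literature): an iconic buffer that registers everything but loses each trace after a few time steps, feeding a working memory that holds only a few items but holds them indefinitely.

```python
from collections import deque

ICONIC_LIFETIME = 3   # time steps before a sensory trace fades (illustrative)
WM_SLOTS = 4          # working memory capacity limit (illustrative)

iconic = {}                              # stimulus -> onset time; unlimited capacity
working_memory = deque(maxlen=WM_SLOTS)  # oldest item displaced when full; no decay

def step(t, stimuli, attended):
    """Register new stimuli in the iconic buffer, drop faded traces, and
    move attended items (if their traces survive) into working memory."""
    for s in stimuli:
        iconic[s] = t
    for s in [s for s, onset in iconic.items() if t - onset >= ICONIC_LIFETIME]:
        del iconic[s]                    # temporal limit on iconic memory
    for s in attended:
        if s in iconic and s not in working_memory:
            working_memory.append(s)     # capacity limit on working memory

for t, (stimuli, attended) in enumerate([("ABCDE", "AB"), ("FGH", "F"), ("", "C")]):
    step(t, stimuli, attended)
print(sorted(iconic), list(working_memory))
```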

So, roughly speaking, both forms of short-term memory are bottlenecked. Were it possible to lengthen iconic memory - the time that environmental stimulation reverberates in sensory cortex - the disadvantages of such "temporal blurring" might supersede its advantages. Maybe precisely this kind of temporal expansion in iconic memory is the underlying cause of eidetic or so-called "photographic" memory (note, however, that the existence of true photographic memory is highly controversial: only one person has ever been reported to have passed the most rigorous test of photographic memory, and that person has since disappeared from the scientific literature. This coincidence may be indicative of fraud.)

In contrast, the capacity limitations of working memory may be a "safer" target for enhancement, because working memory is under conscious control and is unlikely to maladaptively interfere with more basic cognitive functions.

The first limiting component of working memory is the selection of important information currently represented in iconic memory, or in sensory cortex, possibly via a ventral attentional network which monitors for important stimuli. Fascinating work from Lavie and colleagues has shown that there are dissociable capacity effects at this "selection from perception" level; more recent work suggests that these perceptual load effects may be related to an object-based limit of approximately 4 items (or features) in the intraparietal sulcus.

Selected information is then made accessible for processing in working memory, first via an "updating" process that probably relies on the basal ganglia.

Information that has been selected and updated can then be preserved for further processing through a process known as "binding" (Cowan et al.) or "consolidation into working memory" (Vogel et al.) in one of several ways:

a) via sustained attention (which may itself require what Unsworth & Engle call a "focus switch"),
b) via rapid storage within an "episodic" or long-term memory buffer (requiring only transient attention, as suggested by "dual mechanisms of control" theory), or
c) via both mechanisms simultaneously (as suggested by the fact that, in the absence of long-term memory support, working memory span is capacity-limited to 1 item).

The format of this preserved information appears to be a series of slots, each of which has a discrete resolution; these slots may be related to the posterior parietal cortices. Some fascinating work suggests that slot resolution and slot capacity may reflect characteristics of the inferior and superior parietal cortex, respectively.

(Orthodoxy in cognitive psychology holds that two "slave systems" [the "inner scribe" and the "phonological loop"] can be used to rehearse this information, although there are a number of problems with this idea and these two systems have not been well supported by more recent work in cognitive neuroscience.)

The components of working memory described in the preceding paragraphs are all potential targets for cognitive training and enhancement. They are, in order (with a toy pipeline sketch after the list):


  1) selection (filtering of perceptual information by posterior parietal cortex)
  2) updating (basal ganglia-mediated entry of sensory information into working memory)
  3) focus switching (switching attention among items in working memory)
  4) sustained or transient attention (aka "proactive or reactive control")
  5) binding (parietally or hippocampally mediated; see this discussion)
  5a) storage in an episodic or long-term memory buffer (hippocampally mediated)
  5b) storage in a short-term memory buffer (parietally mediated)
  6) number of working memory "slots" (Zhang & Luck's term)
  7) resolution of working memory "slots" (Zhang & Luck's term)
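To tie these components together, here is a toy pipeline (purely illustrative: the stage comments map onto the numbered components above, but the parameters, function names, and logic are placeholders of my own rather than claims about any specific model).

```python
WM_SLOTS = 4            # 6) number of slots (one common estimate)
SLOT_PRECISION = 1.0    # 7) per-slot resolution, in arbitrary units

def encode(sensory_items, goal_relevant, working_memory):
    selected = [x for x in sensory_items if x in goal_relevant]   # 1) selection
    for item in selected:                                         # 2) updating
        if len(working_memory) < WM_SLOTS:
            # 4/5) maintenance via sustained attention and/or rapid storage in
            #      an episodic buffer; both collapse into a single "binding" here
            working_memory.append((item, SLOT_PRECISION))         # 5) binding
    return working_memory

def focus_switch(working_memory, index):
    # 3) focus switching: bring one stored item into the focus of attention
    return working_memory[index] if index < len(working_memory) else None

wm = encode("ABCDEFG", goal_relevant="ACE", working_memory=[])
print(wm, focus_switch(wm, 1))
```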

Now, which of these bottlenecks can be alleviated?

As reviewed over the past few weeks, updating can be enhanced with n-back training, but this may not transfer widely to other tasks; "focused attention" may be made more efficient with mindfulness techniques (as assessed via the "orienting" and "conflict monitoring" components of the ANT task and via the attentional blink) but might not be parallelized or expanded via practice (as indicated by constant "focus switch costs" as a function of practice on the n-back). Somewhat counterintuitively, focus switch costs are unrelated to working memory capacity and fluid intelligence; thus the immutability of focus switch costs is not a lost opportunity for enhancing executive control and intelligence.
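(For readers unfamiliar with the n-back task mentioned above: a stream of items is presented one at a time, and the participant responds whenever the current item matches the one presented n items earlier, so every trial forces an update of the maintained set. Below is a minimal sketch of that contingency; the letter set and match rate are arbitrary choices of mine, not parameters from any published training study.)

```python
import random

def nback_stream(n=2, n_trials=20, alphabet="BCDFGHJKLM", match_rate=0.3):
    """Generate a letter stream for an n-back task; a trial is a target when
    the current letter matches the one presented n trials earlier."""
    stream = []
    for t in range(n_trials):
        if t >= n and random.random() < match_rate:
            stream.append(stream[t - n])           # force an n-back match
        else:
            stream.append(random.choice(alphabet))
    targets = [t >= n and stream[t] == stream[t - n] for t in range(n_trials)]
    return stream, targets

letters, is_target = nback_stream(n=2)
print(list(zip(letters, is_target)))
```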

Interestingly, although the more reactive or retrospective mechanism of working memory is the newest to be proposed, it is probably the mechanism behind the majority of the oldest memory training techniques, which encouraged strategies for "chunking". (Such strategies thus target the encoding of information; might future work reveal undiscovered techniques for enhancing retrieval?)

As far as I know, there are no reports of improvements in "selection efficiency" as operationalized in the Luck & Vogel paradigm, nor of whether "selection efficiency" changes as a function of more generalized working memory training. It is also unclear whether selection efficiency is related to "updating" (and the basal ganglia, as suggested by McNab & Klingberg's work) or to the "alerting" / VAN system, but both have been shown to be plastic with training. In all likelihood, "updating" and "alerting" work in conjunction, with "alerting" being accomplished by a ventral attentional network which then triggers a basal ganglia "updating" process (as supported by some analyses of the P3 response).

In addition, no work has focused on enhancing the resolution of working memory's discretely precise "slots", nor on whether the number of those slots appears to increase with practice. This is probably because Zhang & Luck's mixture-model method is simply too new to have been used in these paradigms.
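(For the curious, the mixture-model method treats the distribution of recall errors on a circular feature dimension, such as color, as a weighted mixture of responses from memory - a von Mises distribution centered on the true value, whose concentration kappa indexes resolution - and uniform random guesses, with the probability of an item being in memory tied to slot number K and set size N as roughly min(K/N, 1). Here is a minimal sketch of that density; the parameter values are illustrative only.)

```python
import numpy as np

def mixture_pdf(error, p_mem, kappa):
    """Zhang & Luck-style mixture model of recall error (in radians): with
    probability p_mem the response comes from memory (von Mises centered on
    the true value, concentration kappa); otherwise it is a uniform guess."""
    von_mises = np.exp(kappa * np.cos(error)) / (2 * np.pi * np.i0(kappa))
    guess = 1.0 / (2 * np.pi)
    return p_mem * von_mises + (1 - p_mem) * guess

# e.g., K = 3 slots and N = 6 items: p_mem = min(3/6, 1) = 0.5
errors = np.linspace(-np.pi, np.pi, 181)
density = mixture_pdf(errors, p_mem=min(3 / 6, 1.0), kappa=8.0)
```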

Different areas of the brain are thought to underlie the two forms of information preservation in working memory discussed above (continuous "focused attention" in the service of active maintenance, and "rapid storage" in the service of a more reactive/retrospective form of working memory). While the same basic cortico-striatal architecture probably subserves both mechanisms, parietal cortex may be particularly important for the form of working memory that involves active maintenance and continuous, focused attention, whereas the hippocampus may be more involved in the reactive or retrospective form. In fact, computational models suggest the two regions may interact or coordinate their activity. (The reactive/retrospective form of working memory might also [or alternatively] involve a form of cortical "fast weights," similar to those underlying iconic memory. On this view, reactive/retrospective working memory could involve the "reconstruction" of a memory from decaying memory traces, independent of hippocampal pattern completion.)

Finally, attention training (whether of sustained or transient attention) has been the focus of some of the most successful and long-standing cognitive training paradigms: those carried out by Posner and colleagues. One can speculate that more recently discovered cognitive training techniques also work directly on attention itself, perhaps increasing its efficiency by increasing its strength or coherence (see, for example, this previous post on the tightening of neuronal temporal tuning as a result of practice, and this work suggesting that uniformity in reaction time distributions subsumes and surpasses the predictive variance in IQ captured by elementary cognitive tasks).

In summary, human memory is a heterogeneous entity with multiple constraints operating both in parallel and in series. This does not, however, preclude a coherent framework for understanding the multiple behavioral training methods that are being discovered to enhance cognition.


Great post, very clear.

I agree that conceptually, relaxing the temporal constraints on iconic memory would potentially reduce its usefulness. But I wonder to what extent all cognitive "enhancements" result in a zero-sum game.

For example, it is common in our field to regard the 7/4/1 capacity constraint on working memory to be a limitation. Less common, however, is the idea that perhaps working memory constraints are adaptive. For example, it may be that within our own cognitive architecture, it is simply more efficient to base behavior on relatively less data.

Imagine a system without constraints. Simple problems, like "what will I attend to," may become intractable when all options are considered, but would likely pose no problem when only a few options can be considered simultaneously.

I choose to view cognitive bottlenecks and capacity limitations as adaptive, because in most real-world scenarios, satisficing is incredibly effective (e.g. 'Heuristics'). An appropriate system, then, is one that encourages satisficing and rapid decision-making over slower, optimal decision-making.

Granted, my perspective is skewed by my own focus on visual attention, but I think it is important to consider that in general, we work REALLY well with the constraints we're given. It is possible that we'd work less-well without them.

I agree, a great post. Makes me wonder if there will ever be a day when we take cognitive enhancement classes in grade school. I'm a little bit of a layman on these topics, so maybe I missed it in the article, but are there any drug-based routes, as opposed to behavioral training ones, being explored that one could be optimistic about?

Excellent post, Chris. It was a delight to read a well constructed and very readable update on a topic that should, and does, fascinate us all. You have potential.

Let us hear from you often!

Chris, what a superb post. You could also add the work on information processing (more perception-based) and on attentional control, among others. The implication is that different people may have different bottlenecks, addressable or not, which is why, in my view, high-quality yet scalable cognitive assessments will come in very handy for measuring baselines and prioritizing interventions.