I had an interesting experience recently, which made me reflect on what I now think is an overly simplistic model of memory – namely, the very common binary way of thinking about working memory and long-term memory.

Here’s what transpired:

My son recently asked for help with A-level maths homework. It was a bearings question, which I couldn’t immediately see how to solve. There didn’t seem to be enough information in the question to use basic trig, so I asked to see the textbook to get some context.

The chapter heading was ‘Sine and Cosine Rules’. I could see that the sine rule wouldn’t work because we did not have any suitable side/angle pair. But the cosine rule would be perfect – except I couldn’t remember it.

By ‘couldn’t remember’, I mean I couldn’t remember the formula, though I had a vague memory of using ‘the cosine rule’ in the past.

Having skim-read the worked example in the textbook, I quickly saw how to solve the problem we were working on. This made me think – could I be said to have ‘learnt’ the cosine rule all those years ago, or not? If we accept @diasychristo’s excellent definition of learning as ‘a change in long-term memory’, then had my long-term memory changed sufficiently to qualify as having learnt it?

So was the cosine rule ever really in my long-term memory? After some careful thought, I would say yes: after a very short time spent looking at the textbook, I was able to recall and use a rule that had lain dormant for so many years, and answer the question. This started me hypothesising that long-term memory actually has different levels of ‘depth’, ‘dormancy’ and ‘permanence’.
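For anyone whose memory of it is as dormant as mine was: the cosine rule relates the three sides of any triangle to one of its angles, c² = a² + b² − 2ab·cos(C), where C is the angle between sides a and b. A minimal sketch in Python (the function name and example values are my own, not from the textbook question):

```python
import math

def third_side(a, b, angle_c_deg):
    """Cosine rule: c^2 = a^2 + b^2 - 2ab*cos(C),
    where C is the angle enclosed by sides a and b."""
    c_rad = math.radians(angle_c_deg)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(c_rad))

# Two sides of 5 and 7 enclosing a 60-degree angle:
print(round(third_side(5, 7, 60), 3))  # sqrt(39), about 6.245
```

With a 90-degree angle the cos term vanishes and the rule collapses to Pythagoras – which is one way to resurrect it when, like me, you can only half remember it.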

For example, I am sure I will never forget how to square a number, yet I had forgotten the cosine rule almost entirely. I will never forget the date of the Battle of Hastings, yet I sometimes forget the details of the battle itself.

In this case I was able to use and apply the cosine rule far faster than if I had never met it, yet effortless recall was not available to me. This suggests there is a grey area: memory is not easily split into ‘just’ working memory and long-term memory. @pepsmccrae explains in his book that long-term memory is more a forest than a library, and today’s experience would seem to back this up. It also suggests that everything needs revisiting, even things we appear to have previously learnt, understood and mastered.

In recent years, we have all become more aware of the need to space practice and encourage retrieval over re-reading in order to increase permanence. But I wonder if there is more to it than just permanence? Ebbinghaus’s Forgetting Curve is something with which all teachers are familiar, if not by name then certainly by effect. How wonderful it would be if we were able to make some inroads into depth, dormancy and permanence, rather than just decay itself.
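One common formalisation of Ebbinghaus’s curve models retention as exponential decay, R = e^(−t/S), where S is a ‘stability’ that grows with each successful retrieval. The model, variable names and numbers below are my own illustrative assumptions, not Ebbinghaus’s original data:

```python
import math

def retention(t_days, stability):
    """A common formalisation of the forgetting curve:
    R = e^(-t/S). Larger stability S (built up through
    spaced retrieval) means a flatter, slower decay."""
    return math.exp(-t_days / stability)

# The same one-week gap hurts far less once stability has grown:
for s in (1, 5, 20):
    print(f"stability {s:>2}: retention after 7 days = {retention(7, s):.2f}")
```

On this model, spaced retrieval practice doesn’t just top retention back up; it increases S, which is perhaps one way of putting numbers on ‘permanence’ as distinct from ‘decay’.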

I realise this raises more questions than it answers. For example:

• At what point does something move into long-term memory?
• And how do we decide if it is there?
• What do we as teachers really usefully learn if we test students on what they have just done?
• Are depth, dormancy and permanence real/useful ways to describe the quality of information that our brains have stored? If so, how do we measure them?
• At what point does information move from the place where 1066 is stored to the place where the cosine rule is stored?
• How many spaced retrieval practices does it take to move this information?
• Does ‘information decay’ ever reach 100%?
• How do we best slow down this decay rate?

I look forward to exploring these ideas in the coming weeks and months. This all has HUGE implications for us as teachers. It feels as though I’m on an interesting journey at any rate.

I’ll keep you posted!