Brain-training computer games “do not make users any smarter”, according to The Daily Telegraph. Various other news sources reported that popular celebrity-endorsed games are no more effective at boosting intelligence than spending time surfing the internet.
These news articles are based on a well-conducted study which looked at the effects of six weeks of computerised brain-training (cognitive-training) tasks. These tasks aimed to improve skills in reasoning, memory, planning, attention and visual and spatial (visuospatial) awareness. The study compared changes in test performance in two groups who performed different brain-training activities with a third group who surfed the internet, looking for the answers to quiz questions. All three groups showed small improvements in post-training tests. This suggests that the improvements were simply due to familiarity with the test procedure. The brain-training groups did not transfer the skills they had learned: they showed no improvement in test areas that they had not been trained in.
The study’s strengths include its design and large size. The researchers used recognised tests that were considered accurate to assess cognitive function. However, one limitation of this research is that a large proportion of the participants dropped out of their online training programme. Overall, the research suggests that there are no cognitive benefits from short-term use of brain-training games, although other research will need to test their long-term effects.
This research was conducted by Dr Adrian M Owen and colleagues of the MRC Cognition and Brain Sciences Unit, King’s College London, the University of Manchester and Manchester Academic Health Science Centre. The study was supported by the Medical Research Council and the Alzheimer’s Society. It was published in the peer-reviewed scientific journal Nature.
In general, the news stories accurately reflected the research, but the Daily Mail’s claims that eating a salad or ballroom dancing have an effect on cognitive function are not based on this research.
This randomised controlled trial examined whether computerised brain-training tasks can improve cognitive function. Brain training is reportedly becoming a multimillion-pound industry, but it lacks supporting evidence. The cognitive-training tasks in this study were designed to improve reasoning, memory, planning, attention and visuospatial awareness.
This particular study has a number of strengths, including the large number of participants and a design that randomly distributed participants into the various groups. Using this type of study design to compare the online cognitive training tasks with no training is the most accurate way to assess whether the tasks have any effect on later test performance.
The researchers recruited 52,617 adults (all viewers of the BBC science programme Bang Goes the Theory) to participate in a six-week online study. The volunteers were randomised to experimental groups 1 or 2, or the control group. All three groups took part in four “benchmarking” tests to establish initial levels of cognitive ability. The four benchmarking tests were adapted from a collection of publicly available cognitive assessment tools designed and validated at the Medical Research Council Cognition and Brain Sciences Unit. These are believed to be sensitive tests of changes in cognitive function.
The first test involved grammatical reasoning and was believed to relate to general intelligence (volunteers had 90 seconds to work through as many statements as possible, saying if they were true or false). The second test involved remembering a series of digits in their correct sequence. The third test assessed visuospatial awareness and involved searching through a series of boxes to find a hidden star, then finding it again in a new test. The fourth test, called the paired-associates learning (PAL) test, is widely used to assess cognitive deterioration. It involved recognising and associating pairs of objects with each other.
The three groups (groups 1 and 2 and the control group) were assigned different programmes of training sessions, which were performed over six weeks. The computerised training sessions lasted at least 10 minutes and were given on at least three days of the week. Group 1 received training on six computerised tasks involving reasoning, planning and problem solving. Group 2 received training on six memory, attention, visuospatial-awareness and mathematical-processing tasks. The difficulty of the training tasks increased for both groups over the six weeks. The control group did not receive any formal cognitive training, but were asked five obscure general knowledge questions (related to popular culture, history and geography, for example) during each session. The control group could find the answers using online resources.
Following the six-week training programmes, the participants were tested again using the four benchmarking tests of cognitive ability. To be included in the final analysis, participants had to have completed at least two of their training sessions (on average, 24.5 sessions were completed). Of the 52,617 participants initially recruited, 11,430 completed both benchmark tests and at least two training sessions: 4,678 in group 1, 4,014 in group 2 and 2,738 in the control group. The randomised groups were of equivalent size at the start of the study, so the lower number of participants left in the control group reflects the higher drop-out in this group during training. The researchers say that this was possibly due to the lower stimulation and interest of the control tasks.
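The scale of the drop-out can be made concrete with a quick back-of-the-envelope calculation. This is only an illustrative sketch: it assumes each of the three randomised groups started at roughly a third of the 52,617 recruits, which follows from the article's statement that the groups were of equivalent size at the outset.

```python
# Illustrative retention arithmetic, assuming the 52,617 volunteers were
# split into three roughly equal groups (~17,539 each) at randomisation.
recruited_per_group = 52617 / 3

completers = {"group 1": 4678, "group 2": 4014, "control": 2738}

for group, n in completers.items():
    retention = n / recruited_per_group * 100
    print(f"{group}: {n} completers, roughly {retention:.1f}% retained")

total = sum(completers.values())
print(f"total completers: {total}")
```

Under these assumptions, roughly 27% of group 1 and 23% of group 2 completed the study, against only about 16% of the control group, which is consistent with the researchers' observation of higher drop-out among controls.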
The main outcomes examined were the differences in pre- and post-training benchmark test scores within the three groups, and the differences in scores between the groups. The researchers also looked at how performance in the tasks the participants were being trained in changed from the first time they completed them to the last time they completed them.
The researchers found that, after the training period, the effect of training was small for all groups: there was a small improvement after six weeks, and the groups improved to a similar degree. The researchers interpreted this as a marginal practice effect across the tests (i.e. the participants improved as they became more familiar with the tests).
The researchers found that, over the course of training, experimental groups 1 and 2 demonstrated the greatest improvement in the specific tasks that they had trained in. However, this was not accompanied by improved performance in other tests that they had not been trained in, even for tests expected to involve similar brain functions.
The members of the control group also improved in their ability to answer obscure general knowledge questions, although this improvement was smaller than the task-specific improvements seen in the other groups. The number of training sessions attended had only a negligible effect on the improvements seen.
The researchers concluded that their results provide “no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults”. This was the case for both general cognitive training (involving tests of memory, attention, visuospatial processing and mathematics, similar to many tests found in commercial brain-training tests) and for more focused cognitive training involving tests of reasoning, planning and problem solving. The results also suggested that training-related improvements did not transfer to other tasks that use similar cognitive functions.
This well-conducted study investigated the effects on cognitive function of cognitive-training tasks, aimed at improving reasoning, memory, planning, attention and visuospatial awareness. The researchers found that performance in four benchmarking tests was slightly improved after six weeks of training activities. Improvements were similar across the two cognitive-training groups and the control group, who were only asked obscure general knowledge questions as their training. This suggests that the improvements seen may be due to a practice effect from repeating the test. In other words, people tend to do better on a test if they have done it before.
Even though the two experimental groups showed the greatest improvement in the specific tasks that they had trained in, the key question is whether training exercises can improve performance in other tasks or in general cognitive functioning. This study found no evidence that this was the case, with no improvements in tasks that the participants had not been trained in.
This study had several strengths, principally its large size and randomised controlled design. The benchmarking tests used to assess cognitive function have also been shown to be valid tests with the ability to detect changes in cognitive function in both healthy people and those with disease. However, the degree of drop-out in the control group (through lack of participation in the control training sessions) is a limitation of this study.
Although the online cognitive training did not provide any real evidence of benefit to cognitive function in the short term over six weeks, many people would be interested in whether brain training could help stave off cognitive decline and dementia, a question not addressed by the current study. To address this question, a study would need to administer the training over a prolonged period of years and follow participants up for a long time, which is likely to be impractical.