> Two studies published in 2012 failed to reproduce the effect of dual n-back training on fluid intelligence. These studies found that the effects of training did not transfer to any other cognitive ability tests.[12][13] In 2014, a meta-analysis of twenty studies showed that n-back training has a small but significant effect on Gf, improving it on average by the equivalent of 3-4 IQ points.[14] In January 2015, this meta-analysis was the subject of a critical review due to small-study effects.[15] The question of whether n-back training produces real-world improvements to working memory remains controversial.
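As an aside, the "3-4 points of IQ" equivalence is just a standardized effect size rescaled to the conventional IQ metric (SD = 15). A minimal sketch of that conversion, with illustrative d values (not figures taken from the meta-analysis):

```python
# Illustrative only: converting a standardized effect size (Cohen's d)
# into IQ-point equivalents, assuming the conventional IQ SD of 15.
IQ_SD = 15

def d_to_iq_points(d):
    """Rescale a standardized mean difference to IQ points."""
    return d * IQ_SD

# A small-to-medium effect of d = 0.2-0.27 corresponds to roughly
# the 3-4 IQ points mentioned above.
print(d_to_iq_points(0.2))   # 3.0
print(d_to_iq_points(0.27))  # ≈ 4.05
```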
> I meta-analyze the >19 studies up to 2016 which measure IQ after an n-back intervention, finding (over all studies) a net gain (medium-sized) on the post-training IQ tests.
The size of this increase in IQ test scores correlates highly with the methodological question of whether a study used an active or a passive control group. This suggests that the medium effect size is driven by methodological problems: n-back training does not increase subjects' underlying fluid intelligence; rather, the gains reflect a motivational effect, with passive control groups (who did not train on anything) not trying as hard as the n-back-trained experimental groups on the post-tests. The remaining studies using active control groups find a small positive effect (though this may be due to matrix-test-specific training, undetected publication bias, smaller motivational effects, etc.)
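The moderator comparison described above can be sketched as a toy subgroup meta-analysis: pool the effect sizes separately for active- and passive-control studies using inverse-variance weighting. All numbers below are invented for illustration; they are not the actual study estimates:

```python
# Hypothetical illustration (made-up numbers, not the real studies):
# inverse-variance (fixed-effect) pooling of standardized effect sizes,
# split by whether the study used an active or a passive control group.
from collections import defaultdict

# (effect size d, variance of d, control type) -- invented for illustration
studies = [
    (0.60, 0.04, "passive"),
    (0.45, 0.05, "passive"),
    (0.55, 0.06, "passive"),
    (0.10, 0.04, "active"),
    (0.15, 0.05, "active"),
    (0.05, 0.06, "active"),
]

def pooled_effect(subset):
    """Inverse-variance weighted mean: sum(w*d)/sum(w), with w = 1/var."""
    weights = [1.0 / var for _, var, _ in subset]
    return sum(w * d for (d, _, _), w in zip(subset, weights)) / sum(weights)

groups = defaultdict(list)
for s in studies:
    groups[s[2]].append(s)

for control, subset in groups.items():
    # With these toy numbers: passive ≈ 0.54, active ≈ 0.10 --
    # the pattern the moderator analysis is pointing at.
    print(control, round(pooled_effect(subset), 3))
```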
I also investigate several other n-back claims, criticisms, and indicators of bias, finding:
- payment reducing performance claim: possible
- dose-response relationship of n-back training time & IQ gains claim: not found
- kind of n-back matters claim: not found
- publication bias criticism: not found
- speeding of IQ tests criticism: not found
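On the publication-bias point, a standard check is Egger's regression test for funnel-plot asymmetry: regress each study's z-score (d / SE) on its precision (1 / SE), and look at the intercept. A simplified, unweighted sketch (real Egger tests use weighted regression and a significance test on the intercept; the effect sizes and standard errors below are made up):

```python
# Simplified sketch of Egger's test for small-study effects: regress
# z = d/SE on precision = 1/SE via plain OLS and inspect the intercept.
# An intercept far from zero suggests funnel-plot asymmetry.

def egger_intercept(effects, ses):
    """OLS of (d/SE) on (1/SE); return the intercept."""
    ys = [d / se for d, se in zip(effects, ses)]
    xs = [1.0 / se for se in ses]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - slope * mx

# Made-up example where small studies (large SE) report proportionally
# larger effects -- the classic small-study-effect signature.
effects = [0.8, 0.6, 0.4, 0.3, 0.2]
ses = [0.40, 0.30, 0.20, 0.15, 0.10]
print(egger_intercept(effects, ses))  # → 2.0 (far from 0: asymmetry)
```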
That's curious to know, so I thank you for the reference, yet I don't care much, as it made and still makes a huge difference for me personally. Perhaps that's because it may really be effective only for a portion of people with specific conditions in specific circumstances (e.g. I have ADHD, and I also take nootropics meant to increase neuroplasticity, neurogenesis, and cerebral blood flow, so the result is synergistic; nootropics alone don't give so profound a boost), or maybe what gets improved is not working memory but something else that plays an important role in cognition. By the way, it has seemingly also improved my emotional intelligence, not just the analytical kind - I didn't expect that.
I don't really measure it, but I also wouldn't label it "an intuitive feeling" - it's an observation (not just a feeling, like when you're stoned and feel like a genius), although not a scientifically credible one. I have a well-developed self-observation skill and can always tell when my cognition is more or less efficient (for an extreme example: avoid sleeping for some 36 hours, drink some booze, and try to be smart - you'll notice it's hard and you don't do well, no measurement needed). I believe I could measure it if I knew a good way [to measure fluid intelligence] and was interested enough, but that's not the case.
So, I recommend that everybody try n-back exercises for some time [and see if it helps them], and I usually mention that it's kind of proven to help scientifically (at least in some papers) and empirically, but I avoid saying its efficacy is anything close to an unquestionable fact. As a result, people (including some who are already very smart and educated) immediately report feeling as if stains have been removed from their brains and their mental gears have been greased.
> I usually mention that's kind of proven to help scientifically (at least in some papers) and empirically but avoid saying its efficiency is a strict fact.
All good, but I must disagree with "kind of proven". Things can be either proven or not proven, and this is not proven.
> Things can be either proven or not proven, and this is not proven.
On the contrary, nothing in science is ever proven, only disproven. We can have less or more confidence in a particular model; and whether you should use a particular model depends on the cost/benefit analysis.
In this case, there is some evidence that it might be helpful, although somewhat disputed; but playing the game certainly doesn't hurt, and doesn't cost much, so it might make sense to give it a try.
I've read the original paper by Jaeggi et al., one paper replicating it successfully, and one paper showing that the results transfer to different tasks. At that point, wasn't it logically correct to say it was scientifically proven? Now we know there are papers calling those proofs into question. That's what I mean.
If only we had a bullet-proof test to measure fluid intelligence objectively (classic IQ tests have a number of well-known problems: solving math problems and puzzles is known to be much more of a skill, and working memory is working memory - not intelligence itself)...
(The Wikipedia passage quoted at the top is from https://en.wikipedia.org/wiki/N-back.)