What AI Can Tell Us About Intelligence

Can deep learning systems learn to manipulate symbols? The answers might change our understanding of how intelligence works and what makes humans unique.

If there is one constant in the field of artificial intelligence it is exaggeration: there is always breathless hype and scornful naysaying. It is helpful to occasionally take stock of where we stand.

The dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data. Since their inception, critics have prematurely argued that neural networks had run into an insurmountable wall — and every time, it proved a temporary hurdle. In the 1960s, they could not solve non-linear functions. That changed in the 1980s with backpropagation, but the new wall was how difficult it was to train the systems. The 1990s saw a rise of simplifying programs and standardized architectures which made training more reliable, but the new problem was the lack of training data and computing power.

In 2012, when contemporary graphics cards could be trained on the massive ImageNet dataset, DL went mainstream, handily besting all competitors. But then critics spied a new problem: DL required too much hand-labelled data for training. The last few years have rendered this criticism moot, as self-supervised learning has resulted in incredibly impressive systems, such as GPT-3, which do not require labeled data.

Today’s seemingly insurmountable wall is symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.). Gary Marcus, author of “The Algebraic Mind” and co-author (with Ernie Davis) of “Rebooting AI,” recently argued that DL is incapable of further progress because neural networks struggle with this kind of symbol manipulation. By contrast, many DL researchers are convinced that DL is already engaging in symbolic reasoning and will continue to improve at it.
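The grade-school procedure described above really is a purely symbolic algorithm: discrete tokens, strict rules, fixed order. A minimal sketch of the addition version (an illustrative helper, not anything from the article):

```python
def add_by_columns(a: str, b: str) -> str:
    """Add two non-negative integers written as digit strings, working the
    rightmost column first and carrying the extra value to the left --
    the strict, step-by-step rule we learn as children."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))  # leftover carry becomes a new column
    return "".join(reversed(digits))

print(add_by_columns("478", "95"))  # prints 573
```

The point of the example is that every step is discrete and rule-governed: there is nothing to approximate and no tolerance for being slightly off, which is exactly the regime where neural networks are said to struggle.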

At the heart of this debate are two different visions of the role of symbols in intelligence, both biological and mechanical: one holds that symbolic reasoning must be hard-coded from the outset and the other holds it can be learned through experience, by machines and humans alike. As such, the stakes are not just about the most practical way forward, but also how we should understand human intelligence — and, thus, how we should pursue human-level artificial intelligence.

Kinds Of AI

Symbolic reasoning demands precision: symbols can come in a host of different orders, and the difference between (3-2)-1 and 3-(2-1) is important, so performing the right rules in the right order is essential. Marcus contends this kind of reasoning is at the heart of cognition, essential for providing the underlying grammatical logic to language and the basic operations underlying mathematics. More broadly, he holds this extends into our more basic abilities, where there is an underlying symbolic logic behind causal reasoning and reidentifying the same object over time.

The field of AI got its start by studying this kind of reasoning, typically called Symbolic AI, or “Good Old-Fashioned” AI. But distilling human expertise into a set of rules and facts turns out to be very difficult, time-consuming and expensive. This was called the “knowledge acquisition bottleneck.” While it is simple to program rules for math or logic, the world itself is remarkably ambiguous, and it proved impossible to write rules governing every pattern or to define symbols for vague concepts.

This is precisely where neural networks excel: discovering patterns and embracing ambiguity. Neural networks are a collection of relatively simple equations that learn a function designed to provide the appropriate output for whatever is inputted to the system. For example, training a visual recognition system will ensure all the chair images cluster together, allowing the system to tease out the vague set of indescribable properties of such an amorphous category. This allows the network to successfully infer whether a new object is a chair, simply by how close it is to the cluster of other chair images. Doing this with enough objects and with enough categories results in a robust conceptual space, with numerous categories clustered in overlapping but still distinguishable ways.
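A toy version of this cluster-based inference, with made-up two-dimensional “image features” standing in for a learned embedding space (all the labels and numbers here are illustrative, not from the article):

```python
import math

# Hypothetical 2-D feature vectors for images the system has already seen.
examples = {
    "chair": [(1.0, 1.1), (0.9, 1.3), (1.2, 0.8)],
    "table": [(4.0, 3.9), (4.2, 4.1), (3.8, 4.3)],
}

def centroid(points):
    """Average position of a category's cluster in feature space."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(point):
    """Assign a new image to whichever category cluster it sits closest to."""
    centroids = {label: centroid(pts) for label, pts in examples.items()}
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

print(classify((1.1, 1.0)))  # prints chair
```

A real network learns the feature space itself rather than being handed coordinates, but the inference step — “how close is this new point to the chair cluster?” — is the same idea.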

“At stake are questions not just about contemporary problems in AI, but also questions about what intelligence is and how the brain works.”

These networks can be trained precisely because the functions implemented are differentiable. Put differently, if Symbolic AI is akin to the discrete tokens used in symbolic logic, neural networks are the continuous functions of calculus. This allows for slow, gradual progress by tweaking the variables slightly in the direction of learning a better representation — meaning a better fit between all the data points and the numerous boundaries the function draws between one category and another. This fluidity poses problems, however, when it comes to strict rules and discrete symbols: when we are solving an equation, we usually want the exact answer, not an approximation.  
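That “slow, gradual progress by tweaking the variables slightly” is gradient descent in miniature. A sketch fitting a single parameter of a differentiable function to toy data (the data points and learning rate are invented for illustration):

```python
# Fit y = w * x to toy data by repeatedly nudging w against the
# gradient of the squared error -- the continuous, calculus-style
# learning that discrete symbol manipulation lacks.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
w, lr = 0.0, 0.02

for _ in range(200):
    # d/dw of sum((w*x - y)^2) is sum(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad  # a small step downhill

print(round(w, 2))  # settles near 2.04 -- close to the true slope, not exact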

Since this is where Symbolic AI shines, Marcus recommends simply combining the two: inserting a hard-coded symbolic manipulation module on top of a pattern-completion DL module. This is attractive since the two methods complement each other well, so it seems plausible a “hybrid” system with modules working in different ways would provide the best of both worlds. And it seems like common sense, since everyone working in DL agrees that symbolic manipulation is a necessary feature for creating human-like AI.

But the debate turns on whether symbolic manipulation needs to be built into the system, where the symbols and capacity for manipulating are designed by humans and installed as a module that manipulates discrete symbols and is consequently non-differentiable — and thus incompatible with DL. Underlying this is the assumption that neural networks can’t do symbolic manipulation — and, with it, a deeper assumption about how symbolic reasoning works in the brain.

Symbolic Reasoning In Neural Nets

This assumption is very controversial and part of an older debate. The neural network approach has traditionally held that we don’t need to hand-craft symbolic reasoning but can instead learn it: training a machine on examples of symbols engaging in the right kinds of reasoning will allow it to be learned as a matter of abstract pattern completion. In short, the machine can learn to manipulate symbols in the world, despite not having hand-crafted symbols and symbolic manipulation rules built in.

Contemporary large language models — such as GPT-3 and LaMDA — show the potential of this approach. They are capable of impressive abilities to manipulate symbols, displaying some level of common-sense reasoning, compositionality, multilingual competency, some logical and mathematical abilities, and even creepy capacities to mimic the dead. If you’re inclined to take symbolic reasoning as coming in degrees, this is incredibly exciting.

But they do not do so reliably. If you ask DALL-E to create a Roman sculpture of a bearded, bespectacled philosopher wearing a tropical shirt, it excels. If you ask it to draw a beagle in a pink harness chasing a squirrel, sometimes you get a pink beagle or a squirrel wearing a harness. It does well when it can assign all the properties to a single object, but it struggles when there are multiple objects and multiple properties. The attitude of many researchers is that this is a hurdle for DL — larger for some, smaller for others — on the path to more human-like intelligence.

“Does symbolic manipulation need to be hard-coded, or can it be learned?”

However, this is not how Marcus takes it. He broadly assumes symbolic reasoning is all-or-nothing — since DALL-E doesn’t have symbols and logical rules underlying its operations, it isn’t actually reasoning with symbols. Thus, the numerous failures in large language models show they aren’t genuinely reasoning but are merely producing a pale imitation of it. For Marcus, there is no path from the stuff of DL to the genuine article; as the old AI adage goes, you can’t reach the Moon by climbing a big enough tree. Thus he takes the current DL language models to be no closer to genuine language than Nim Chimpsky with his handful of sign-language signs. The DALL-E problems aren’t quirks of a lack of training; they are evidence the system doesn’t grasp the underlying logical structure of the sentences and thus cannot properly grasp how the different parts connect into a whole.

This is why, from one perspective, the problems of DL are hurdles and, from another perspective, walls. The same phenomena simply look different based on background assumptions about the nature of symbolic reasoning. For Marcus, if you don’t have symbolic manipulation at the start, you’ll never have it.

By contrast, people like Geoffrey Hinton contend neural networks don’t need to have symbols and algebraic reasoning hard-coded into them in order to successfully manipulate symbols. The goal, for DL, isn’t symbol manipulation inside the machine, but the right kind of symbol-using behaviors emerging from the system in the world. The rejection of the hybrid model isn’t churlishness; it’s a philosophical difference based on whether one thinks symbolic reasoning can be learned.

The Nature Of Human Thought

Marcus’s critique of DL stems from a related fight in cognitive science (and a much older one in philosophy) concerning how intelligence works and, with it, what makes humans unique. His ideas are in line with a prominent “nativist” school in psychology, which holds that many key features of cognition are innate — effectively, that we are largely born with an intuitive model of how the world works.

A central feature of this innate architecture is a capacity for symbol manipulation (though whether this is found throughout nature or whether it is human-specific is debated). For Marcus, this symbol manipulation capacity grounds many of the essential features of common sense: rule-following, abstraction, causal reasoning, reidentifying particulars, generalization and a host of other abilities. In short, much of our understanding of the world is given by nature, with learning as a matter of fleshing out the details.

There is an alternate, empiricist view which inverts this: symbolic manipulation is a rarity in nature, primarily arising as a learned capacity for communication, acquired gradually by our hominin ancestors over the last two million years. On this view, the primary cognitive capacities are non-symbolic learning abilities bound up with improving survival, such as rapidly recognizing prey, predicting their likely actions, and developing skillful responses. This assumes that the vast majority of complex cognitive abilities are acquired through a general, self-supervised learning capacity, one that acquires an intuitive world-model capable of the central features of common sense through experience. It also assumes that most of our complex cognitive capacities do not turn on symbolic manipulation; they make do, instead, with simulating various scenarios and predicting the best outcomes.

“The inevitable failure of deep learning has been predicted before, but it didn’t pay to bet against it.”

This empiricist view treats symbols and symbolic manipulation as simply another learned capacity, one acquired by the species as humans increasingly relied on cooperative behavior for success. This regards symbols as inventions we used to coordinate joint activities — things like words, but also maps, iconic depictions, rituals and even social roles. These abilities are thought to arise from the combination of an increasingly long adolescence for learning and the need for more precise, specialized skills, like tool-building and fire maintenance. This treats symbols and symbolic manipulations as primarily cultural inventions, dependent less on hard wiring in the brain and more on the increasing sophistication of our social lives.

The difference between these two views is stark. For the nativist tradition, symbols and symbolic manipulation are originally in the head, and the use of words and numerals is derived from this original capacity. This view attractively explains a whole host of abilities as stemming from an evolutionary adaptation (though proffered explanations for how or why symbolic manipulation might have evolved have been controversial). For the empiricist tradition, symbols and symbolic reasoning are a useful invention for communication, which arose from general learning abilities and our complex social world. This treats the internal calculations and inner monologue — the symbolic stuff happening in our heads — as derived from the external practices of mathematics and language use.

The fields of AI and cognitive science are intimately intertwined, so it is no surprise these fights recur there. Since the success of either view in AI would partially (but only partially) vindicate one or the other approach in cognitive science, it is also no surprise these debates are intense. At stake are questions not just about the proper approach to contemporary problems in AI, but also questions about what intelligence is and how the brain works.

What The Stakes Are — And Aren’t

The high stakes explain why claims that DL has hit a wall are so provocative. If Marcus and the nativists are right, DL will never get to human-like AI, no matter how many new architectures it comes up with or how much computing power it throws at it. It is just confusion to keep adding more layers, because genuine symbolic manipulation demands an innate symbolic manipulator, full stop. And since this symbolic manipulation is at the base of several abilities of common sense, a DL-only system will never possess anything more than a rough-and-ready understanding of anything.

By contrast, if DL advocates and the empiricists are right, it’s the idea of inserting a module for symbolic manipulation that is confused. In that case, DL systems are already engaged in symbolic reasoning and will continue to improve at it as they become better at satisfying constraints through more multimodal self-supervised learning, an increasingly useful predictive world-model, and an expansion of working memory for simulating and evaluating outcomes. Introducing a symbolic manipulation module would not lead to more human-like AI, but instead force all “reasoning” operations through an unnecessary and unmotivated bottleneck that would take us further from human-like intelligence. This threatens to cut off one of the most impressive aspects of deep learning: its ability to come up with far more useful and clever solutions than the ones human programmers conceive of.

As big as the stakes are, though, it is also important to note that many issues raised in these debates are, at least to some degree, peripheral. These include whether the high-dimensional vectors in DL systems should be treated like discrete symbols (probably not), whether the lines of code needed to implement a DL system make it a “hybrid” system (semantics), and whether winning at complex games requires hand-crafted, domain-specific knowledge or whether it can be learned (too soon to tell). There’s also the question of whether hybrid systems will help with the ethical problems surrounding AI (no).

And none of this is to justify the silliest bits of hype: current systems aren’t conscious, they don’t understand us, reinforcement learning isn’t enough, and you can’t build human-like intelligence just by scaling up. But all these issues are peripheral to the main debate: does symbolic manipulation need to be hard-coded, or can it be learned?

Is this a call to stop investigating hybrid models (i.e., models with a non-differentiable symbolic manipulator)? Of course not. People should go with what works. But researchers have worked on hybrid models since the 1980s, and they have not proven to be a silver bullet — or, in many cases, even remotely as good as neural networks. More broadly, people should be skeptical that DL is at the limit; given the constant, incremental improvement on tasks seen just recently in DALL-E 2, Gato and PaLM, it seems wise not to mistake hurdles for walls. The inevitable failure of DL has been predicted before, but it didn’t pay to bet against it.

Source: https://www.noemamag.com/what-ai-can-tell-us-about-intelligence/

Understanding the Flaws Behind the IQ Test

IQ tests are one of the most prominent tools in the modern psychologist’s toolbox. They also have numerous methodological flaws.


IQ, short for intelligence quotient, is one of the most widely cited and versatile tools in psychology, spanning thousands of studies across more than a century of research. Aiming to measure the innate intelligence of the human population, IQ tests work by aggregating the scores from several distinct tasks into a singular number representing the person’s cognitive ability. The tests also have numerous methodological flaws that we’re only just beginning to understand.

“I think IQ testing has done far more harm than good,” says British psychologist Ken Richardson, author of Understanding Intelligence. “And it’s time we moved beyond the ideologically corruptible mechanical model of IQ to a far deeper and wider appreciation of intelligence.”

Questioning Construct

Construct validity, or the degree to which a test can be shown to measure the construct it claims to measure, is one of the largest issues with using IQ as a scientific measure. According to Richardson, IQ’s construct validity doesn’t exist: “Not in the sense used in biomedical and other scientific measures, or knowing what differences on the ‘inside’ are being truly reflected in the scores on the ‘outside.’ That’s because there is little agreed-on theory of what happens on the ‘inside.’”

As discussed in a 2019 review, published in the Journal of Applied Research in Memory and Cognition, what’s commonly touted as “general intelligence” is instead little more than a statistical artifact resulting from the same brain regions being activated for similar tasks on an IQ test.

On the other hand, one of the things IQ is most famous for is the shape of its score distribution. When the scores of a group of test-takers are put on a graph, they generally follow a bell-shaped curve — what’s known as the normal distribution. It’s commonly said that IQ naturally falls this way, thus spelling proof that it’s a legitimate construct. However, it’s actually artificially constructed to have an average of 100 and a standard deviation of 15.
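The bell shape is built in by construction: raw test scores are linearly rescaled so the population lands on the chosen mean and standard deviation. A sketch of that standardization step, using invented raw scores and the conventional mean-100/SD-15 scale:

```python
import statistics

raw = [12, 15, 18, 20, 25, 27, 30, 33]  # hypothetical raw test scores

# Standardize each score, then rescale to the IQ convention:
# mean 100, standard deviation 15 -- by definition, not discovery.
mean, sd = statistics.mean(raw), statistics.pstdev(raw)
iq = [100 + 15 * (x - mean) / sd for x in raw]

print(round(statistics.mean(iq)))    # 100, by construction
print(round(statistics.pstdev(iq)))  # 15, by construction
```

Whatever the raw scores look like, this transform forces the published scale to center on 100 with a spread of 15 — which is why the tidy distribution is evidence of the scoring convention, not of an underlying natural quantity.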

There’s also the issue of the correlational validity of IQ. A common argument in favor of IQ is that it correlates highly with other constructs, such as socioeconomic status, thus proving its validity. However, this falls apart under further scrutiny. The large-scale correlation between IQ and socioeconomic status could also be said to represent the unaccounted-for influence of income and wealth upon one’s testing conditions, for example.

Likewise, Richardson and his colleagues have revealed that job performance, one of the most well-regarded correlations in intelligence testing, is empirically suspect. “The correlations wash out over time, and there is little if any correlation with later job performance or other indices of success,” he says.

In fact, the link between job performance and IQ is extremely fragile — quite a bit of the research this link rests upon is decades old and composed of a wide variety of tests that aren’t easily comparable. The job performance measures that many such studies rely upon are themselves very flawed, adds Richardson, lacking standard reliability and validity to determine that an objective criterion of job performance is being measured.

Biological Failings

And there’s a further biological case to be made against IQ’s methodological soundness; the degree to which IQ may have a genetic foundation has been widely criticized. Work conducted by behavioral geneticists primarily rests upon twin studies (which assess the degree of similarity between identical and fraternal twins in relationship to a given trait) and genome-wide association studies (GWAS), which survey the whole genome to find specific genes that could be linked to a given trait at hand.

Both of these measures, however, suffer from methodological issues. Psychologist Jay Joseph, in his book The Trouble With Twin Studies, writes in-depth about how twin studies rest on the assumption that identical and fraternal twin pairs are raised in equally similar environments — an assumption that often fails.

And according to Richardson, GWAS often have numerous environmental confounds hidden beneath the surface. Researchers frequently fall prey to the false assumption that they’ve corrected for them, while using measures that hold additional confounds in and of themselves. What this research often shows, as opposed to a direct genetic link, is another way in which social inequality gets inherited: through societal and environmental means.

Further, these methods rest heavily on heritability, a measure noted for being based on outdated models that don’t account for recent genetic findings such as epigenetics, the study of how the environment alters the way genes are expressed. “I’m not sure it rests on any genetic foundation at all,” says Kevin Bird, who recently earned his Ph.D. from Michigan State University and has focused on how IQ helps to promote scientific racism.

“The main thing they’ve found for the last several decades is that there is a correlation between similarity of IQ scores and genetic relatedness, but that tells you almost nothing,” says Bird.

IQ, then, is a measure that holds many issues, owing to its lack of construct validity, the flawed genetics research behind it, and its problematic correlations. Thanks to the work of a few scientists, we’re beginning to understand the extent to which its flaws prevail.

Source: https://www.discovermagazine.com/mind/understanding-the-flaws-behind-the-iq-test

How a Gene Mutation Causes Higher Intelligence


Summary: A rare genetic mutation that causes blindness also appears to be associated with above-average intelligence, a new study reports.

Source: University of Leipzig

Synapses are the contact points in the brain via which nerve cells ‘talk’ to each other. Disturbances in this communication lead to diseases of the nervous system, since altered synaptic proteins, for example, can impair this complex molecular mechanism. This can result in mild symptoms, but also very severe disabilities in those affected.

The interest of the two neurobiologists Professor Tobias Langenhan and Professor Manfred Heckmann, from Leipzig and Würzburg respectively, was aroused when they read in a scientific publication about a mutation that damages a synaptic protein.

At first, the affected patients attracted scientists’ attention because the mutation caused them to go blind. However, doctors then noticed that the patients were also of above-average intelligence.

“It’s very rare for a mutation to lead to improvement rather than loss of function,” says Langenhan, professor and holder of a chair at the Rudolf Schönheimer Institute of Biochemistry at the Faculty of Medicine.

The two neurobiologists from Leipzig and Würzburg have been using fruit flies to analyze synaptic functions for many years.

“Our research project was designed to insert the patients’ mutation into the corresponding gene in the fly and use techniques such as electrophysiology to test what then happens to the synapses. It was our assumption that the mutation makes patients so clever because it improves communication between the neurons which involve the injured protein,” explains Langenhan.

“Of course, you can’t conduct these measurements on the synapses in the brains of human patients. You have to use animal models for that.”

“75 percent of genes that cause diseases in humans also exist in fruit flies”

First, the scientists, together with researchers from Oxford, showed that the fly protein called RIM looks molecularly identical to that of humans. This was essential in order to be able to study the changes in the human brain in the fly. In the next step, the neurobiologists inserted mutations into the fly genome that looked exactly as they did in the diseased people. They then took electrophysiological measurements of synaptic activity.

“We actually observed that the animals with the mutation showed a much increased transmission of information at the synapses. This amazing effect on the fly synapses is probably found in the same or a similar way in human patients, and could explain their increased cognitive performance, but also their blindness,” concludes Professor Langenhan.

The scientists also found out how the increased transmission at the synapses occurs: the molecular components in the transmitting nerve cell that trigger the synaptic impulses move closer together as a result of the mutation effect and lead to increased release of neurotransmitters. A novel method, super-resolution microscopy, was one of the techniques used in the study.

“This gives us a tool to look at and even count individual molecules and confirms that the molecules in the firing cell are closer together than they normally are,” says Professor Langenhan, who was also assisted in the study by Professor Hartmut Schmidt’s research group from the Carl Ludwig Institute in Leipzig.

“The project beautifully demonstrates how an extraordinary model animal like the fruit fly can be used to gain a very deep understanding of human brain disease. The animals are genetically highly similar to humans. It is estimated that 75 per cent of the genes involving disease in humans are also found in the fruit fly,” explains Professor Langenhan, pointing to further research on the topic at the Faculty of Medicine:

“We have started several joint projects with human geneticists, pathologists and the team of the Integrated Research and Treatment Center (IFB) AdiposityDiseases; based at Leipzig University Hospital, they are studying developmental brain disorders, the development of malignant tumours and obesity. Here, too, we will insert disease-causing mutations into the fruit fly to replicate and better understand human disease.”

Source: https://neurosciencenews.com/cord7-gene-iq-20550/

The surprising ways video games can boost your (or your child’s) intelligence



  • The assumption that video games make us dumber persists in spite of research suggesting they can lead to cognitive benefits. 
  • A recent study suggests that playing games boosts intelligence in young children. 
  • Games can improve memory and enable education through play.

When psychologist Brandon Ashinoff was seven years old, his parents got him a Nintendo Entertainment System. The NES, as the console quickly became known after its 1983 release, took video games to the next level with its intuitive design and impressive hardware. It also upset parents, who feared this increasingly popular pastime would damage the psychological development of their children.

Ashinoff’s parents referred to the NES as “The Idiot Box.” Even at a young age, Ashinoff grasped the underlying meaning of this snide but seemingly harmless comment. “There was an implicit assumption,” he recounts in an article published by the journal Frontiers in Psychology, “that video games were simply a toy and nothing of real substance could be gained from them.”

Video games have been accused of many things over the years. For a long time, people feared they made us lazy, antisocial, depressed and even violent. While these suspicions have since been dispelled by the latest in game-related research, the assumption that gaming diminishes cognitive ability — especially within young children — has stayed with the general public to this day.

There are many reasons for this, one being that the link between video games and intelligence — a highly complicated variable — is difficult to study, let alone delineate in a convincing manner. Still, Ashinoff, who now works as a postdoctoral research fellow at Columbia University’s Department of Psychiatry, can name several studies that demonstrate how gaming helps rather than hurts the human brain.

Video games and child intelligence

The link between video games and intelligence is back in the news thanks to a study published recently in Scientific Reports. This study, co-authored by researchers from Germany, Sweden, and the Netherlands, looked at how various forms of screen time, from watching television to playing games, impacted cognition in children between the ages of nine and ten over a period of two years.

To make sure that the results reflected the impact of screen time and screen time alone, the researchers controlled for other variables that might influence intelligence. These included genetic effects as well as household income, parental education, and neighborhood quality — hugely influential factors for which, according to the authors, no study had previously accounted.

At the start of the two-year period, watching videos and socializing online seemed to be “linked to below-average intelligence” while gaming “wasn’t linked with intelligence at all.” When the researchers checked back in with their subjects two years later, however, they found that gaming “had a positive and meaningful effect on intelligence.”

The study concludes that “children who played more video games at ten years were on average no more intelligent than children who didn’t game,” but that they “showed the most gains in intelligence after two years, in both boys and girls. For example, a child who was in the top 17% in terms of hours spent gaming increased their IQ about 2.5 points more than the average child over two years.”

Interestingly, playing video games was the only form of screen time with a positive impact on intelligence. Spending time on social media was found to have no effect on IQ. Watching TV or online videos initially showed a positive effect. However, this effect disappeared when the researchers took parental education into account, suggesting these activities are not enriching in and of themselves.

How games impact the human brain

The study from Scientific Reports is interesting, but inconclusive. First and foremost, the researchers only looked at games in general and did not make a distinction between individual types of games. We do not know, for instance, if first-person shooters impact intelligence in the same way as platformers, puzzle games, or other genres.

What’s more, the study only investigates whether playing video games impacts intelligence, not how. If games are to be recognized as having substance in the form of educational value, researchers should be able to readily demonstrate the various ways in which they influence the brain and improve our cognitive skills. This, conveniently, is where writers like Ashinoff come in.

Distrustful parents and political activists crusading against video games tend to forget that, at their core, games are puzzles which (like actual, non-digital puzzles) require cognitive skills to solve. “Today’s games,” one academic survey reads, “demand advanced analytical, visuospatial, and problem-solving capacities and tap several other facets of cognitive resources.”

Video games also help with memorization. This has been observed anecdotally by middle school teachers when they see students with learning difficulties recite the names of more than a hundred Pokémon. But it has also been found in research, with one study showing popular games like Mario and Angry Birds can vastly improve recognition memory in adults over a short period of time.

Research in mice as well as people has shown that exposure to visually rich environments can help boost memory. Operating under the assumption that there is no difference between digital environments and real ones, the article cited above discusses using video games in memory training for individuals who — whether because of illness or pandemic restrictions — are forced to stay indoors.

Last but not least, the design of video games mimics the learning processes we encounter in school. “Many games,” writes Ashinoff, “start off with a simple tutorial level that teaches the player the basic mechanics of the game. [The] strategy, and tactics needed to complete tasks become more complex while the teaching method gradually switches from an explicit tutorial to an experience-based process.”

The gamification of pedagogy

Because of these and other parallels, video games have become particularly interesting for people working in pedagogy. In recent years, researchers across the world have teamed up with game developers to create games that are not just entertaining, but also therapeutic, playing a key role in treating conditions such as ADHD, PTSD, and depression.

One example of such non-commercial games is ScrollQuest. Developed by the GEMH Lab of Radboud University in Nijmegen, Netherlands, ScrollQuest is a multiplayer game in which players must battle monsters alongside their brothers-in-arms. GEMH designed ScrollQuest to teach adolescents to cope with rejection and social anxiety through gameplay revolving around cooperation and peer evaluation.

Commercial games can be therapeutic, too. Public health researcher Michelle Colder Carras was surprised to learn that army veterans play first-person shooters like Call of Duty to help with their PTSD. One veteran told her that playing the game helped “things make sense again.” Generally, she found that playing shooters helped ease their transition out of the military back into civilian life.

Finally, video games developed for purely educational purposes could follow in the footsteps of informative children’s programming like Sesame Street, which was originally produced to supplement the insufficient school curricula of inner-city children. Games surpass traditional early-childhood education insofar as they allow students to learn through play, thus capturing their attention and interest.

Source: https://bigthink.com/neuropsych/video-games-intelligence-children/

Two-year-old girl joins elite high-IQ society Mensa


A two-year-old recently joined the elite high-IQ society Mensa, becoming the youngest member of the group.

Mensa is a social organisation for people who score in the top two per cent on a standardised intelligence test. It currently boasts around 140,000 members from over 90 countries.

Isla McNabb, the youngest member of the society, is from Crestwood, Kentucky. According to her mother, she has always had an “affinity for the alphabet” which is how they discovered her giftedness.

“She always had an affinity for the alphabet, so we got her all kinds of blocks and magnets,” Amanda McNabb, Isla’s mother, told Spectrum News, “and I would notice that the cat would have the letter C next to it, and then I would have the letter M.”

Isla’s parents decided to take her to a child psychologist after footage from the family security camera showed her scribbling “mom” with a crayon.

The child psychologist recommended intelligence tests and the scores revealed that the girl had “scored superior in everything and very superior in the knowledge category”.

Following these results, Isla received a Mensa membership card.

Isla’s parents also said that she picked up everything “they threw her way”.

She could do some light reading and had an extensive vocabulary of around 500 words. The parents admitted, however, that they had stopped counting after 200.

“Gifted children can feel isolated”

After the membership was confirmed, Mensa’s Charles Brown assured the McNabbs that the organisation would help Isla’s family by providing the resources they needed to manage the gifted child.

Gifted children can often feel isolated as their brains are working ten years ahead, according to Alan Thompson, the head of Mensa’s Gifted Youth Committee.

This is also why they gravitate more towards mentors and adults, and might have difficulty adjusting.

“Gifted children are very rare,” Thompson told 60 Minutes Australia in an interview, “they’re one in a million, maybe even one in five million.”

“She’s still an average two-year-old”

Despite Isla’s giftedness, her parents said that the toddler was still an average two-year-old.

“She can read well beyond her little years,” her father said, “but we’re still working on toilet training!”

The young girl also enjoys “Bluey”, an Australian cartoon about a cattle dog.

However, the father told Spectrum News that, unlike other kids, she would likely skip kindergarten.

“She led us down a very interesting path, but we just let her take the reins and see where it goes from there,” he said. “Hopefully, it will lead to a scholarship—maybe Harvard or MIT one day.”

Source: https://www.aaj.tv/news/30291332/two-year-old-girl-joins-elite-high-iq-society-mensa