Wired for gaming: Brain differences in compulsive video game players

Public Release: 21-Dec-2015

 

Brain scans reveal new connections that are potentially beneficial, harmful

University of Utah Health Sciences

SALT LAKE CITY – Brain scans from nearly 200 adolescent boys provide evidence that the brains of compulsive video game players are wired differently. Chronic video game play is associated with hyperconnectivity between several pairs of brain networks. Some of the changes are predicted to help game players respond to new information. Other changes are associated with distractibility and poor impulse control. The research, a collaboration between the University of Utah School of Medicine and Chung-Ang University in South Korea, was published online in Addiction Biology on Dec. 21, 2015.

“Most of the differences we see could be considered beneficial. However, the good changes could be inseparable from problems that come with them,” says senior author Jeffrey Anderson, M.D., Ph.D., associate professor of neuroradiology at the University of Utah School of Medicine.

Those with Internet gaming disorder are obsessed with video games, often to the extent that they give up eating and sleeping to play. This study reports that in adolescent boys with the disorder, certain brain networks that process vision or hearing are more likely to have enhanced coordination with the so-called salience network. The job of the salience network is to focus attention on important events, poising a person to take action. In a video game, the enhanced coordination could help a gamer react more quickly to the rush of an oncoming fighter; in life, to a ball darting in front of a car or an unfamiliar voice in a crowded room.

“Hyperconnectivity between these brain networks could lead to a more robust ability to direct attention toward targets, and to recognize novel information in the environment,” says Anderson. “The changes could essentially help someone to think more efficiently.” Follow-up studies will be needed to determine directly whether the boys with these brain differences do better on performance tests.

A more troublesome finding is coordination between two brain regions, the dorsolateral prefrontal cortex and the temporoparietal junction, that is stronger than in individuals who are not compulsive video game players. “Having these networks be too connected may increase distractibility,” says Anderson. The same change is seen in patients with neuropsychiatric conditions such as schizophrenia, Down’s syndrome, and autism, and in people with poor impulse control. At this point it’s not known whether persistent video gaming causes rewiring of the brain, or whether people who are wired differently are drawn to video games.

This work is the largest, most comprehensive investigation of differences in the brains of compulsive video game players to date, says first author Doug Hyun Han, M.D., Ph.D., professor at Chung-Ang University School of Medicine and adjunct associate professor at the University of Utah School of Medicine. Study participants were screened in South Korea, where video game playing is a major social activity, much more than in the United States. The Korean government supports his research with the goal of finding ways to identify and treat addicts.

In this study, researchers performed magnetic resonance imaging on 106 boys between the ages of 10 and 19 who were seeking treatment for Internet gaming disorder, a psychological condition that the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) says warrants further research. The brain scans were compared with those from 80 boys without the disorder and analyzed for regions that were activated simultaneously when participants were at rest. The more frequently two brain regions light up at the same time, the stronger the functional connectivity.
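As a rough illustration (not the study's actual analysis pipeline), resting-state functional connectivity is commonly estimated as the correlation between the activity time series of pairs of regions. The sketch below uses random stand-in data and invented dimensions to show the idea, and why 25 regions yield 300 region pairs:

```python
# Minimal sketch of pairwise functional connectivity (random stand-in data,
# not the study's pipeline): correlate each region's signal with every other.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_regions, n_timepoints = 25, 200                       # invented dimensions
bold = rng.standard_normal((n_regions, n_timepoints))   # stand-in BOLD signals

pairs = list(combinations(range(n_regions), 2))
assert len(pairs) == 300          # 25 * 24 / 2 unique region pairs

fc = np.corrcoef(bold)            # 25 x 25 correlation matrix
strongest = max(pairs, key=lambda p: fc[p])
print(f"most correlated pair {strongest}: r = {fc[strongest]:.2f}")
```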

The team analyzed activity in 25 brain regions, 300 pairwise combinations in all. Specifically, boys with Internet gaming disorder had statistically significant functional connections between the following pairs of brain regions:

  • Auditory cortex (hearing) – motor cortex (movement)
  • Auditory cortex (hearing) – supplementary motor cortices (movement)
  • Auditory cortex (hearing) – anterior cingulate (salience network)
  • Frontal eye field (vision) – anterior cingulate (salience network)
  • Frontal eye field (vision) – anterior insula (salience network)
  • Dorsolateral prefrontal cortex – temporoparietal junction

###

“Brain connectivity and psychiatric comorbidity in adolescents with Internet gaming disorder” was published in Addiction Biology online on December 21, 2015. In addition to Anderson and Han, the authors are Perry Renshaw from the University of Utah School of Medicine, and Sun Mi Kim and Sujin Bae from Chung-Ang University. The research was supported by a grant from the Korea Creative Content Agency.

Mouse embryo with big brain: Evolving a bigger brain with human DNA

The human version of a DNA sequence called HARE5 turns on a gene important for brain development (gene activity is stained blue), and causes a mouse embryo to grow a 12 percent larger brain by the end of pregnancy than an embryo injected with the chimpanzee version of HARE5.

Credit: Silver lab, Duke University

The size of the human brain expanded dramatically during the course of evolution, endowing us with unique capabilities to use abstract language and do complex math. But how did the human brain get larger than that of our closest living relative, the chimpanzee, if almost all of our genes are the same?

Duke scientists have shown that it’s possible to pick out key changes in the genetic code between chimpanzees and humans and then visualize their respective contributions to early brain development by using mouse embryos.

The team found that humans are equipped with tiny differences in a particular regulator of gene activity, dubbed HARE5, that, when introduced into a mouse embryo, led to a brain 12% bigger than in embryos given the HARE5 sequence from chimpanzees.

The findings, appearing online Feb. 19, 2015, in Current Biology, may lend insight into not only what makes the human brain special but also why people get some diseases, such as autism and Alzheimer’s disease, whereas chimpanzees don’t.

“I think we’ve just scratched the surface, in terms of what we can gain from this sort of study,” said Debra Silver, an assistant professor of molecular genetics and microbiology in the Duke University Medical School. “There are some other really compelling candidates that we found that may also lead us to a better understanding of the uniqueness of the human brain.”

Every genome contains many thousands of short bits of DNA called ‘enhancers,’ whose role is to control the activity of genes. Some of these are unique to humans. Some are active in specific tissues. But none of the human-specific enhancers had previously been shown to influence brain anatomy directly.

In the new study, researchers mined databases of genomic data from humans and chimpanzees to find enhancers expressed primarily in brain tissue and early in development. They prioritized enhancers that differed markedly between the two species.

The group’s initial screen turned up 106 candidates, six of them near genes that are believed to be involved in brain development. The group named these ‘human-accelerated regulatory enhancers,’ HARE1 through HARE6.

The strongest candidate was HARE5, owing to its chromosomal location near a gene called Frizzled8, which is part of a well-known molecular pathway implicated in brain development and disease. The group decided to focus on HARE5 and then showed that it was likely to be an enhancer for Frizzled8 because the two DNA sequences made physical contact in brain tissue.

The human HARE5 and the chimpanzee HARE5 sequences differ by only 16 letters in their genetic code. Yet, in mouse embryos the researchers found that the human enhancer was active earlier in development and more active in general than the chimpanzee enhancer.

“What’s really exciting about this was that the activity differences were detected at a critical time in brain development: when neural progenitor cells are proliferating and expanding in number, just prior to producing neurons,” Silver said.

The researchers found that in the mouse embryos equipped with Frizzled8 under control of human HARE5, progenitor cells destined to become neurons proliferated faster compared with the chimp HARE5 mice, ultimately leading to more neurons.

As the mouse embryos neared the end of gestation, their brain size differences became noticeable to the naked eye. Graduate student Lomax Boyd started dissecting the brains and looking at them under a microscope.

“After he started taking pictures, we took a ruler to the monitor. Although we were blind to what the genotype was, we started noticing a trend,” Silver said.

All told, human HARE5 mice had brains 12% larger in area than chimpanzee HARE5 mice. The affected region was the neocortex, which is involved in higher-level functions such as language and reasoning.

Producing a short list of strong candidates was in itself a feat, accomplished by applying the right filters to analysis of human and chimpanzee genomes, said co-author Gregory Wray, professor of biology and director of the Duke Center for Genomic and Computational Biology.

“Many others have tried this and failed,” Wray said. “We’ve known other people who have looked at genes involved in brain size evolution, tested them out and done the same kinds of experiments we’ve done and come up dry.”

The Duke team plans to study the human HARE5 and chimp HARE5 mice into adulthood, for possible differences in brain structure and behavior. The group also hopes to explore the role of the other HARE sequences in brain development.

“What we found is a piece of the genetic basis for why we have a bigger brain,” Wray said. “It really shows in sharp relief just how complicated those changes must have been. This is probably only one piece — a little piece.”

The work was supported by a research incubator grant from the Duke Institute for Brain Sciences, the National Institutes of Health (R01NS083897), and National Science Foundation (HOMIND BCS-08-27552).


Story Source:

The above story is based on materials provided by Duke University. Note: Materials may be edited for content and length.


Journal Reference:

  1. J. Lomax Boyd, Stephanie L. Skove, Jeremy P. Rouanet, Louis-Jan Pilaz, Tristan Bepler, Raluca Gordân, Gregory A. Wray, Debra L. Silver. Human-Chimpanzee Differences in a FZD8 Enhancer Alter Cell-Cycle Dynamics in the Developing Neocortex. Current Biology, 2015; DOI: 10.1016/j.cub.2015.01.041

Unlike humans, monkeys aren’t fooled by expensive brands

Public Release: 2-Dec-2014

A group of researchers tested whether monkeys show a common human bias: the tendency to confuse the price of a good with its quality. Previous studies have shown that humans think wine labeled with an expensive price tag tastes better than the same wine labeled with a cheaper price tag. In other studies, people thought a painkiller worked better when they paid a higher price for it.

“We know that capuchin monkeys share a number of our own economic biases. Our previous work has shown that monkeys are loss averse, irrational when it comes to dealing with risk, and even prone to rationalizing their own decisions, just like humans,” said Laurie Santos, a psychologist at Yale University and senior author of the study. “But this is one of the first domains we’ve tested in which monkeys show more rational behavior than humans do.”

“Sticking a higher price tag on a bottle of wine shouldn’t make it taste better, but, surprisingly, it does,” added Rhia Catapano, a former Yale undergraduate who ran the study as part of her senior honors thesis. “We wanted to see whether monkeys showed this same bias.”

Santos and colleagues designed a series of four experiments to test whether capuchins would prefer higher-priced but equivalent items. They taught monkeys to make choices in an experimental market and to buy novel foods at different prices. Control studies showed that monkeys understood the differences in price between the foods. But when the researchers tested whether monkeys preferred the taste of the higher-priced goods, they were surprised to find that the monkeys didn’t fall for the bias.

How artificial intelligence is changing our lives

By The Christian Science Monitor
Sunday, September 16, 2012 13:38 EDT

In Silicon Valley, Nikolas Janin rises for his 40-minute commute to work just like everyone else. The shop manager and fleet technician at Google gets dressed and heads out to his Lexus RX 450h for the trip on California‘s clotted freeways. That’s when his chauffeur – the car – takes over. One of Google’s self-driving vehicles, Mr. Janin’s ride is equipped with sophisticated artificial intelligence technology that allows him to sit as a passenger in the driver’s seat.

At iRobot Corporation in Bedford, Mass., a visitor watches as a five-foot-tall Ava robot independently navigates down a hallway, carefully avoiding obstacles – including people. Its first real job, expected later this year, will be as a telemedicine robot, allowing a specialist thousands of miles away to visit patients’ hospital rooms via a video screen mounted as its “head.” When the physician is ready to visit another patient, he taps the new location on a computer map: Ava finds its own way to the next room, including using the elevator.

In Pullman, Wash., researchers at Washington State University are fitting “smart” homes with sensors that automatically adjust the lighting needed in rooms and monitor and interpret all the movements and actions of their occupants, down to how many hours they sleep and minutes they exercise. It may sound a bit like being under house arrest, but in fact boosters see such technology as a sort of benevolent nanny: Smart homes could help senior citizens, especially those facing physical and mental challenges, live independently longer.

From the Curiosity space probe that landed on Mars this summer without human help, to the cars whose dashboards we can now talk to, to smart phones that are in effect our own concierges, so-called artificial intelligence is changing our lives – sometimes in ways that are obvious and visible, but often in subtle and invisible forms. AI is making Internet searches more nimble, translating texts from one language to another, and recommending a better route through traffic. It helps detect fraudulent patterns in credit-card transactions and tells us when we’ve veered over the center line while driving.

Even your toaster is about to join the AI revolution. You’ll put a bagel in it, take a picture with your smart phone, and the phone will send the toaster all the information it needs to brown it perfectly.

In a sense, AI has become almost mundanely ubiquitous, from the intelligent sensors that set the aperture and shutter speed in digital cameras, to the heat and humidity probes in dryers, to the automatic parking feature in cars. And more applications are tumbling out of labs and laptops by the hour.

“It’s an exciting world,” says Colin Angle, chairman and cofounder of iRobot, which has brought a number of smart products, including the Roomba vacuum cleaner, to consumers in the past decade.

What may be most surprising about AI today, in fact, is how little amazement it creates. Perhaps science-fiction stories with humanlike androids, from the charming Data (“Star Trek“) to the obsequious C-3PO (“Star Wars”) to the sinister Terminator, have raised unrealistic expectations. Or maybe human nature just doesn’t stay amazed for long.

“Today’s mind-popping, eye-popping technology in 18 months will be as blasé and old as a 1980 pair of double-knit trousers,” says Paul Saffo, a futurist and managing director of foresight at Discern Analytics in San Francisco. “Our expectations are a moving target.”

If Siri, the voice-recognition program in newer iPhones and seen in highly visible TV ads, had come out in 1980, “it would have been the most astonishing, breathtaking thing,” he says. “But by the time Siri had come, we were so used to other things going on we said, ‘Oh, yeah, no big deal.’ Technology goes from magic to invisible-and-taken-for-granted in about two nanoseconds.”

* * *

In one important sense, the quest for AI has been a colossal failure. The Turing test, proposed by British mathematician Alan Turing in 1950 as a way to verify machine intelligence, gauges whether a computer can fool a human into thinking another human is speaking during a short conversation via text (in Turing’s day by teletype, today by online chat). The test sets a low bar: The computer doesn’t have to really think like a human; it only has to seem human. Yet more than six decades later, no AI program has passed Turing’s test (though an effort this summer did come close).

The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.

“We’re a long way from [humanlike AI], and we’re not really on a track toward that because we don’t understand enough about what makes people intelligent and how people solve problems,” says Robert Lindsay, professor emeritus of psychology and computer science at the University of Michigan in Ann Arbor and author of “Understanding Understanding: Natural and Artificial Intelligence.”

“The brain is such a great mystery,” adds Patrick Winston, professor of artificial intelligence and computer science at the Massachusetts Institute of Technology (MIT) in Cambridge. “There’s some engineering in there that we just don’t understand.”

Instead, in recent years the definition of AI has gradually broadened. “Ten years ago, if you asked me if Watson [the computer that defeated all human opponents on the quiz show “Jeopardy!“] was intelligent, I’d probably argue that it wasn’t because it was missing something,” Dr. Winston says. But now, he adds, “Watson certainly is intelligent. It’s a certain kind of intelligence.”

The idea that AI must mimic the thinking process of humans has dropped away. “Creating artificial intelligences that are like humans is, at the end of the day, paving the cow paths,” Mr. Saffo argues. “It’s using the new technology to imitate some old thing.”

Entrepreneurs like iRobot’s Mr. Angle aren’t fussing over whether today’s clever gadgets represent “true” AI, or worrying about when, or if, their robots will ever be self-aware. Starting with Roomba, which marks its 10th birthday this month, his company has produced a stream of practical robots that do “dull, dirty, or dangerous” jobs in the home or on the battlefield. These range from smart machines that clean floors and gutters to the thousands of PackBots and other robot models used by the US military for reconnaissance and bomb disposal.

While robots in particular seem to fascinate humans, especially if they are designed to look like us, they represent only one visible form of AI. Two other developments are poised to fundamentally change the way we use the technology: voice recognition and self-driving cars.

* * *

In the 1986 sci-fi film “Star Trek IV: The Voyage Home,” the engineer of the 23rd-century starship Enterprise, Scotty, tries to talk to a 20th-century computer.

Scotty: “Computer? Computer??”

He’s handed a computer mouse and speaks into it.

Scotty: “Ah, hello Computer!”

Silence.

20th-century scientist: “Just use the keyboard.”

Scotty: “A keyboard? How quaint!”

Computers that easily understand what we say, or perhaps watch our gestures and anticipate what we want, have long been a goal of AI. Siri, the AI-powered “personal assistant” built into newer iPhones, has gained wide attention for doing the best job yet, even though it’s often as much mocked for what it doesn’t understand as admired for what it does.

Apple’s Siri – and other AI-infused voice-recognition software such as Google’s voice search – is important not only for what it can do now, like make a phone call or schedule an appointment, but for what it portends. Siri might understand human conversation at the level of a kindergartner, but it is still orders of magnitude ahead of earlier voice-recognition programs.

“Siri is a big deal,” says Saffo. It’s a step toward “devices that we interact with in ever less formal ways. We’re in an age where we’re using the technology we have to create ever more empathetic devices. Soon it will become de rigueur for all applications to offer spoken interaction…. In fact, we consumers will be surprised and disappointed if or when they don’t.”

Siri is a first step toward the ultimate vision of a VPA (virtual personal assistant), say Norman Winarsky and Bill Mark, who teamed up to develop Siri at the research firm SRI International before the software was bought by Apple. “Siri required not just speech recognition, but also understanding of natural language, context, and ultimately, reasoning (itself the domain of most artificial intelligence research today)…. We think we’ve only seen the tip of the iceberg,” they wrote in an article on TechCrunch last spring.

In the near future, VPAs will become more useful, helping humans do tasks such as weigh health-care alternatives, plan a vacation, and buy clothes.

Or drive your car. Vehicles that pilot themselves and leave humans as passive passengers are already being road-tested. “I expect it to happen,” says AI expert Mr. Lindsay. One advantage, he says, tongue in cheek: A vehicle driven by AI “won’t get distracted by putting on its makeup.”

While Google’s Janin rides in a self-driving car, he doesn’t talk on the phone, read his favorite blogs, or even sneak in a little catnap on the way to work – all tempting diversions. Instead, he analyzes and monitors the data derived from the car as it makes its way from his home in Santa Clara to Google’s headquarters in Mountain View. “Since the car is driving for me, though, I have this relaxed, stress-free feeling about being in stop-and-go traffic,” he says. “Time just seems to go by faster.”

Cars that drive themselves, once the stuff of science fiction, may be in garages relatively soon. A report by the consulting firm KPMG and the Center for Automotive Research, a nonprofit group in Michigan, predicts that autonomous cars will make their debut by 2019.

Google’s self-driving cars, a fleet of about a dozen, are the most widely known. But many big automotive manufacturers, including Ford, Audi, Honda, and Toyota, are also investing heavily in autonomous vehicles.

At Google, the vehicles are fitted with a complex system of scanners, radars, lasers, GPS devices, cameras, and software. Before a test run, a person must manually drive the desired route and create a detailed map of the road’s lanes, traffic signals, and other objects. The information is then downloaded into the vehicle’s integrated software. When the car is switched to auto drive, the equipment monitors the roadway and sends the data back to the computer. The software makes the necessary speed and steering adjustments. Drivers can always take over if necessary; but in the nearly two years since the program was launched, the cars have logged more than 300,000 miles without an incident.
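As a rough sketch of that flow (an invented toy loop, not Google's software; every name, rule, and value below is an assumption for illustration), the interplay of pre-built map, live sensor data, control adjustments, and human override might look like:

```python
# Toy autonomous-driving loop, purely illustrative: a pre-driven route map
# plus live sensor readings feed software that picks speed and steering,
# and a human driver can take over at any time.
route_map = ["straight", "straight", "left_turn", "straight"]  # pre-built map

def sensor_reading(step: int) -> dict:
    """Stand-in for the radar/laser/camera data the real car collects."""
    return {"obstacle_ahead": step == 1, "lane_offset_m": 0.1}

def drive(human_override_at: int = -1) -> None:
    for step, planned in enumerate(route_map):
        if step == human_override_at:
            print(f"step {step}: human driver takes over")
            return
        data = sensor_reading(step)
        speed = 10 if data["obstacle_ahead"] else 25       # slow near obstacles
        steer = planned if abs(data["lane_offset_m"]) < 0.5 else "correct_lane"
        print(f"step {step}: speed={speed} mph, steering={steer}")

drive()                       # fully autonomous run
drive(human_override_at=2)    # driver intervenes at step 2
```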

While it remains uncertain how quickly the public will embrace self-driving vehicles – what happens when one does malfunction? – the authors of the KPMG report make a strong case for them. They cite reduced commute times, increased productivity, and, most important, fewer accidents.

Speaking at the Mobile World Congress in Barcelona, Spain, earlier this year, Bill Ford, chairman of Ford Motor Company, argued that vehicles equipped with artificial intelligence are critically important. “If we do nothing, we face the prospect of ‘global gridlock,’ a never-ending traffic jam that wastes time, energy, and resources, and even compromises the flow of commerce and health care,” he said.

Indeed, a recent study by Patcharinee Tientrakool of Columbia University in New York estimates that self-driving vehicles – ones that not only manage their own speed but communicate intelligently with each other – could increase our highway capacity by 273 percent.

The challenges that remain are substantial. An autonomous vehicle must be able to think and react as a human driver would. For example, when a ball rolls into the road, a human driver deduces that a child is likely nearby and slows down. Right now AI does not provide that type of inferential thinking, according to the report.

But the technology is getting closer. New models already on the market are equipped with driver-assistance features not found just a few years ago – including automated parallel parking, lane-drift warnings, and adaptive cruise control.

Lawmakers are grappling with the new technology, too. Earlier this year the state of Nevada issued the first license for autonomous vehicles in the United States, while the California Legislature recently approved allowing the eventual testing of the vehicles on public roads. Florida is considering similar legislation.

“It’s hard to say precisely when most people will be able to use self-driving cars,” says Janin, who gets a “thumbs up” from a lot of people who recognize the car. “But it’s exciting to know that this is clearly the direction that the technology and the industry are headed.”

* * *

At first glance, the student apartment at Washington State University (WSU) in Pullman appears just like any other college housing: sparse furnishings, a laptop askew on the couch, a television and DVD player in the corner, a “student survival guide” sitting out stuffed with coupons for everything from haircuts to pizza.

But a closer examination reveals some unusual additions. The light switch on the wall adjoining the kitchen glows blue and white. Special sensors are affixed to the refrigerator, the cupboard doors, and the microwave. A water-flow gauge sits under the sink.

All are part of the CASAS Smart Home project at WSU, which is tapping AI technology to make the house operate more efficiently and improve the lives of its occupants, in this case several graduate students. The project began in 2006 under the direction of Diane Cook, a professor in the School of Electrical Engineering and Computer Science.

A smart home set up by the WSU team might have 40 to 50 motion or heat sensors. No cameras or microphones are used, unlike some other projects across the country.

The motion sensors allow researchers to know where someone is in the home. They gather intelligence about the dwellers’ habits. Once the system becomes familiar with an individual’s movements, it can determine whether certain activities have happened or not, like the taking of medication or exercising. Knowing the time of day and what the person typically does “is usually enough to distinguish what [the person is] doing right now,” says Dr. Cook.
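A toy version of that inference might look like the following (the sensor names, hours, and rules here are all invented; the actual CASAS system learns such patterns from data rather than hard-coding them):

```python
# Toy activity classifier: sensor location plus time of day is often enough
# to guess what a resident is doing. Entirely invented rules, for illustration.
from datetime import datetime

def classify_activity(sensor: str, when: datetime) -> str:
    hour = when.hour
    if sensor == "medicine_cabinet" and 7 <= hour <= 9:
        return "taking morning medication"
    if sensor == "kitchen_motion" and 11 <= hour <= 13:
        return "preparing lunch"
    if sensor == "living_room_motion" and 17 <= hour <= 20:
        return "exercising"
    return "unknown"

# A few (sensor, timestamp) events as they might stream in from the home:
events = [
    ("medicine_cabinet", datetime(2012, 9, 16, 8, 5)),
    ("kitchen_motion", datetime(2012, 9, 16, 12, 30)),
    ("living_room_motion", datetime(2012, 9, 16, 2, 0)),   # odd hour -> unknown
]
for sensor, when in events:
    print(when.time(), "->", classify_activity(sensor, when))
```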

A main focus of the WSU research is senior living. With the aging of baby boomers becoming an impending crisis for the health-care industry, Cook is searching for a way to allow older adults – especially those with dementia or mild impairments – to live independently for longer periods while decreasing the burden on caregivers. A large assisted-care facility in Seattle is now conducting smart-home technology research in 20 apartments for older individuals. A smart home could also monitor movements for clues about people’s general health.

“If we’re able to develop technology that is very unobtrusive and can monitor people continuously, we may be able to pick up on changes the person may not even recognize,” says Maureen Schmitter-Edgecombe, a professor in the WSU psychology department who is helping with the research.

Sensors seem poised to become omnipresent. In a glimpse of the future, an entire smart city is being built outside Seoul, South Korea. Scheduled to be completed in 2017, Songdo will bristle with sensors that regulate everything from water and energy use to waste disposal – and even guide vehicle traffic in the planned city of 65,000.

While for many people such extensive monitoring might engender an uncomfortable feeling of Big Brother, AI-imbued robots or other devices also may prove to be valuable and (seemingly) compassionate companions, especially for seniors. People already form emotional attachments to AI-infused devices.

“We love our computers; we love our phones. We are getting that feeling we get from another person,” said Apple cofounder Steve Wozniak at a forum last month in Palo Alto, Calif.

The new movie “Robot & Frank,” which takes place in the near future, depicts a senior citizen who is given a robot and rejects it at first. But “bit by bit the two affect each other in unforeseen ways,” notes a review at Filmjournal.com. “Not since Butch Cassidy and the Sundance Kid has male bonding had such a meaningful but comic connection…. [P]erfect partnership is the movie’s heart.”

* * *

Not everything about AI may yield happy consequences. Besides spurring concerns about invasion of privacy, AI looks poised to eliminate large numbers of jobs for humans, especially those that require a limited set of skills. One joke notes that “a modern textile mill employs only a man and a dog – the man to feed the dog, and the dog to keep the man away from the machines,” as an article earlier this year in The Atlantic magazine put it.

“This so-called jobless recovery that we’re in the middle of is the consequence of increased machine intelligence, not so much taking away jobs that exist today but creating companies that never have jobs to begin with,” futurist Saffo says. Facebook grew to be a multibillion-dollar company, but with only a handful of employees in comparison with earlier companies of similar market value, he points out.

Another futurist, Thomas Frey, predicts that more than 2 billion jobs will disappear by 2030 – though he adds that new technologies will also create many new jobs for those who are qualified to do them.

Analysts have already noted a “hollowing out” of the workforce. Demand remains strong for highly skilled and highly educated workers and for those in lower-level service jobs like cooks, beauticians, home care aides, or security guards. But robots continue to replace workers on the factory floor. Amazon recently bought Kiva Systems, which uses robots to move goods around warehouses, greatly reducing the need for human employees.

AI is creeping into the world of knowledge workers, too. “The AI revolution is doing to white-collar jobs what robotics did to blue-collar jobs,” say Erik Brynjolfsson and Andrew McAfee, authors of “Race Against the Machine.”

Lawyers can use smart programs instead of assistants to research case law. Forbes magazine uses an AI program called Narrative Science, rather than reporters, to write stories about corporate profits. Tax preparation software and online travel sites take work previously done by humans. Businesses from banks to airlines to cable TV companies have put the bulk of their customer service work in the hands of automated kiosks or voice-recognition systems.

“While we’re waiting for machines [to be] intelligent enough to carry on a long and convincing conversation with us, the machines are [already] intelligent enough to eliminate or preclude human jobs,” Saffo says.

* * *

The best argument that AI has a bright future may be made by fully acknowledging just how far it’s already come. Take the Mars Curiosity rover.

“It is remarkable. It’s absolutely incredible,” enthuses AI expert Lindsay. “It certainly represents intelligence.” No other biological organism on earth except man could have done what it has done, he says. But at the same time, “it doesn’t understand what it is doing in the sense that human astronauts [would] if they were up there doing the same thing,” he says.

Will machines ever exhibit that kind of humanlike intelligence, including self-awareness (which, ominously, brought about a “mental” breakdown in the AI system HAL in the classic sci-fi movie “2001: A Space Odyssey“)?

“I think we’ve passed the Turing test, but we don’t know it,” argued Pat Hayes, a senior research scientist at the Florida Institute for Human and Machine Cognition in Pensacola, in the British newspaper The Telegraph recently. Think about it, he says. Anyone talking to Siri in 1950 when Turing proposed his test would be amazed. “There’s no way they could imagine it was a machine – because no machine could do anything like that in 1950.”

But others see artificial intelligence remaining rudimentary for a long time. “Common sense is not so common. It requires an incredible breadth of world understanding,” says iRobot’s Angle. “We’re going to see more and more robots in our world that are interactive with us. But we are a long way from human-level intelligence. Not five years. Not 10 years. Far away.”

Even MIT’s Winston, a self-described techno-optimist, is cautious. “It’s easy to predict the future – it’s just hard to tell when it’s going to happen,” he says. Today’s AI rests heavily on “big data” techniques that crunch huge amounts of data quickly and cheaply – sifting through mountains of information in sophisticated ways to detect meaningful relationships. But it doesn’t mimic human reasoning. The long-term goal, Winston says, is to somehow merge this “big data” approach with the “baby steps” he and other researchers now are taking to create AI that can do real reasoning.

Winston speculates that the field of AI today may be at a place similar to where biology was in 1950, three years before the discovery of the structure of DNA. “Everybody was pessimistic, [saying] we’ll never figure it out,” he says. Then the double helix was revealed. “Fifty years of unbelievable progress in biology” has followed, Winston says, adding: AI just needs “one or two big breakthroughs….”

• Carolyn Abate in San Francisco and Kelcie Moseley in Pullman, Wash., contributed to this report

The Christian Science Monitor

FASEB opposes the Government Spending Accountability Act

Contact: Lawrence Green lgreen@faseb.org 301-634-7335 Federation of American Societies for Experimental Biology

Bethesda, MD – The Federation of American Societies for Experimental Biology (FASEB) wrote to all members of the House of Representatives expressing its opposition to the Government Spending Accountability (GSA) Act (HR 4631). While strongly supporting the bill’s goal and the desire to ensure that federal agencies are using their resources responsibly and efficiently, FASEB urged Representatives to oppose the bill in its current form. FASEB President Judith S. Bond, PhD, expressed concern that, “if adopted, HR 4631 would impede the professional development of government scientists, hamper the ability of research agency staff to monitor scientific developments and make appropriate funding decisions based on new research, and reduce communication among researchers.”

In addition, she pointed out that “this bill would also place new restrictions on the ability of federal agencies to support conferences aimed at advancing the national research agenda.”

The FASEB letter states that it is important for federal agencies to have the capacity to provide support for a variety of scientific meetings and conferences. Many volunteer-led organizations serving patients, the public, and the research community administer multiple conferences per year. Dr. Bond emphasized the value of these meetings to the government and the public. “These conferences facilitate the public dissemination of research findings and support the training and professional development of the next generation of scientists. By partnering with private organizations, federal agencies are able to reach broader audiences at a lower cost while promoting the public-private partnership that has been a key part of the successful research enterprise.”

###

 

FASEB is composed of 26 societies with more than 100,000 members, making it the largest coalition of biomedical research associations in the United States. Celebrating 100 Years of Advancing the Life Sciences in 2012, FASEB is rededicating its efforts to advance health and well-being by promoting progress and education in biological and biomedical sciences through service to our member societies and collaborative advocacy.

Social psychologists espouse tolerance and diversity — do they walk the walk? The Answer is NO

Contact: Anna Mikulak amikulak@psychologicalscience.org 202-293-9300 Association for Psychological Science

Every ten years or so, someone will make the observation that there is a lack of political diversity among psychological scientists, and a discussion about what ought to be done ensues. The notion that the field is skewed toward a liberal political perspective and discriminates against those who don’t share it is worthy of concern; scholars, both within and outside the field, have offered various solutions to this diversity problem.

As psychological scientists Yoel Inbar and Joris Lammers point out, however, we have few of the relevant facts necessary to understand and address the issue.

In an article to be published in the September 2012 issue of Perspectives on Psychological Science, a journal of the Association for Psychological Science, Inbar and Lammers, of Tilburg University, pose several questions in an attempt to better understand ideological diversity among social psychologists.

Inbar and Lammers contacted members on the mailing list for the Society for Personality and Social Psychology and asked them to participate in anonymous online surveys. Across the two studies, the researchers received nearly 800 responses.

Their findings confirm the field’s liberal bias, but they reveal some surprises as well.

Although only 6 percent of the respondents described themselves as conservative “overall,” there was much more ideological diversity than anecdotal evidence would suggest. Inbar and Lammers found an overwhelming liberal majority when it concerned social issues, but greater diversity on economic and foreign policy issues.

So why does the field appear to be less politically diverse than it actually is? It seems that conservative social psychologists hide their views because they perceive the field as hostile to their values. The more conservative respondents were, the more likely they were to report that they had experienced an intellectually unfriendly climate. Importantly, self-defined liberals did not seem to have the same perceptions of hostility.

Furthermore, liberal respondents were more likely to say that they would discriminate against psychologists who displayed clear conservative views in the context of a paper or grant review, a symposium invitation, and in faculty hiring.

The September issue of Perspectives on Psychological Science includes five commentaries from scholars in the field who explore and discuss these new findings. While some of the commentators question the rigor of the methodology used by Inbar and Lammers, they all agree that ideological bias among social psychologists has serious implications for psychology as a scientific discipline.

Social tolerance and fairness are important values for many psychological scientists, so it’s surprising to find intolerance of a different kind in the field. And despite the fact that psychological scientists are well aware of the potentially harmful effects of cognitive biases, they are clearly not immune to such biases themselves.

Several of the commentaries raise serious questions about how ideology might be shaping the issues and questions that social psychologists systematically choose – and do not choose – to explore.

It may be the case that the field attracts a certain kind of inquiring and open mind that tends to embrace liberal values, and that conservatives self-select out of the field. But this, most of the commentators agree, does not change the fact that pervasive liberal bias is unhealthy for intellectual inquiry.

As Inbar and Lammers and all five commentators suggest, the time is ripe for self-examination in the field.

###

 

For more information about this study, please contact: Yoel Inbar at yinbar@uvt.nl.

Perspectives on Psychological Science is ranked among the top 10 general psychology journals for impact by the Institute for Scientific Information. It publishes an eclectic mix of thought-provoking articles on the latest important advances in psychology. For a copy of the article “Political Diversity in Social and Personality Psychology” and access to other Perspectives on Psychological Science research findings, please contact Anna Mikulak at 202-293-9300 or amikulak@psychologicalscience.org.

Omega-3 intake during last months of pregnancy boosts an infant’s cognitive and motor development

Repost 2008

Contact: Jean-François Huppé jean-francois.huppe@dap.ulaval.ca 418-656-7785 Université Laval

Quebec City, April 9, 2008—A study supervised by Université Laval researchers Gina Muckle and Éric Dewailly reveals that omega-3 intake during the last months of pregnancy boosts an infant’s sensory, cognitive, and motor development. The details of this finding are published in a recent edition of the Journal of Pediatrics.

To come to this conclusion, researchers first measured docosahexaenoic acid (DHA) concentration—a type of omega-3 fatty acid involved in the development of neurons and retinas—in the umbilical cord blood of 109 infants. “DHA concentration in the umbilical cord is a good indicator of intra-uterine exposure to omega-3s during the last trimester of pregnancy, a crucial period for the development of retinal photoreceptors and neurons,” explains Dr. Dewailly.

Tests conducted on these infants at 6 and 11 months revealed that their visual acuity as well as their cognitive and motor development were closely linked to DHA concentration in the umbilical cord blood at the time of their birth. However, there was very little relation between test results and DHA concentration in a mother’s milk among infants who were breast-fed. “These results highlight the crucial importance of prenatal exposure to omega-3s in a child’s development,” points out Dr. Muckle.

Researchers observed that DHA concentration in the umbilical cord blood was in direct relation with the concentration found in a mother’s blood, a reminder of the importance of a mother’s diet in providing omega-3 fatty acids for the fetus. They also noted that DHA concentration was higher in the fetus’s blood than in the mother’s. “While developing its nervous system, a fetus needs great quantities of DHA. It can even transform other types of omega-3s into DHA in order to develop its brain,” explains Dr. Dewailly.

For the members of the research team, there is no doubt that all pregnant women should be encouraged to get sufficient amounts of omega-3s. “A diet rich in omega-3s during pregnancy can’t be expected to solve everything, but our results show that such a diet has positive effects on a child’s sensory, cognitive, and motor development. Benefits from eating fish with low contaminant levels and high omega-3 contents, such as trout, salmon, and sardines, far outweigh potential risks even during pregnancy,” conclude the researchers.

###

In addition to Muckle and Dewailly, who are also affiliated with the Centre de recherche du CHUQ, Quebec City, the study was co-authored by Pierre Ayotte from Université Laval, as well as Joseph Jacobson, Sandra Jacobson, and Melissa Kaplan-Estrin from Wayne State University. This study was funded by the National Institute of Environmental Health Sciences, Indian and Northern Affairs Canada, Hydro-Québec, and Health Canada.

Information: Gina Muckle School of Psychology Université Laval Phone: (418) 656-4141 ext. 46199 gina.muckle@psy.ulaval.ca

Éric Dewailly Faculty of Medicine Université Laval Phone: (418) 656-4141 ext. 46518 eric.dewailly@crchul.ulaval.ca

Applying algorithm to social networks can reveal hidden connections criminals use to commit fraud, says UAlberta researcher

Contact: Jamie Hanlon jamie.hanlon@ualberta.ca 780-492-9214 University of Alberta

Math tree may help root out fraudsters

Fraudsters beware: the more your social networks connect you and your accomplices to the crime, the easier it will be to shake you from the tree.

The Steiner tree, that is.

In an article recently published in the journal Computer Fraud and Security, University of Alberta researcher Ray Patterson and colleagues from the University of Connecticut and University of California – Merced outlined the connection between fraud cases and the algorithm designed by Swiss mathematician Jakob Steiner. Fraud is a problem that costs Canadians billions of dollars annually and countless hours of police investigations. Patterson says that building the algorithm into fraud investigation software may provide important strategic advantages.

The criminal path of least resistance

To quote a television gumshoe, everything’s connected. Figuring out who knows who and who has access to the money is like playing a game of connect-the-dots. Patterson says that for crimes like fraud, the fewer players in the scheme, the more likely it is to be accomplished. Maintaining a small group of players is also what links it to the Steiner tree. He says that by analyzing various connecting social networks—email, Facebook or the like—finding out the who, what and how of the crime can be boiled down to numbers.

“You’re really trying to find the minimum set of connectors that connect these people to the various [network] resources,” he said. “The minimum number of people required is what’s most likely to be the smoking gun. You can do it with math, once you know what the networks are.”

Fraud and the Steiner tree, by the numbers

In their article, Patterson and his colleagues explored how networks such as phone calls, business partnerships and family relationships are used to form essential relationships in a fraud investigation. When these same relationships are layered, a pattern of connection becomes obvious. Once unnecessary links are removed and false leads are extracted, the remaining connections are most likely the best suspects. Patterson says that finding the shortest connection between the criminals and the crime is the crux of the Steiner tree.
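In code, that amounts to extracting a minimum-weight Steiner tree from the layered contact graph. The sketch below is a hypothetical illustration, not the software discussed in the article: the names, edges, and weights are invented, and it uses the approximation routine shipped with the networkx library:

```python
# Hypothetical illustration of a Steiner tree over a layered contact graph:
# keep only the cheapest connectors that link the required people/resources.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Invented multi-layer graph: edges from phone, email, and business records,
# weighted so that stronger ties cost less to include.
G = nx.Graph()
G.add_weighted_edges_from([
    ("suspect_a", "broker", 1.0),     # phone records
    ("suspect_b", "broker", 1.0),     # email records
    ("broker", "shell_corp", 2.0),    # business registry
    ("suspect_a", "cousin", 5.0),     # weak family tie (likely a false lead)
    ("cousin", "shell_corp", 5.0),
])

# Terminals: the nodes that must be connected for the fraud to work.
terminals = ["suspect_a", "suspect_b", "shell_corp"]
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges()))   # the broker path survives; the cousin detour drops out
```

On this toy graph, the tree links both suspects to the shell company through the broker and discards the weak family tie, mirroring the "minimum set of connectors" Patterson describes as the likely smoking gun.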

“All of these things that we see in life, behind them is a mathematical representation,” said Patterson. “There are many, many different algorithms that we can pull off a shelf and apply to real-life problems.”

A potential tool for the long arm of the law?

Patterson says that with the amount of work that could potentially go into investigating a fraud case, such as obtaining warrants for phone or email records, and identifying and interviewing potential suspects, developing a program that uses a Steiner tree algorithm may save a significant portion of investigators’ time—time that, he says, could likely be reallocated to backlog or cold case files. “If you can reduce your legwork by even 20 per cent, that has massive manpower implications. I think algorithms like this one could help you reduce your legwork a lot more than that,” he said.

Although there is software that police and other law enforcement agencies can use to solve fraud, Patterson sees no evidence that those programs use a Steiner tree algorithm, something he says would bring some structure to an unstructured area. He hopes programmers and investigators will take note of the findings and make changes to their practices.

“It might take several years or many years before anyone picks it up,” said Patterson. “But it’s a good thing if we can point people towards what’s useful.”

###

Study shows link between morbid obesity, low IQ in toddlers

Contact: April Frawley Birdwell afrawley@vpha.health.ufl.edu 352-273-5817 University of Florida

GAINESVILLE, Fla. – University of Florida researchers have discovered a link between morbid obesity in toddlers and lower IQ scores, cognitive delays and brain lesions similar to those seen in Alzheimer’s disease patients, a new study shows.

Although the cause of these cognitive impairments is still unknown, UF researchers suspect the metabolic disturbances obesity causes could be taking a toll on young brains, which are still developing and not fully protected, they write in an article published in the Journal of Pediatrics this month.

“It’s well-known that obesity is associated with a number of other medical problems, such as diabetes, hypertension and elevated cholesterol,” said Daniel J. Driscoll, M.D., Ph.D., a UF professor of pediatrics and molecular genetics and microbiology in the College of Medicine and the lead author of the study. “Now, we’re postulating that early-onset morbid obesity and these metabolic, biochemical problems can also lead to cognitive impairment.”

Researchers compared 18 children and adults with early-onset morbid obesity, which means they weighed at least 150 percent of their ideal body weight before they were 4, with 19 children and adults with Prader-Willi syndrome, and with 24 of their normal-weight siblings. Researchers chose lean siblings as a control group “because they share a socioeconomic group and genetic background,” Driscoll said.

The links between cognitive impairments and Prader-Willi syndrome, a genetic disorder that causes people to eat nonstop and become morbidly obese at a very young age if not supervised, are well-established. But researchers were surprised to find that children and adults who had become obese as toddlers for no known genetic reason fared almost as poorly on IQ and achievement tests as Prader-Willi patients. Prader-Willi patients had an average IQ of 63 and patients with early-onset morbid obesity had an average of 78. The control group of siblings had an average IQ of 106, which falls within the range of what is considered normal intelligence.

“It was surprising to find that they had an average IQ score of 78, whereas their control siblings were 106,” Driscoll said. “We feel this may be another complication of obesity that may not be reversible, so it’s very important to watch what children eat even from a very young age. It’s not just setting them up for problems later on, it could affect their learning potential now.”

While performing head MRI scans of subjects, researchers also discovered white-matter lesions on the brains of many of the Prader-Willi and early-onset morbidly obese patients. White-matter lesions are typically found on the brains of adults who have developed Alzheimer’s disease or in children with untreated phenylketonuria, the researchers wrote.

These lesions could be affecting food-seeking centers of the brain, causing the children to feel hungrier. But they are most likely a result of metabolic changes that damage the young, developing brain, Driscoll said.

More studies are needed to understand what is causing these cognitive impairments, said Merlin Butler, M.D., Ph.D., a professor of pediatrics at the University of Missouri and chief of genetics and molecular medicine at Children’s Mercy Hospital and Clinics.

“This could be a really significant observation,” Butler said. “It’s an interesting concept. It’s a whole new area of investigation.”

The findings are preliminary and additional studies are planned, Driscoll said. Jennifer Miller, M.D., a UF assistant professor of pediatric endocrinology and the first author of the study, and other researchers from UF, All Children’s Hospital in St. Petersburg, Fla., and Baylor College of Medicine also took part in the research.

Although there was no known genetic cause for early-onset morbid obesity in the subjects studied, Driscoll said there are likely genetic and hormonal factors at play that researchers have yet to discover, particularly since these children are becoming obese at a time when their parents still control what they eat. The researchers studied several sets of fraternal twins where one twin was lean and the other morbidly obese, yet their parents reported that each ate the same amount of food. In one case, the obese child actually ate less, Driscoll said.

Driscoll is also careful to point out that adults or children who become obese later in childhood are not at risk for these cognitive impairments because their brains are sufficiently developed to fend off damage from obesity.

“We’re all mindful that this is an obese society,” he said. “We all need to be more careful with respect to what we eat, but in particular, that’s very important for children under 4.”

* Reposted for Filing

Prenatal pesticide exposure tied to lower IQ in children

Contact: Sarah Yang scyang@berkeley.edu 510-643-7741 University of California – Berkeley

Berkeley – In a new study suggesting pesticides may be associated with the health and development of children, researchers at the University of California, Berkeley’s School of Public Health have found that prenatal exposure to organophosphate pesticides – widely used on food crops – is related to lower intelligence scores at age 7.

The researchers found that every tenfold increase in measures of organophosphates detected during a mother’s pregnancy corresponded to a 5.5-point drop in overall IQ scores in the 7-year-olds. Children in the study with the highest levels of prenatal pesticide exposure scored seven points lower on a standardized measure of intelligence compared with children who had the lowest levels of exposure.
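Stated as arithmetic (assuming the log-linear model form such regressions typically use; the function name and reference ratio below are ours, not the paper's):

```python
# Minimal restatement of the reported dose-response: a 5.5-point IQ drop per
# tenfold increase in prenatal DAP metabolites implies a slope in log10 units.
# "dap_ratio" is exposure relative to an arbitrary reference level.
import math

def predicted_iq_shift(dap_ratio: float) -> float:
    """IQ change for exposure at `dap_ratio` times the reference level."""
    return -5.5 * math.log10(dap_ratio)

print(predicted_iq_shift(10))    # -5.5 points for a tenfold increase
print(predicted_iq_shift(100))   # -11.0 points for a hundredfold increase
```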

“These associations are substantial, especially when viewing this at a population-wide level,” said study principal investigator Brenda Eskenazi, UC Berkeley professor of epidemiology and of maternal and child health. “That difference could mean, on average, more kids being shifted into the lower end of the spectrum of learning, and more kids needing special services in school.”

The UC Berkeley study is among a trio of papers showing an association between pesticide exposure and childhood IQ to be published online April 21 in the journal Environmental Health Perspectives. Notably, the other two studies – one at Mt. Sinai Medical Center, the other at Columbia University – examined urban populations in New York City, while the UC Berkeley study focused on children living in Salinas, an agricultural center in Monterey County, California.

The studies in New York also examined prenatal exposure to pesticides and IQ in children at age 7. Like the UC Berkeley researchers, scientists at Mt. Sinai sampled pesticide metabolites in maternal urine, while researchers at Columbia looked at umbilical cord blood levels of a specific pesticide, chlorpyrifos.

“It is very unusual to see this much consistency across populations in studies, so that speaks to the significance of the findings,” said lead author Maryse Bouchard, who was working as a UC Berkeley post-doctoral researcher with Eskenazi while this study was underway. “The children are now at a stage where they are going to school, so it’s easier to get good, valid assessments of cognitive function.”

Organophosphates (OP) are a class of pesticides that are well-known neurotoxicants. Indoor use of chlorpyrifos and diazinon, two common OP pesticides, has been phased out over the past decade, primarily because of health risks to children.

The 329 children in the UC Berkeley study had been followed from before birth as part of the Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS), an ongoing longitudinal study led by Eskenazi. The new findings on IQ come less than a year after another study from the CHAMACOS cohort found an association between prenatal pesticide exposure and attention problems in children at age 5.

Researchers began enrolling pregnant women in the study in 1999. During pregnancy and after the children were born, study participants came to regular visits where CHAMACOS staff administered questionnaires and measured the health and development of the children.

During the visits, samples of urine were taken from the participants and tested for dialkyl phosphate (DAP) metabolites, the breakdown product of about 75 percent of the organophosphorus insecticides in use in the United States. Samples were taken twice during pregnancy, with the two results averaged, and after birth from the children at regular intervals between ages 6 months and 5 years.

The Wechsler Intelligence Scale for Children – Fourth Edition (WISC-IV) was used to assess the cognitive abilities of the children at age 7. The test includes subcategories for verbal comprehension, perceptual reasoning, working memory and processing speed.

In addition to the association with overall IQ scores, scores in each of the four cognitive subcategories decreased significantly with higher levels of DAPs during the mothers’ pregnancies. The findings held even after the researchers accounted for factors such as maternal education, family income and exposure to other environmental contaminants, including DDT, lead and flame retardants.

“There are limitations to every study; we used metabolites to assess exposure, so we cannot isolate the exposure to a specific pesticide chemical, for instance,” added Eskenazi. “But the way this and the New York studies were designed – starting with pregnant women and then following their children – is one of the strongest methods available to study how environmental factors affect children’s health.”

While markers of prenatal OP pesticide exposure were significantly correlated with childhood IQ, exposure to pesticides after birth was not. This suggests that exposure during fetal brain development was more critical than childhood exposure.

Levels of maternal DAPs among the women in the UC Berkeley study were somewhat higher than the U.S. average, but not outside the range of measurements found among women in a national study.

“These findings are likely applicable to the general population,” said Bouchard, who is currently a researcher at the University of Montreal’s Department of Environmental and Occupational Health. “In addition, the other two studies being published were done in New York City, so the connection between pesticide exposure and IQ is not limited to people living in an agricultural community.”

The prenatal exposures measured in this paper occurred in 1999-2000. Overall, OP pesticide use in the United States has been trending downward, declining more than 50 percent between 2001 and 2009, and about 45 percent since 2001 in California. At the same time, the use of OP pesticides in Monterey County remained steady between 2001 and 2008, but declined 18 percent from 2008 to 2009. Several studies suggest that exposure to OP pesticides has gone down with declining use.

According to the Centers for Disease Control, people are exposed to OP pesticides through eating foods from crops treated with these chemicals. Farm workers, gardeners, florists, pesticide applicators and manufacturers of these insecticides may have greater exposure than the general population.

“Many people are also exposed when pesticides are used around homes, schools or other buildings,” said study co-author Asa Bradman, associate director of the Center for Environmental Research in Children’s Health (CERCH) at UC Berkeley.

The researchers recommended that consumers reduce their home use of pesticides, noting that most home and garden pests can be controlled without those chemicals. If pesticides are needed, they said bait stations should be used instead of sprays.

They also said that consumers should thoroughly wash fruits and vegetables; go beyond a quick rinse and use a soft brush, if practical. Consumers could also consider buying organic produce when possible as a way to reduce pesticide exposure from food, they said.

“I’m concerned about people not eating right based on the results of this study,” said Eskenazi. “Most people already are not getting enough fruits and vegetables in their diet, which is linked to serious health problems in the United States. People, especially those who are pregnant, need to eat a diet rich in fruits and vegetables.”

###

Other co-authors of the study are Jonathan Chevrier, Kim Harley, Katherine Kogut, Michelle Vedar, Celina Trujillo and Caroline Johnson at UC Berkeley’s CERCH; Dana Boyd Barr at Emory University’s Rollins School of Public Health; and Norma Morga at the Clinica de Salud del Valle de Salinas.

The National Institute of Environmental Health Sciences, the Environmental Protection Agency and the National Institute for Occupational Safety and Health helped fund this research.

*Reposted on Request

Processed food diet in early childhood may lower subsequent IQ

Contact: Emma Dickinson edickinson@bmjgroup.com 44-207-383-6529 BMJ-British Medical Journal

Are dietary patterns in childhood associated with IQ at 8 years of age? A population-based cohort study

A diet high in fats, sugars, and processed foods in early childhood may lower IQ, while a diet packed full of vitamins and nutrients may do the opposite, suggests research published online in the Journal of Epidemiology and Community Health.

The authors base their findings on participants in the Avon Longitudinal Study of Parents and Children (ALSPAC), which is tracking the long-term health and wellbeing of around 14,000 children born in 1991 and 1992.

Parents completed questionnaires, detailing the types and frequency of the food and drink their children consumed when they were 3, 4, 7 and 8.5 years old.

Three dietary patterns were identified: “processed,” high in fats and sugar; “traditional,” high in meat and vegetables; and “health conscious,” high in salad, fruit, vegetables, rice and pasta. Scores were calculated for each pattern for each child.
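
As an illustration of what a per-child pattern score can look like, the sketch below projects standardized food-frequency responses onto a loading vector. The foods, frequencies and loadings are invented, and the weighted-sum scoring is an assumption in the spirit of component-based dietary pattern analysis, not the study’s actual derivation:

```python
# Hypothetical dietary-pattern scoring: standardize food frequencies,
# then take a weighted sum along a pattern's loading vector.
import numpy as np

foods = ["chips", "sweets", "meat", "vegetables", "salad", "pasta"]
# One row per child: reported weekly frequency of each food at age 3.
frequencies = np.array([[7.0, 10.0, 3.0, 2.0, 0.0, 1.0],
                        [1.0,  2.0, 4.0, 6.0, 5.0, 4.0]])

# Standardize each food across children, then project onto invented
# "processed" loadings to get each child's score on that pattern.
z = (frequencies - frequencies.mean(axis=0)) / frequencies.std(axis=0)
processed_loadings = np.array([0.6, 0.6, 0.1, -0.2, -0.4, -0.2])
print(z @ processed_loadings)  # approx. [ 1.9 -1.9]; higher = more "processed"
```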

IQ was measured using a validated test (the Wechsler Intelligence Scale for Children) when they were 8.5 years old. In all, complete data were available for just under 4,000 children.

The results showed that, after taking account of potentially influential factors, a predominantly processed food diet at the age of 3 was associated with a lower IQ at the age of 8.5, irrespective of whether the diet improved after that age. Every 1-point increase in the processed dietary pattern score was associated with a 1.67-point fall in IQ.

On the other hand, a healthy diet was associated with a higher IQ at the age of 8.5, with every 1-point increase in the health-conscious dietary pattern score linked to a 1.2-point rise in IQ. Dietary patterns between the ages of 4 and 7 were not associated with IQ.
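
Taken at face value, the reported coefficients support some back-of-envelope arithmetic; treating each as an independent linear association is a simplifying assumption for illustration:

```python
# Back-of-envelope use of the reported per-point coefficients.
PROCESSED_COEF = -1.67  # IQ points per 1-point processed-pattern score at age 3
HEALTHY_COEF = 1.2      # IQ points per 1-point health-conscious-pattern score

# A child scoring 2 points higher on the processed pattern at age 3:
print(2 * PROCESSED_COEF)  # -3.34 IQ points, on average

# A child scoring 2 points higher on the health-conscious pattern:
print(2 * HEALTHY_COEF)    # 2.4 IQ points, on average
```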

The authors say that these findings, although modest, are in line with previous ALSPAC research showing an association between early childhood diet and later behaviour and school performance.

“This suggests that any cognitive/behavioural effects relating to eating habits in early childhood may well persist into later childhood, despite any subsequent changes (including improvements) to dietary intake,” they say.

By way of a possible explanation for the findings, the authors note that the brain grows at its fastest rate during the first three years of life, and that other research has linked head growth during this period to intellectual ability.

“It is possible that good nutrition during this period may encourage optimal brain growth,” they suggest, advocating further research to determine the extent of the effect early diet has on intelligence.

*Reposted on Request