New Technique Allows Scientists to Read Minds at Nearly the Speed of Thought


Post 7391

New Technique Allows Scientists to Read Minds at Nearly the Speed of Thought

Friday 12:16pm

 http://gizmodo.com/new-technique-allows-scientists-to-read-minds-at-nearly-1755927863

An experiment by University of Washington researchers is setting the stage for advances in mind reading technology. Using brain implants and sophisticated software, researchers can now predict what their subjects are seeing with startling speed and accuracy.

The ability to view a two-dimensional image on a page or computer screen, and then transform that image into something our minds can immediately recognize, is a neurological process that remains mysterious to scientists. To learn more about how our brains perform this task—and to see if computers can decode and predict what a person is seeing in real time—a research team led by University of Washington neuroscientist Rajesh Rao and neurosurgeon Jeff Ojermann demonstrated that it’s possible to decode human brain signals at nearly the speed of perception. The details of their work can be found in a new paper in PLOS Computational Biology.

The team sought the assistance of seven patients undergoing treatment for epilepsy. Medications weren’t helping alleviate their seizures, so these patients were given temporary brain implants, and electrodes were used to pinpoint the focal points of their seizures. The UW researchers saw this as an opportunity to perform their experiment. “They were going to get the electrodes no matter what,” noted Ojermann in a UW NewsBeat article. “We were just giving them additional tasks to do during their hospital stay while they are otherwise just waiting around.”

The patients were shown a random sequence of pictures—images of human faces, houses, and blank gray screens—on computer monitors in brief 400 millisecond intervals. Their specific task was to watch for an image of an upside-down house.

The face and house discrimination task. Credit: Kai J. Miller et al., 2016/PLOS Computational Biology

At the same time, the electrodes in their brains were connected to software that extracted two distinct brain-signal properties: “event-related potentials” (when massive batches of neurons simultaneously light up in response to an image) and “broadband spectral” changes (signals that linger after an image is viewed).

As the images flickered on the screen, a computer sampled and digitized the incoming brain signals at a rate of 1,000 times per second. This resolution allowed the software to determine which combination of electrode locations and signals correlated best to what the patients were seeing. “We got different responses from different (electrode) locations; some were sensitive to faces and some were sensitive to houses,” Rao said.
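To make that concrete, here is a minimal sketch of the kind of per-image feature extraction described above, assuming a 400 millisecond epoch digitized at 1,000 samples per second. The band edges, array shapes, and helper names are illustrative assumptions, not the actual pipeline from the paper.

```python
# Illustrative sketch only: a simplified version of the two-feature extraction
# described above. Shapes, band edges, and helper names are assumptions,
# not the pipeline from the PLOS Computational Biology paper.
import numpy as np
from scipy.signal import welch

FS = 1000              # signals digitized 1,000 times per second
EPOCH_SAMPLES = 400    # one 400 ms image presentation

def extract_features(epoch):
    """epoch: (n_electrodes, EPOCH_SAMPLES) voltages for one image presentation."""
    # Stand-in for the event-related potential: the time-locked voltage
    # trace itself, downsampled to keep the feature vector small.
    erp = epoch[:, ::10]

    # Stand-in for the broadband spectral change: mean log power in a
    # high-frequency band (65-115 Hz here, chosen for illustration).
    freqs, psd = welch(epoch, fs=FS, nperseg=256, axis=-1)
    band = (freqs >= 65) & (freqs <= 115)
    broadband = np.log(psd[:, band].mean(axis=-1))

    # The study found both signal types were needed, so both go into the
    # feature vector handed to the decoder.
    return np.concatenate([erp.ravel(), broadband])

# Demo on synthetic data: 4 electrodes, one 400 ms epoch.
rng = np.random.default_rng(0)
features = extract_features(rng.normal(size=(4, EPOCH_SAMPLES)))
```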

After training the software, the researchers showed the patients an entirely new set of pictures. Without previous exposure to these images, the computer was able to predict with 96 percent accuracy whether a test subject was seeing a house, a face, or a gray screen. And it did so at nearly the speed of perception.

This proficiency only occurred when the computer considered both event-related potentials and broadband changes, which, as stated in the study, suggests “they capture different and complementary aspects of the subject’s perceptual state.” So when it comes to understanding how a person perceives a complex visual object, it’s important to consider the “global picture” of large neural networks.

While interesting, the results of the study are exceptionally limited. A true test of the system would be to see if it could learn a much larger set of images spanning more categories. It’s not immediately obvious, for example, whether the computer could discern if a patient was viewing the face of a human or the face of a dog.

Once refined, however, this kind of brain decoding could be used to build communication mechanisms for “locked-in” patients who are paralyzed or have suffered a stroke. This technique could also assist with brain mapping, allowing neuroscientists to identify locations in the brain responsible for certain types of information in real time.

[PLOS Computational Biology]

Top image: Kai Miller and Brian Donohue

Email the author at george@gizmodo.com and follow him @dvorsky.

 


Cases of Gastroschisis, a Birth Defect, on the Rise in the US


Post 7390

Cases of Gastroschisis, a Birth Defect, on the Rise in the US

Cases of a rare birth defect called gastroschisis are increasing in the U.S., according to a recent government report. But what is gastroschisis, and what causes it?

Gastroschisis (GAS-tro-SKEE-sis) occurs when the muscles in the abdominal wall of a fetus do not form properly, allowing the intestines to poke outside the body through an opening to the right of the umbilical cord.

In some cases, other organs, like the stomach, may also develop outside the baby’s body, said Dr. Holly Hedrick, an attending pediatric and fetal surgeon at The Children’s Hospital of Philadelphia.

“It’s typically diagnosed during the second trimester by ultrasound,” Hedrick told Live Science. During the first trimester, the fetus’s intestines aren’t in a fixed position inside the body — they “come out and go back in,” making it difficult for a doctor to tell if something’s wrong, Hedrick said.

By the second trimester, intestines should be permanently inside the fetus, and if they’re not, intestine loops that are visible on an ultrasound point to the gastroschisis abnormality, she said.

In a recent report, researchers at the Centers for Disease Control and Prevention found that yearly cases of gastroschisis in the U.S. more than doubled, rising 129 percent, between 1995 and 2012. The increase is concerning, but the condition remains rare — there are now about 2,000 babies in the U.S. born yearly with gastroschisis, the report said.

A previous report showed that gastroschisis cases in the U.S. doubled between 1995 and 2005, from about 2 cases per 10,000 live births to about 4 cases per 10,000 live births, according to findings published in 2013 in the journal Obstetrics and Gynecology.

The new report shows that the biggest increase in the rate of gastroschisis has been among babies born to young, non-Hispanic black mothers aged 20 or younger, the CDC said. The report analyzed data from 14 states, representing about 29 percent of all births in the U.S.

Hedrick, who was not involved with the CDC study, told Live Science that babies born with gastroschisis generally need surgery shortly after birth to move the displaced organs back inside the body and to repair the wall meant to hold them in place. But the severity of the harm to a baby from having the exposed intestines can vary widely, Hedrick said.

“The outcome of the baby is directly related to the function of the bowel,” Hedrick told Live Science. If the intestine is exposed during pregnancy, it can be damaged — by exposure to the amniotic fluid surrounding it, or by trauma from repeated contact in the womb — and this damage can continue to affect the baby’s health, even after corrective surgery.

Ultrasound image showing loops of bowel floating freely in amniotic fluid in a fetus at 31 weeks of gestation.
Credit: Rachel Page et al.

In about 10 percent of gastroschisis cases, only a small part of the bowel is exposed, and replacing it is relatively simple: “You can just return the bowel to the abdominal cavity without surgery,” Hedrick said, adding that hospital stays for these cases typically last less than one month.

But in some cases, if the blood supply to the intestines is restricted and the intestine becomes malformed, or if there’s extensive scarring to the intestinal tissue, the return of normal bowel function can be delayed severely, Hedrick said. About 20 percent of gastroschisis cases exhibit these extreme complications, which can require hospital stays lasting three to six months and multiple surgeries, and can leave the baby with developmental problems caused by an impaired ability to absorb nutrients. In some cases, the damage is too great for the baby to survive, Hedrick said.

Most gastroschisis cases fall somewhere between these two extremes, Hedrick told Live Science. Typically, doctors replace the exposed bowels using a specialized pouch called a “silo,” which stacks the intestines and uses a combination of gravity and pressure to gradually push them back into place, Hedrick explained. Once the intestines are tucked away, the abdominal wall is closed, and the babies are usually released from the hospital after three to six weeks, once doctors can tell that their intestines are working well.

Health officials are not sure what causes gastroschisis, though the CDC suggests that it might be triggered by genetic factors — either independently or in combination with other influences like environmental conditions that the mother is exposed to, or food, drink or medicines she ingests while pregnant.

However, the CDC researchers noted a particularly dramatic uptick in the number of infants with gastroschisis born to young, black mothers. Over an 18-year period, gastroschisis cases among this group more than tripled, rising 263 percent. In comparison, there was a 68 percent increase in infants with gastroschisis born to white mothers over the same period.

The CDC’s analysis gives a clearer picture of where the birth defect is happening, but many questions still remain about what causes the deformity, why teen mothers are especially susceptible, and why the condition seems to be occurring more frequently than ever in children born to black women.

“It concerns us that we don’t know why more babies are being born with this serious birth defect,” Coleen Boyle, director of the CDC’s National Center on Birth Defects and Developmental Disabilities, said in a statement.

Abbey Jones, lead author of the CDC report and an epidemiologist for the CDC’s National Center on Birth Defects and Developmental Disabilities, told Live Science that further steps are required not only to determine why the risks are greater for young mothers, but also to uncover an explanation for the escalating number of cases.

“As the cause of the increase in prevalence is unknown, public health research on gastroschisis is urgently needed,” Jones said.

Follow Mindy Weisberger on Twitter and Google+. Follow us @livescience, Facebook & Google+. Original article on Live Science.

As Zika Virus Rises, Vaccine Development Gets Attention


Post 7389

As Zika Virus Rises, Vaccine Development Gets Attention

‘Behemoth’ Daddy Longlegs Discovered in Oregon


Post 7388

‘Behemoth’ Daddy Longlegs Discovered in Oregon

Mind-Reading Computer Instantly Decodes People’s Thoughts


Post 7387

Mind-Reading Computer Instantly Decodes People’s Thoughts

A new computer program can decode people’s thoughts almost in real time, new research shows.

Researchers can predict what people are seeing based on the electrical signals coming from electrodes implanted in their brains, and this decoding happens within milliseconds of someone first seeing the image, the scientists found.

The new results could one day help people who cannot speak, or who otherwise have trouble communicating, express their thoughts, Rajesh Rao, a neuroscientist at the University of Washington in Seattle, said in a statement.

“Clinically, you could think of our result as a proof of concept toward building a communication mechanism for patients who are paralyzed or have had a stroke and are completely locked in,” Rao said.

Reading thoughts

In recent years, scientists have made tremendous strides in decoding human thoughts. In a 2011 study, researchers reconstructed the movie clips people were watching from their brain activity alone. In 2014, two scientists transmitted thoughts to each other using a brain-to-brain link. And other studies have shown that computers can “see” what people are dreaming about, using only their brain activity.

Rao and his colleagues wanted to see if they could further this effort. They asked seven people with severe epilepsy, who had already undergone surgery to implant electrodes into their temporal lobes, if they would mind having their thoughts decoded. (The patients had the electrodes implanted for a single week so that doctors could pinpoint where the seizures originated within the temporal lobe, which is a common source of seizures, the researchers said.)

“They were going to get the electrodes no matter what; we were just giving them additional tasks to do during their hospital stay while they are otherwise just waiting around,” said study co-author Dr. Jeff Ojemann, a neurosurgeon at the University of Washington Medical Center in Seattle.

The temporal lobe is also the brain region responsible for processing sensory input, such as visualizing and recognizing objects that a person sees.

Rao, Ojemann and their colleagues had the participants watch a computer screen as several images briefly flickered by. The images included pictures of faces and houses, as well as blank screens, and the subjects were told to keep alert to identify the image of an upside-down house.

At the same time, the electrodes were hooked up to a powerful computer program that analyzed brain signals 1,000 times a second, determining what brain signals looked like when someone was viewing a house versus a face. For the first two-thirds of the images, the computer program got a label, essentially telling it, “This is what brain signals look like when someone views a house.” For the remaining one-third of the pictures, the computer was able to predict, with 96 percent accuracy, what the person actually saw, the researchers reported Jan. 21 in the journal PLOS Computational Biology. What’s more, the computer accomplished this task within 20 milliseconds of the instant the person looked at the object.
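As a rough sketch of that protocol, the toy example below labels the first two-thirds of a set of trials, fits a simple linear decoder, and scores it on the held-out third. The synthetic feature vectors and the choice of classifier are stand-ins for illustration; the study’s actual features and decoder differ.

```python
# Hedged sketch of the labeled-training / held-out-prediction protocol
# described above. Synthetic features and a stand-in classifier, not the
# study's decoder.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 300, 64
labels = rng.integers(0, 3, n_trials)          # 0 = face, 1 = house, 2 = blank screen
# Toy data: shift each class's features so the classes are separable.
features = rng.normal(size=(n_trials, n_features)) + labels[:, None]

split = 2 * n_trials // 3                      # first two-thirds come with labels
decoder = LinearDiscriminantAnalysis().fit(features[:split], labels[:split])
print(f"held-out accuracy: {decoder.score(features[split:], labels[split:]):.0%}")
```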

Complex process

It turned out that different neurons fired when people were looking at faces than when they were looking at houses. The computer also needed two types of brain signals to decode the images: an event-related potential and a broadband spectral change. The event-related potential is a characteristic spike in brain-cell firing that appears when the brain responds to any stimulus, whereas the broadband spectral change is detected by the electrodes as an overall change in power across the brain region.
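A toy illustration of these two signal types, on synthetic single-electrode data: the event-related potential survives averaging across trials because it is time-locked to the stimulus, while the broadband change can be summarized as a shift in high-frequency power after stimulus onset relative to a pre-stimulus baseline. The numbers below (band edges, trial counts, onset time) are arbitrary choices for illustration.

```python
# Toy illustration of the two signal types on synthetic single-electrode data.
import numpy as np

rng = np.random.default_rng(0)
fs, onset = 1000, 500                          # 1 kHz sampling; stimulus at 500 ms
t = np.arange(fs) / fs
trials = rng.normal(size=(50, fs))             # fifty 1-second trials of noise
trials += np.exp(-((t - 0.7) ** 2) / 0.002)    # add a time-locked bump after onset

# ERP: averaging across trials keeps the time-locked bump, cancels the noise.
erp = trials.mean(axis=0)

# Broadband spectral change: high-frequency power after onset vs. before.
def band_power(x):
    freqs = np.fft.rfftfreq(x.shape[-1], d=1 / fs)
    psd = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    return psd[..., (freqs > 65) & (freqs < 115)].mean()

change = band_power(trials[:, onset:]) / band_power(trials[:, :onset])
```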

“Traditionally, scientists have looked at single neurons,” Rao said. “Our study gives a more global picture, at the level of very large networks of neurons, of how a person who is awake and paying attention perceives a complex visual object.”

By allowing researchers to identify, in real time, which parts of the brain respond to certain stimuli, the new technique could help doctors map the entire human brain one day, the researchers said.

Follow Tia Ghose on Twitter and Google+. Follow Live Science @livescience, Facebook & Google+. Original article on Live Science.

Wearable Sensors Could Translate Sign Language Into English


Post 7386

Wearable Sensors Could Translate Sign Language Into English

Wearable sensors could one day interpret the gestures in sign language and translate them into English, providing a high-tech solution to communication problems between deaf people and those who don’t understand sign language.

Engineers at Texas A&M University are developing a wearable device that can sense movement and muscle activity in a person’s arms.

The device figures out the gestures a person is making using two distinct sensors: one that responds to the motion of the wrist, and another that responds to the muscular movements in the arm. A program then wirelessly receives this information and converts the data into an English translation.
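To make the two-sensor idea concrete, the sketch below fuses summary features from a motion (IMU) window with muscle-activity (EMG) features and feeds them to a single classifier. The dimensions, feature choices, and four-word vocabulary are all assumptions for illustration, not the Texas A&M prototype.

```python
# Minimal sketch of two-sensor fusion: motion (IMU) features plus muscle
# (EMG) features, concatenated and classified. All dimensions and the toy
# vocabulary are assumptions, not the Texas A&M design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gesture_features(imu_window, emg_window):
    """imu_window: (n_samples, 6 axes); emg_window: (n_samples, n_channels)."""
    imu_stats = np.concatenate([imu_window.mean(axis=0), imu_window.std(axis=0)])
    emg_rms = np.sqrt((emg_window ** 2).mean(axis=0))  # RMS amplitude per channel
    return np.concatenate([imu_stats, emg_rms])

# Synthetic stand-in recordings: 40 examples each of four signed words,
# with each class's IMU readings shifted so the classes are separable.
rng = np.random.default_rng(0)
words = ["hello", "thanks", "yes", "no"]
X = np.stack([gesture_features(rng.normal(size=(200, 6)) + label,
                               rng.normal(size=(200, 4)))
              for label in range(4) for _ in range(40)])
y = np.repeat(np.arange(4), 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(words[clf.predict(X[:1])[0]])  # one decoded gesture -> an English word
```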

After some initial research, the engineers found existing devices that attempted to translate sign language into text, but those designs were not as intricate.

“Most of the technology … was based on vision- or camera-based solutions,” said study lead researcher Roozbeh Jafari, an associate professor of biomedical engineering at Texas A&M.

These existing designs, Jafari said, are not sufficient, because signing often combines hand gestures with specific finger movements.

“I thought maybe we should look into combining motion sensors and muscle activation,” Jafari told Live Science. “And the idea here was to build a wearable device.”

The researchers built a prototype system that can recognize words that people use most commonly in their daily conversations. Jafari said that once the team starts expanding the program, the engineers will include more words that are less frequently used, in order to build up a more substantial vocabulary.

One drawback of the prototype is that the system has to be “trained” to respond to each individual who wears the device, Jafari said. This training process involves asking the user to repeat each hand gesture a few times, and it can take up to 30 minutes to complete.
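That calibration pass might look something like the sketch below, which reuses the hypothetical gesture_features() helper from the previous sketch; record_gesture() is a stand-in for capturing one live gesture from the wearer’s sensors.

```python
# Sketch of the per-user calibration pass described above. Reuses the
# hypothetical gesture_features() helper from the previous sketch;
# record_gesture() stands in for a live sensor capture.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def calibrate(record_gesture, vocabulary, reps=3):
    """Fit a classifier to one wearer: they repeat each word a few times."""
    X, y = [], []
    for label, word in enumerate(vocabulary):
        for _ in range(reps):
            imu, emg = record_gesture(word)      # prompt the user to sign `word`
            X.append(gesture_features(imu, emg))
            y.append(label)
    return RandomForestClassifier(n_estimators=100).fit(np.stack(X), np.array(y))

# Demo with a stub in place of real sensors.
rng = np.random.default_rng(1)
stub = lambda word: (rng.normal(size=(200, 6)), rng.normal(size=(200, 4)))
user_clf = calibrate(stub, ["hello", "thanks", "yes", "no"])
```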

“If I’m wearing it and you’re wearing it — our bodies are different … our muscle structures are different,” Jafari said.

But Jafari thinks the issue is largely the result of the time constraints the team faced in building the prototype. It took two graduate students just two weeks to build the device, so Jafari said he is confident that it will become more advanced during the next steps of development.

The researchers plan to reduce the training time of the device, or even eliminate it altogether, so that the wearable device responds automatically to the user. Jafari also wants to improve the effectiveness of the system’s sensors so that the device will be more useful in real-life conversations. Currently, when a person gestures in sign language, the device can only read words one at a time.

This, however, is not how people speak. “When we’re speaking, we put all the words in one sentence,” Jafari said. “The transition from one word to another word is seamless and it’s actually immediate.”

“We need to build signal-processing techniques that would help us to identify and understand a complete sentence,” he added.

Jafari’s ultimate vision is to use new technology, such as the wearable sensor, to develop innovative user interfaces between humans and computers.

For instance, people are already comfortable with using keyboards to issue commands to electronic devices, but Jafari thinks typing on devices like smartwatches is not practical because they tend to have small screens.

“We need to have a new user interface (UI) and a UI modality that helps us to communicate with these devices,” he said. “Devices like [the wearable sensor] might help us to get there. It might essentially be the right step in the right direction.”

Jafari presented this research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference in June.

Follow Live Science @livescience, Facebook & Google+. Original article on Live Science.

Photos of Shock Waves Surrounding Supersonic Jets


Post 7385

Photos of Shock Waves Surrounding Supersonic Jets