
Meta Is Building AI That Reads Brainwaves. The Reality, So Far, Is Messy

Researchers at Meta, the parent company of Facebook, are working on a new way to understand what's happening in people's minds. On August 31, the company announced that research scientists in its AI lab have developed AI that can "hear" what someone is hearing by studying their brainwaves.

While the research is still in very early stages, it's intended to serve as a foundation for technology that could help people with traumatic brain injuries who can't communicate by talking or typing. Most importantly, researchers are trying to record this brain activity without probing the brain with electrodes, which requires surgery.

The Meta AI study followed 169 healthy adult participants who heard stories and sentences read aloud while scientists recorded their brain activity with various devices (think: electrodes stuck on participants' heads).

Researchers then fed that data into an AI model, searching for patterns. They wanted the algorithm to figure out what participants were hearing, based on the electrical and magnetic activity in their brains.

TIME spoke with Jean Remi King, a research scientist at the Facebook Artificial Intelligence Research (FAIR) Lab, about the goals, challenges, and ethical implications of the study. The research has not yet been peer-reviewed.

This interview has been condensed and edited for clarity.

TIME: In layman's terms, can you explain what your team set out to do with this research and what was accomplished?

Jean Remi King: There are a number of conditions, from traumatic brain injury to anoxia [an oxygen deficiency], that basically make people unable to communicate. And one of the paths that has been identified for these patients over the past couple of decades is brain-computer interfaces. By putting an electrode on the motor areas of a patient's brain, we can decode activity and help the patient communicate with the rest of the world... But it's obviously extremely invasive to put an electrode inside someone's brain. So we wanted to use noninvasive recordings of brain activity. And the goal was to build an AI system that can decode brain responses to spoken stories.

What were the biggest challenges you came up against in the course of conducting this research?

There are two challenges I think are worth mentioning. On the one hand, the signals we pick up from brain activity are extremely noisy. The sensors are pretty far away from the brain. There is a skull, there is skin, which can corrupt the signal we can pick up. So picking them up with a sensor requires super advanced technology.

The other big challenge is more conceptual, in that we actually don't know, to a large extent, how the brain represents language. So even if we had a very clear signal, without machine learning it would be very difficult to say, "OK, this brain activity means this word, or this phoneme, or an intent to act, or whatever."

So the goal here is to delegate these two challenges to an AI system, by learning to align representations of speech and representations of brain activity in response to speech.
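
The alignment King describes is broadly in the spirit of contrastive representation learning. As a rough illustration only, and not Meta's actual code or architecture, the sketch below pairs a hypothetical speech encoder with a hypothetical brain-signal encoder and trains them so that matching segments land close together in a shared embedding space; every module, dimension, and hyperparameter here is an assumption made for the example.

    # Illustrative sketch (assumed, not Meta's implementation): align speech and
    # brain-activity representations with a contrastive objective.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        """Maps a sequence (speech features or brain recordings) to one embedding."""
        def __init__(self, in_dim: int, emb_dim: int = 256):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, emb_dim))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, in_dim); average over time to get one vector per segment
            return F.normalize(self.net(x).mean(dim=1), dim=-1)

    def contrastive_loss(speech_emb, brain_emb, temperature: float = 0.1):
        """Each brain segment should match its own speech segment (diagonal pairs)."""
        logits = speech_emb @ brain_emb.T / temperature    # (batch, batch) similarity matrix
        targets = torch.arange(len(logits))                # correct pair sits on the diagonal
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

    # Hypothetical feature sizes: 128-dim speech features, 64 brain sensors.
    speech_encoder, brain_encoder = Encoder(128), Encoder(64)
    speech = torch.randn(8, 100, 128)   # 8 segments of speech features
    brain = torch.randn(8, 100, 64)     # simultaneous (simulated) brain recordings
    loss = contrastive_loss(speech_encoder(speech), brain_encoder(brain))
    loss.backward()

In the published work, the speech side reportedly builds on a large pretrained speech model rather than a small network like this; the sketch is only meant to convey the alignment idea.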

What are the next steps to further this research? How far away are we from this AI helping people who have suffered a traumatic brain injury communicate?

What patients need down the line is a device that works at bedside and works for language production. In our case, we only study speech perception. So I think one possible next step is to try to decode what people attend to in terms of speech, to try to see whether they can track what different people are telling them. But more importantly, ideally we would have the ability to decode what they want to communicate. This is extremely challenging, because when we ask a healthy volunteer to do this, it creates a lot of facial movements that these sensors pick up very easily. Making sure that we are decoding brain activity, as opposed to muscle activity, will be very hard. So that's the goal, but we already know it's going to be very difficult.

How else could this research be used?

It's difficult to judge that because we have one objective here. The objective is to try to decode what people have heard in the scanner, given their brain activity. At this point, colleagues and reviewers are mainly asking, "How is this useful? Because decoding something that we know people heard isn't bringing much to [the table]." But I take this more as a proof of principle that there may be pretty rich representations in these signals, more than perhaps we would have thought.

Is there anything else you think it's important for people to know about this study?

What I want to stress is that this is research that is performed within FAIR and, in that regard, is not directed top-down by Meta and is not designed for products.

Write to Megan McCluskey at megan.mccluskey@time.com.
