The project integrates optics, neuroscience, and artificial intelligence.
Earlier this year, coverage on Lenta.ua about a new generation of brain–computer interfaces drew significant public attention, reaching more than 20,000 readers across the English, Ukrainian, and Russian editions and sparking widespread curiosity about the possibility of contactless neural decoding. In light of newly released scientific results, we return to the topic.

Fig. 1 Setup
A research team from Bar-Ilan University is exploring a question that once belonged purely to science fiction: can inner speech — the silent “yes” or “no” we say to ourselves — be decoded without touching the head, without electrodes, and without surgery? Using laser-based optical sensing combined with artificial intelligence, the researchers distinguished silent “yes” from “no” responses with 95.7% accuracy, reading from Broca's area, the brain's speech-production region, and requiring only about one minute of subject-specific calibration per class.
Unlike invasive brain–computer interfaces that require implants, or traditional non-invasive systems that rely on physical contact via EEG caps, this approach operates remotely. A low-power laser and a high-speed camera capture subtle speckle dynamics reflected from the scalp. These fluctuations encode tiny vibrations associated with neural activity. Deep learning models then analyze these patterns to identify the internal cognitive state.
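The pipeline described above can be sketched in code. This is a minimal illustration under stated assumptions, not the team's actual method: the frame-to-frame correlation trace, the crude band-power features, and the nearest-centroid classifier (a toy stand-in for the deep learning models used in the study) are all choices made here for brevity.

```python
import numpy as np

def speckle_vibration_trace(frames: np.ndarray) -> np.ndarray:
    """Collapse a speckle video (T, H, W) into a 1-D trace of
    frame-to-frame correlation. Faster speckle decorrelation
    (lower correlation) indicates stronger surface vibration."""
    f = frames.reshape(frames.shape[0], -1).astype(np.float64)
    f = f - f.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(f, axis=1) + 1e-12
    # correlation between each pair of consecutive frames
    return np.sum(f[:-1] * f[1:], axis=1) / (norms[:-1] * norms[1:])

def band_power_features(trace: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Crude spectral features: total power in n_bands equal slices of
    the one-sided spectrum (a stand-in for learned deep features)."""
    spec = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    return np.array([band.sum() for band in np.array_split(spec, n_bands)])

class NearestCentroid:
    """Toy classifier standing in for the study's deep model:
    assigns each sample to the nearest per-class feature mean."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In this sketch, calibration corresponds to fitting the classifier on a short labeled recording per class; the real system replaces every stage with far more capable components.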
The preprint describing the study has already attracted 205 reads on ResearchGate, placing it among the top 3% of research interest for 2026 publications, indicating strong early engagement within the scientific community (Fig. 2).
A central challenge was not only detecting meaningful neural signals but also establishing their origin. Much of the recent work focuses on ensuring that the optical patterns captured by the system reflect cortical activity rather than jaw- or tongue-movement artifacts. This analytical refinement required revisiting assumptions, stress-testing interpretations, and carefully defining the methods used to analyze and interpret complex signals, along with additional effort in visualizing results for explainability. Daniel Rubinstein, a co-author previously highlighted for his emphasis on critical scientific questioning, contributed to this phase by helping to shape the analytical reasoning and the experiments that distinguished neural signals from artifacts.
The updated study reinforces this distinction by training and evaluating additional control-validation models on recordings taken outside the language cortex, strengthening confidence that the effect is localized to the language area. This emphasis on methodological rigor and critical validation, associated with Rubinstein in earlier coverage, continues to shape the project's analytical direction.
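As a generic illustration of this kind of control check (not the study's actual statistics, which the preprint would specify), one can test whether a decoder's accuracy at each recording site exceeds chance with an exact one-sided binomial test; a genuine control site should yield a p-value consistent with chance-level decoding. The trial counts below are hypothetical.

```python
from math import comb

def p_above_chance(n_correct: int, n_trials: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: probability of getting at least
    n_correct hits in n_trials if the true accuracy were `chance`."""
    return sum(comb(n_trials, k) * chance ** k * (1 - chance) ** (n_trials - k)
               for k in range(n_correct, n_trials + 1))

# Hypothetical numbers for illustration only:
# over the language area: 96/100 correct; at a control site: 53/100.
p_target = p_above_chance(96, 100)   # far below any conventional threshold
p_control = p_above_chance(53, 100)  # consistent with chance-level decoding
```

A small p-value at the target site together with a large p-value at the control site is what supports a localization claim; high accuracy alone does not.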
To date, experiments have been conducted on healthy volunteers under controlled laboratory conditions. However, the potential implications extend far beyond the research setting. If further validated, such a contactless approach could one day offer a more comfortable alternative for individuals who have lost the ability to speak due to stroke, neurodegenerative diseases, or severe injury — particularly for patients who are sensitive to physical contact or unable to tolerate electrode-based systems. Members of the research team, including Daniel Rubinstein, have expressed particular interest in exploring potential medical applications and assessing how such optical decoding methods might translate into practical assistive communication technologies.
In the longer term, the researchers note that miniaturization and integration into wearable formats could further expand potential applications, though such developments remain at an early stage.
Over the years, the laboratory of Prof. Zeev Zalevsky has produced influential, disruptive research alongside innovations that have shaped the high-tech industry. It will be worth watching how this technology evolves in the years ahead.

Fig. 2 Statistics

Fig. 3 Reads
