During the F8 Facebook Developers Conference 2017, Facebook revealed its plans for the coming years.
Among other things, the day 2 keynote was especially interesting for me. Speakers talked about connectivity projects increasing access to the Internet across the globe, different methods of human-computer interaction, virtual reality, and, last but not least, Brain-Computer Interfaces.

It’s super-interesting to me, because four years ago I wrote my Master’s Thesis on a Brain-Computer Interface for Mobile Devices at my university and published a few short articles about Brain-Computer Interfaces on this blog. You can also download my EEG Analyzer Android app from the Google Play Store. I also wrote another app called EEG Controller for communicating with the external world with the brain and, optionally, with eye blinks. It wasn’t published; maybe I’ll enhance and publish it in the future, or create a better app for a similar purpose. Of course, my apps are much simpler and less advanced than what Facebook is planning, but both of them aim to solve similar problems.

Facebook is planning to hire 60 engineers and PhDs to develop a new Brain-Computer Interface. If I understood them correctly, they’re planning to create non-invasive hardware and software that will be as accurate as invasive hardware. In simple words, they’re not going to create brain implants, but some kind of wearable device with the same (or better) accuracy as the brain implants that exist today. It’s definitely not an easy task.

If you are skeptical about such brain technologies, I can tell you that the technology available today is far from “reading the mind or thoughts”. It rather analyzes and interprets brain waves and signals, which allows us to determine whether we are concentrated, relaxed, sleepy, tired, etc. With more precise tools, we can simply gather more data, which will hopefully allow us to develop tools capable of “typing with the brain”. Nowadays, consumer wearable electroencephalographs like the NeuroSky MindWave or Muse allow reading rather “binary” data: you’re concentrated or not, you’re relaxed or not. In addition, they can report such results as percentage values. This can be used to develop simple communication apps that let people communicate with just the brain, though using them may be inconvenient, slow, or inefficient. Despite these disadvantages, for people with conditions like locked-in syndrome (LIS), it may be the only hope for communicating with the world.
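To give an idea of how such a “binary” communication app could work, here is a minimal sketch in Python. The device model is hypothetical: I assume the headset delivers 0–100 “attention” and “meditation” readings (real headsets like the NeuroSky MindWave expose similar values through their own SDKs), and the threshold is an assumption that would need per-user calibration. The app interprets sustained concentration as “yes” and sustained relaxation as “no”.

```python
THRESHOLD = 70  # assumed cut-off; a real app would calibrate this per user


def classify(attention: int, meditation: int) -> str:
    """Map raw 0-100 headset readings to a coarse binary-style state."""
    if attention >= THRESHOLD:
        return "concentrated"  # user is focusing -> interpret as "yes"
    if meditation >= THRESHOLD:
        return "relaxed"       # user is relaxing -> interpret as "no"
    return "neutral"           # no confident reading -> ignore this sample


def answer(readings):
    """Pick the user's answer from a window of samples by majority vote."""
    votes = [classify(a, m) for a, m in readings]
    yes, no = votes.count("concentrated"), votes.count("relaxed")
    if yes == no:
        return None  # ambiguous window -> ask the question again
    return "yes" if yes > no else "no"


# Simulated one-second window of (attention, meditation) samples:
window = [(82, 20), (75, 30), (40, 35), (90, 10)]
print(answer(window))  # majority of samples are "concentrated" -> yes
```

Even this toy version shows why such interfaces are slow: each yes/no answer needs a whole window of samples plus a retry path for ambiguous readings, so spelling out a sentence letter by letter takes a long time.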

I’m keeping my fingers crossed for Facebook’s BCI project! Moreover, I’m planning to extend my own work and create more complex BCI solutions in the future. Right now, I have these ideas only in my head. I’m also not going to compete with Facebook, because I don’t have the knowledge or resources (at least not yet) to develop my own BCI hardware, not to mention 60 engineers who are far smarter and more experienced than me. My idea is to use hardware that already exists on the market and write better software for it, because a lot of the solutions available now are quite poor in my opinion. This may leverage the potential of BCI in daily usage and make technologies like EEG more available and affordable for people who need them for research or medical purposes. Having in mind that Facebook is putting serious effort into BCI technology, I’m becoming more convinced that this technology may be the future and the way to go.