History of BCIs
What were the first BCIs like? How much better are the ones today?
A very brief history
Brain-computer interfaces (BCIs) have a long and varied history, with contributions from many researchers and institutions.
One of the earliest precursors dates to the 1960s, when a team at the University of California, Los Angeles (UCLA) implanted electrodes in the brains of epilepsy patients to record their brain activity and help guide surgical treatment.
In the 1970s, Jacques Vidal's group at UCLA coined the term "brain-computer interface" and showed that brain signals recorded from the scalp could let a person control a simple computer program, such as steering a cursor through a maze, using only their thoughts.
The 1980s and 1990s brought further advances, including better algorithms for translating brain activity into commands a computer could act on, and the first demonstrations that implanted BCIs could give people with severe paralysis limited control of external devices.
More recently, progress has continued on several fronts: improved non-invasive sensors that sit on the scalp rather than being implanted in the brain, and a wider range of applications, such as helping people with disabilities communicate and control their environment.
Better hardware, better software = better BCI
That progress reflects improvement on two fronts: better hardware, such as higher-quality and less invasive sensors, and better software, above all improved algorithms for translating brain activity into commands. Together, these have made BCIs useful in a much wider range of settings.
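To make the software side concrete, here is a minimal, purely illustrative sketch of what "translating brain activity into commands" can look like in its simplest form: band-pass filter a window of signal, measure power in a frequency band, and map that to a command. Everything in it is assumed for the example: the signals are synthetic, the 250 Hz sampling rate and the power threshold are arbitrary, and it does not represent any particular system described above.

```python
# Toy sketch of a "translate brain activity into a command" step.
# Signals, sampling rate, and threshold are synthetic/arbitrary,
# chosen only to illustrate the idea.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate in Hz


def bandpass(signal, low, high, fs=FS, order=4):
    """Band-pass filter a 1-D signal between `low` and `high` Hz."""
    nyq = fs / 2
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)


def decode_command(eeg_window):
    """Map a window of (synthetic) EEG to a command via alpha-band power.

    Strong alpha (8-12 Hz) activity is treated as "rest"; weak alpha as
    "move". The 1.0 threshold is arbitrary, for illustration only.
    """
    alpha = bandpass(eeg_window, 8, 12)
    power = np.mean(alpha ** 2)
    return "rest" if power > 1.0 else "move"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 2, 1 / FS)  # two seconds of data
    # Synthetic "relaxed" signal: prominent 10 Hz alpha rhythm plus noise.
    relaxed = 3 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
    # Synthetic "active" signal: noise only, alpha suppressed.
    active = rng.normal(0, 0.5, t.size)
    print(decode_command(relaxed))  # -> rest
    print(decode_command(active))   # -> move
```

In real systems the decoding step is typically a trained classifier or regression model rather than a fixed threshold, and much of the "better software" progress of recent decades has come from exactly that part of the pipeline.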