Protecting Individual Data Privacy Against the Next Generation of Technology

By Joseph Gregory

The marriage between man and machine has never been more evident than in the neurological technologies (“neuro technologies”) that will soon captivate consumer markets. Brain-computer interface devices analyze brain signals and translate them into commands that are sent to output devices that perform the desired actions. Consumer use of these devices can curb depression, allow users to control computers and machines with only their thoughts, and even monitor levels of concentration in school. While the positive attributes of these novel technologies are clearly apparent, their use comes at a cost—or so the saying goes. That contemporary cost is individual users’ data.

Traditionally, data is collected from users based on their navigation of the internet, whether from browser searches, application interactions, purchase decisions, downloads, or just about every other trackable action a user takes. Through data mining and complex algorithms, these daily user interactions are processed into a unique data profile for each user. These profiles are highly coveted by commercial enterprises, which in turn use the data to target optimal customer demographics and preferences. As consumers transition to devices that operate by receiving brain signals, companies will look to collect this “neuro data” to further advance their commercial interests.

As consumers struggle to protect their data from Big Tech, it is crucial to set parameters on the collection and use of neuro data from brain-computer interfaces to ensure ethical practices and consumer safety for the next generation’s technology. The NeuroRights Initiative tackles these problems by laying out five “NeuroRights” that promote traditional individual rights: personal identity, free will, mental privacy, equal access to mental augmentation, and protection from algorithmic bias.

The right to personal identity and the right to free will strive to prevent technology from blurring one’s sense of self. If an individual is connected to a network via neuro technology, external inputs could threaten that individual’s sense of self and free will by manipulating how the brain-computer interface device makes their brain operate and think. The purpose and use of an individual’s data are a far cry from what the user knowingly consented to, considering the vague and misleading data policies that currently populate the technological landscape. Data policies seem built to be burdensome and to discourage careful reading. For example, Apple’s Privacy Policy stretches eight pages and contains broad, arbitrary, and unsubstantial language like “we believe that you can have great products and great privacy. This means that we strive to collect only the personal data that we need.” For reference, a waiver to jump out of a plane can be as brief as three pages. Reevaluating how users “consent” to extensive terms and conditions can assist in safeguarding an individual’s sense of self and free will. By explicitly communicating to users how their data will be used, companies give users a clearer choice of whether they would like to use the service.

The invasive algorithmic advertising model also threatens the right to personal identity and the right to free will. Advanced neuro technologies could foster increasingly harmful byproducts if the contemporary algorithmic model is left unchanged; imagine shopping for a shirt and seeing product ads appear in real time as your neuro data is collected based on how you react to different shirt designs. In essence, algorithms could drive and manipulate human behavior.

The NeuroRights Initiative declares a right to mental privacy and a right to protection from algorithmic bias, calling for neuro data to be kept private. If a user permits the use of their neuro data, especially in a commercial context, that use should be strictly regulated. Over the past decade, Facebook and Google have clearly demonstrated that the market should not dictate ethical data practices. Despite political pressure on Big Tech to fix its data privacy practices, the model remains in dire need of improvement. As publicly owned corporations, Big Tech companies’ primary objective is to increase profits for shareholders—a reasonable justification. Thus, citizens and governments are responsible for adequately protecting individual data; data privacy legislation would greatly support the right to mental privacy and the right to protection from algorithmic bias. Targeted advertisements on social media already influence individuals’ beliefs—consider the nightmare of political ads on Facebook during the 2016 election cycle—so it is dystopic to imagine the magnitude of manipulation that the use of neuro data in algorithms could reach. Though current data collection and use is far from where privacy advocates would like it to be, taking an ex-ante approach to the next generation’s data will help protect individuals’ privacy rights.

Equal access to advanced neuro technologies is critical to promote fairness and justice. Putting advanced technology into the hands of only “majority” groups will likely increase social stratification. As an example, think of the iPhone 5S and its fingerprint scanner technology; only consumers able to afford the new phone’s price tag could access its new technological capabilities. As updated models of the iPhone became available, the older models trickled down to consumers who could not previously afford an iPhone 5S. Neuro technologies could follow this same trickle-down approach. Unlike the iPhone, however, neuro technology could allow a user to forgo reading by downloading textbooks directly into the brain. Considering that this technology could cost millions of dollars, wealthier individuals could gain exponential advantages simply by being able to seize the opportunity; moreover, because of the steep price, the technology may not become affordable (trickle down) to the next class for years or decades, letting the rich get richer. With this in mind, an egalitarian approach is essential to promote social fairness by distributing neuro technologies to all—not just the rich.

Positive movement toward enacting these principles can be seen in Chile’s recent constitutional amendment. Chilean lawmakers worked directly with the founder of the NeuroRights Initiative, Dr. Rafael Yuste, to model their proposed amendment on the five rights set out in the NeuroRights Initiative. Chile’s recognition of these rights on a constitutional level is exemplary and should be praised and replicated internationally to protect individuals’ data.

Future neuro technologies will unlock new realms for humans to explore. Through these channels, we have the capability of further advancing and improving each generation’s experience of the human condition. However, precautions must be put in place to preserve individual rights against unprecedentedly powerful threats. Just look at Jurassic Park! As Dr. Ian Malcolm remarks, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” Whether one agrees with Dr. Malcolm or not is beside the point. The analogy between Jurassic Park and individual privacy is significant in elucidating the imminence of technological threats. Sure, one is a reincarnation of colossal, deadly beasts while the other is an invisible, sneaky enemy. But be wary: the lack of privacy regulations for neuro data and technologies is far more frightening. A dinosaur can wipe out a group of humans within proximity; privacy threats know no such bounds. There is no helicoptering to safety. As individuals, we must call upon our lawmakers to secure our individual rights against significant technological threats before it is too late.