Medical Application: Legal

Legal considerations vary depending on whether neural data are used personally or by third parties. The data collected are extremely valuable: companies need them for their algorithms to learn and improve, and big data in healthcare is projected to be worth upwards of US$78 billion by 2027. In most cases, individual patients do not legally own their health data, though they are allowed access to it. This creates potential ethical and legal concerns. For example, if a patient withdraws consent because she no longer wants to provide her information, is it possible to completely separate and exclude that information from a database, or from software that a third party has developed using the data?
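The closing question is partly a technical one: deleting a record from a database does not undo that record's influence on software, such as a machine-learning model, already built from it. The following minimal Python sketch (all data and variable names are hypothetical) illustrates the problem; fully honoring a withdrawal of consent would require retraining, or so-called machine unlearning, which the data holder may be unable or unwilling to perform.

```python
# Minimal sketch with hypothetical data: removing a patient's record from
# the source database does not change a model already trained on it.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical neural-feature dataset: one row per patient.
X = np.array([[0.2, 1.1], [0.9, 0.4], [1.5, 2.0], [0.3, 0.8]])
y = np.array([0.5, 1.0, 2.2, 0.6])

model = LinearRegression().fit(X, y)   # third party trains on all records
coef_before = model.coef_.copy()

# Patient 2 withdraws consent, so their row is deleted from the database.
X_after = np.delete(X, 2, axis=0)
y_after = np.delete(y, 2)

# The deployed model is untouched; its parameters still encode patient 2.
print(np.allclose(model.coef_, coef_before))            # True

# Only retraining on the reduced data removes that influence.
retrained = LinearRegression().fit(X_after, y_after)
print(np.allclose(model.coef_, retrained.coef_))        # False in general
```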

Efforts are being made across the world to protect neural data. In 2021, for example, Chile approved groundbreaking legislation recognizing the protection of brain activity and brain data as a matter of human rights (Neurorights Foundation, 2021). In the United States, medical data collected by healthcare professionals are protected by the Health Insurance Portability and Accountability Act (HIPAA), which establishes privacy safeguards. However, many of these devices will likely generate data that can be stored on an app or a third-party device, such as an Apple Watch. Data stored on such private devices not only fall outside HIPAA protections but also open the door to state access in a criminal context. For example, the Fourth Amendment protects individuals from unreasonable state search and seizure; however, the U.S. Supreme Court has carved out an exception under the third-party doctrine, which holds that “a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties” (Smith v. Maryland, 1979). This exception persists today, even though technology has evolved to collect ever more private data and information.

Similarly, the Fifth Amendment protects an individual’s right against self-incrimination. While this privilege protects personal testimony, it does not extend to physical evidence. This once-clear distinction becomes complicated with new technologies. For example, the police cannot compel you to reveal the passcode to your phone, as this would force you to reveal the contents of your mind. However, in most jurisdictions, police may compel you to place your finger on your phone to unlock it, or to unlock it with its eye-gaze function. Additionally, data from medical devices are beginning to be used in criminal cases. In one case in the United States, a judge allowed pacemaker data to be admitted to contradict the defendant’s account of his whereabouts at the time of a crime (Maras and Wandt, 2020). Once again, such medical data are treated as physical evidence; however, they may also offer a window into the defendant’s state of mind through biometric signals such as heart rate. As neurotechnologies continue to develop, they will collect and access more sensitive information, much of which may be accessible to the state.

As mentioned above, informed consent and regulatory regimes may be particularly ill-equipped to handle neurotechnologies. In some places, such as Mexico, special regulations for neurotechnology do not exist at all. Elsewhere, as in Japan, regulation of medical technology differs substantially across industrial, research, and commercial uses. Often, the stringency of regulation depends on whether a device is deemed invasive or non-invasive. In the United States, for example, the focus of regulation is on individual autonomy and physical harm. The FDA places medical devices into three classes. The most heavily regulated is Class III, comprising devices that “support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential, unreasonable risk of illness or injury.” The invasive BCI that Neuralink is proposing, for example, would easily fit into this third class. Class I and II devices are subject to less stringent premarket regulatory processes than Class III devices (Moynahan, 2018); these processes focus mostly on registration, manufacturing, and labeling, and often do not require premarket clinical trials. Class III devices must undergo extensive clinical trials, similar to drug trials, unless a device is close enough to an existing device to warrant the less demanding 510(k) review process.

The premise that a more invasive device is necessarily more likely to cause harm may not always be correct. In fact, external neural devices that utilize machine learning may carry considerable risks, and those risks are likely to be amplified precisely because such devices are not viewed as fundamentally dangerous. These devices can extract and manipulate neural data just as powerfully as if they were physically implanted. Despite this, “harm” in the legal context most often refers to physical harm. Structural or somatic impact, however, is an insufficient measure of harm, as harm could extend to the functions of the mind, such as thoughts, memory, or personality.

Example Legal Questions for Consideration:

  • How are data ownership and privacy regulated in relation to this technology? Are there additional concerns?
  • Could this technology be used in unanticipated ways to incriminate a user?
  • What regulatory bodies and laws currently exist in relation to this technology? Are they sufficient? Are there unanticipated legal concerns?