The University of Notre Dame's John J. Reilly Center for Science, Technology, and Values has issued its annual list of ethical dilemmas in science and technology. This year's issues include brain-to-brain interfaces, colonization of Mars and state-sponsored “hacktivism.” Director Anjan Chakravartty says the list offers insight into concepts that we may not think about on a daily basis but that are on the cusp of development.

December 9, 2014
SOUTH BEND, Ind. – The John J. Reilly Center for Science, Technology, and Values at the University of Notre Dame has released its annual list of emerging ethical dilemmas and policy issues in science and technology for 2015.
The Reilly Center explores conceptual, ethical and policy issues where science and technology intersect with society from different disciplinary perspectives. Its goal is to promote the advancement of science and technology for the common good.
The Center generates its annual list of emerging ethical dilemmas and policy issues in science and technology with the help of Reilly fellows, other Notre Dame experts and friends of the center. This marks the third year the Center has released a list. Readers are encouraged to vote on the issue they find most compelling at reilly.nd.edu/vote15.
The Center aims to present a list of items for scientists and laypeople alike to consider in the coming months and years as new technologies develop. Each month in 2015, the Reilly Center will present an expanded set of resources for the issue with the most votes, giving readers more information, questions to ask and references to consult.
The ethical dilemmas and policy issues for 2015, presented in no particular order, are:
Real-time satellite surveillance video
What if Google Earth gave you real-time images instead of a snapshot up to three years old? Companies such as Planet Labs, Skybox Imaging (recently purchased by Google) and DigitalGlobe have launched dozens of satellites in the last year with the goal of recording the status of the entire Earth in real time or near-real time. The satellites themselves are getting cheaper, smaller and more sophisticated, with resolutions as fine as 1 foot. Commercial satellite companies make this data available to corporations – or, potentially, to private citizens with enough cash – allowing clients to see useful images of areas coping with natural disasters and humanitarian crises, but also data on the comings and goings of private citizens. How do we decide what should be monitored, and how often? Should we use this data to solve crimes? What is the potential for abuse by corporations, governments, police departments, private citizens or terrorists and other “bad actors”?
Astronaut bioethics (of colonizing Mars)
Plans for long-term space missions to Mars, and for its colonization, are already underway. On Friday (Dec. 5), NASA launched the Orion spacecraft, and NASA Administrator Charles Bolden declared it “Day One of the Mars era.” The company Mars One, along with Lockheed Martin and Surrey Satellite Technology, is planning to launch a robotic mission to Mars in 2018, with humans following in 2025. Currently, 418 men and 287 women from around the world are vying for four spots on the first one-way human settlement mission. But as we watch this unfold with interest, we might ask ourselves the following: Is it ethical to expose people to unknown levels of human isolation and physical danger, including exposure to radiation, for such a purpose? Will these pioneers lack privacy for the rest of their lives so that we might watch what happens? Is it ethical to conceive or give birth to a child in space or on Mars? And, if so, who protects the rights of a child who was not born on Earth and did not consent to the risks? If we say no to children in space, does that mean we sterilize all astronauts who volunteer for the mission? Given the potential dangers of setting up a new colony severely lacking in resources, how would sick colonists be cared for? And beyond bioethics, we might ask how an off-Earth colony would be governed.
Wearable technology
We are currently attached, both literally and figuratively, to multiple technologies that monitor our behaviors. The fitness tracking craze has led to the development of dozens of bracelets and clip-on devices that monitor steps taken, activity levels, heart rate, etc., not to mention the advent of organic electronics that can be layered, printed, painted or grown on human skin. Google is teaming up with Novartis to create a contact lens that monitors blood sugar levels in diabetics and sends the information to health care providers. Combine that with Google Glass and the ability to search the Internet for people while you look straight at them, and you see that we’re already encountering social issues that need to be addressed.
The new wave of wearable technology will allow users to photograph or record everything they see. It could even allow parents to view what their children are seeing in real time. Employers are experimenting with devices that track volunteer employees’ movements, tone of voice and even posture. For now, only the aggregate data is being collected and analyzed to help employers understand the average workday and how employees relate to each other. But could employers require their workers to wear devices that monitor how they speak, what they eat, when they take a break and how stressed they get during a task, and then punish or reward them for good or bad data? Wearables have the potential to educate us and protect our health, as well as violate our privacy in any number of ways.
State-sponsored hacktivism and “soft war”
“Soft war” is a concept used to explain the rights and duties of insurgents and even terrorists during armed conflict. Soft war encompasses tactics other than armed force to achieve political ends. Cyber war and hacktivism could be tools of soft war if used in certain ways by states in interstate conflict, as opposed to by alienated individuals or groups.
We already live in a state of low-intensity cyber conflict. But as these actions become more aggressive and begin damaging infrastructure, how do we fight back? Does a nation have a right to defend itself against, or retaliate for, a cyber attack, and if so, under what circumstances? What if the aggressors are non-state actors? If a group of Chinese hackers launched an attack on the U.S., would that give the U.S. government the right to retaliate against the Chinese government? In a soft war, what are the conditions of self-defense? May that self-defense be preemptive? Who can be attacked in a cyber war? We've already seen operations that hack into corporations and steal private citizens' data. What’s to stop attackers from hacking into our personal wearable devices? Are private citizens attacked by cyberwarriors just another form of collateral damage?
Enhanced pathogens
On Oct. 17, the White House suspended research that would enhance the pathogenicity of viruses such as influenza, SARS and MERS — often referred to as gain-of-function, or GOF, research. Gain-of-function research, in itself, is not harmful; in fact, it is used to provide vital insights into viruses and how to treat them. But when it is used to increase mammalian transmissibility and virulence, the altered viruses pose serious security and biosafety risks. Those fighting to resume research claim that GOF research on viruses is both safe and important to science, insisting that no other form of research would be as productive. Those who argue against this type of research note that the biosafety risks far outweigh the benefits. They point to hard evidence of human fallibility and the history of laboratory accidents and warn that the release of such a virus into the general population would have devastating effects.
Non-lethal weapons
At first it may seem absurd that types of weapons that have been around since