THE 3rd ANNUAL LIST OF EMERGING ETHICAL DILEMMAS AND POLICY ISSUES IN SCIENCE AND TECHNOLOGY FOR 2015
Real-time satellite surveillance video
What if Google Earth gave you real-time images instead of a snapshot 1-3 years old?
Companies such as Planet Labs, Skybox Imaging (recently purchased by Google), and DigitalGlobe have launched dozens of satellites in the last year with the goal of recording the status of the entire Earth in real time (or near real time). The satellites themselves are getting cheaper, smaller, and more sophisticated, with resolutions as fine as 1 foot. Commercial satellite companies make this data available to corporations (or, potentially, to private citizens with enough cash), allowing clients to see useful images of areas coping with natural disasters and humanitarian crises, but also data on the comings and goings of private citizens. How do we decide what should be monitored and how often? Should we use this data to solve crimes? What is the potential for abuse by corporations, governments, police departments, private citizens, or terrorists and other “bad actors”?
Astronaut bioethics (of colonizing Mars)
Plans for long-term missions to, and the colonization of, Mars are already underway. On December 5, NASA launched the Orion spacecraft and NASA Administrator Charles Bolden declared it “Day One of the Mars era.” The company Mars One (along with Lockheed Martin and Surrey Satellite Technology) is planning to launch a robotic mission to Mars in 2018, with humans following in 2025. Currently, 418 men and 287 women from around the world are vying for four spots on the first one-way human settlement mission. But as we watch this unfold with interest, we might ask ourselves the following: Is it ethical to expose people to unknown levels of isolation and physical danger (including radiation exposure) for such a purpose? Will these pioneers lack privacy for the rest of their lives so that we might watch what happens? Is it ethical to conceive or give birth to a child in space or on Mars? And, if so, who protects the rights of a child who was not born on Earth and who never consented to the risks? If we say no to children in space, does that mean we sterilize all astronauts who volunteer for the mission? Given the potential dangers of establishing a new colony severely lacking in resources, how would sick colonists be cared for? And beyond bioethics, we might ask how an off-Earth colony would be governed.
Wearable technology
We are currently attached (literally and figuratively) to multiple technologies that monitor our behavior. The fitness-tracking craze has led to the development of dozens of bracelets and clip-on devices that monitor steps taken, activity levels, heart rate, and more, not to mention the advent of organic electronics that can be layered, printed, painted, or grown on human skin. Google is teaming up with Novartis to create a contact lens that monitors blood sugar levels in diabetics and sends the information to healthcare providers. Combine that with Google Glass and the ability to search the Internet for people while you look straight at them, and you see that we’re already encountering social issues that need to be addressed. The new wave of wearable technology will allow users to photograph or record everything they see. It could even allow parents to view what their children are seeing in real time. Employers are experimenting with devices that track volunteering employees’ movements, tone of voice, and even posture. For now, only aggregate data is collected and analyzed to help employers understand the average workday and how employees relate to each other. But could an employer require its workers to wear devices that monitor how they speak, what they eat, when they take a break, and how stressed they get during a task, and then punish or reward them for good or bad data? Wearables have the potential to educate us and protect our health, but also to violate our privacy in any number of ways.
State-sponsored hacktivism and “soft war”
“Soft war” is a concept used to explain the rights and duties of insurgents (and even terrorists) during armed conflict. It encompasses tactics other than armed force used to achieve political ends. Cyber war and hacktivism could be tools of soft war if used in certain ways by states in inter-state conflict, as opposed to by alienated individuals or groups (like “Anonymous”).
We already live in a state of low-intensity cyber conflict. But as these actions become more aggressive, damaging infrastructure, how do we fight back? Does a nation have a right to defend itself against, or retaliate for, a cyber attack, and if so, under what circumstances? What if the aggressors are non-state actors? If a group of Chinese hackers launched an attack on the US, would that give the US government the right to retaliate against the Chinese government? In a soft war, what are the conditions of self-defense? May that self-defense be preemptive? Who can be attacked in a cyber war? We’ve already seen operations that hack into corporations and steal private citizens’ data. What’s to stop attackers from hacking into our personal wearable devices? Are private citizens attacked by cyberwarriors just another form of collateral damage?
Enhanced pathogens
On October 17, the White House suspended funding for research that would enhance the pathogenicity of viruses such as influenza, SARS, and MERS (often referred to as gain-of-function (GOF) research). Gain-of-function research is not harmful in itself; in fact, it is used to provide vital insights into viruses and how to treat them. But when it is used to increase mammalian transmissibility and virulence, the altered viruses pose serious security and biosafety risks. Those fighting to resume the research claim that GOF studies on viruses are both safe and important to science, insisting that no other form of research would be as productive. Those who argue against this type of research counter that the biosafety risks far outweigh the benefits. They point to hard evidence of human fallibility and the history of laboratory accidents, warning that the release of such a virus into the general population would have devastating effects.
Non-lethal weapons
At first it may seem absurd that weapons that have been around since WWI and are not designed to kill could pose an emerging ethical or policy dilemma. But consider the recent development and proliferation of non-lethal weapons such as laser missiles, blinding weapons, pain rays, sonic weapons, electric weapons, heat rays, and disabling malodorants, as well as the use of gases and sprays by both the military and domestic police forces (which are often the beneficiaries of older military equipment). These weapons may not kill (though there have been fatalities from non-lethal weapons), but they can cause serious pain, physical injuries, and long-term health consequences, the last of which has not been fully investigated. We must also consider that non-lethal weapons may be used more liberally in situations that could be defused by peaceful means (since there is technically no intent to kill), used indiscriminately (without regard for collateral damage), or used as a means of torture (since the harm they cause may be undetectable after a period of time). They can also be misused as a lethal force multiplier: a means of effectively incapacitating the enemy before employing lethal weapons. Non-lethal weapons are certainly preferable to lethal ones, given the choice, but should we continue to pour billions of dollars into weapons that may increase the use of violence overall?
Robot swarms
Researchers at Harvard University recently created a swarm of over 1,000 robots capable of communicating with each other to perform simple tasks such as arranging themselves into shapes and patterns. These “kilobots” require no human intervention beyond the original set of instructions and work together to complete tasks. The tiny bots are modeled on the swarm behavior of insects and could be used to perform environmental cleanups or respond to disasters where humans fear to tread. The concept of driverless cars relies on a similar system: the cars themselves (ideally without human intervention) would communicate with each other to obey traffic laws and deliver people safely to their destinations. But should we be worried about the ethical and policy implications of letting robots work together without humans running interference? What happens if a bot malfunctions and causes harm? Who would be blamed for such an accident? And what if tiny swarms of robots could be set up to spy or to sabotage?
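As a toy illustration of the shape-forming idea, here is a minimal Python sketch in which each agent independently steps toward an assigned point on a circle with no central controller after the initial instructions are handed out. This is not the kilobots’ actual gradient-following algorithm; `step_toward`, `run_swarm`, and all the numbers are invented for the sketch.

```python
import math

def step_toward(pos, target, speed=0.1):
    """Move one agent a fixed distance toward its target point."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target  # close enough: snap onto the target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

def run_swarm(agents, targets, steps=200):
    """Advance every agent simultaneously until the shape forms."""
    for _ in range(steps):
        agents = [step_toward(a, t) for a, t in zip(agents, targets)]
    return agents

# Ten agents scattered on a line converge onto a circle.
agents = [(float(i), 0.0) for i in range(10)]
targets = [(math.cos(2 * math.pi * i / 10), math.sin(2 * math.pi * i / 10))
           for i in range(10)]
final = run_swarm(agents, targets)
```

Even in this stripped-down form, the policy questions in the paragraph above surface naturally: once the loop is running, no human is in the control path, so a bad target list or a buggy `step_toward` plays out on every agent at once.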
Artificial life forms
Research on artificial life forms is an area of synthetic biology focused on custom-building life forms to address specific purposes. Craig Venter and colleagues announced the first synthetic life form in 2010, created from an existing organism by introducing synthetic DNA.
Synthetic life allows scientists to study the origins of life by building it rather than breaking it down, but the technique blurs the line between life and machines, and scientists foresee the ability to program organisms. The ethical and policy issues surrounding innovations in synthetic biology renew concerns raised previously by other biological breakthroughs, including safety issues and the risks of releasing artificial life forms into the environment. Making artificial life forms has been called “playing God” because it allows individuals to create life that does not exist naturally. Gene patents have been a concern for several years now, and synthetic organisms add a new dimension to this policy issue. While customized organisms may one day cure cancer, they may also be used as biological weapons.
Resilient social-ecological systems
We need to build resilient social and ecological systems that can tolerate being pushed to an extreme while maintaining their functionality, either by returning to the previous state or by operating in a new state. Resilient systems endure external pressures such as those caused by climate change, natural disasters, and economic globalization. For example, a resilient electrical system is able to withstand extreme weather events or regain functionality quickly afterwards. A resilient ecosystem can maintain a complex web of life even when one or more organisms are overexploited and the system is stressed by climate change.
Who is responsible for devising and maintaining resilient systems? Both private and public companies are responsible for supporting and enhancing infrastructure that benefits the community. To what degree is it the responsibility of the federal government to ensure that civil infrastructure is resilient to environmental changes? When individuals act in their own self-interest, there is the distinct possibility that their individual actions will fail to maintain the infrastructure and processes that are essential for all of society. This can lead to what Garrett Hardin in 1968 called the “tragedy of the commons,” in which many individuals making rational decisions based on their own interests undermine the collective’s long-term interests. To what extent is it the responsibility of the federal government to enact regulations that can prevent a tragedy of the commons?
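Hardin’s dynamic can be made concrete with a toy simulation (a hypothetical sketch: the payoff function, herd sizes, and the `best_response` routine are all invented for illustration). Each herder keeps the full value of every cow added to a shared pasture, while the overgrazing cost is spread across everyone, so individually rational choices push total grazing well past what is best for the group.

```python
def payoff(own_cows, total_cows, capacity=100):
    """Each cow yields value that shrinks as the commons is overgrazed."""
    value_per_cow = max(0.0, 1.0 - total_cows / capacity)
    return own_cows * value_per_cow

def best_response(others_cows, capacity=100):
    """Pick the herd size that maximizes one herder's own payoff,
    taking everyone else's herds as given."""
    return max(range(capacity + 1),
               key=lambda n: payoff(n, n + others_cows, capacity))

# Ten identical herders repeatedly best-respond to one another.
herds = [0] * 10
for _ in range(50):
    for i in range(10):
        herds[i] = best_response(sum(herds) - herds[i])

total = sum(herds)
```

With these made-up numbers, the group-optimal total is 50 cows, but repeated self-interested best responses drive the total herd far above that, leaving each cow worth little to anyone: the commons is degraded exactly as Hardin described.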
Brain-to-brain interfaces
It’s no Vulcan mind meld, but brain-to-brain interfaces (BBIs) have been achieved, allowing direct communication from one brain to another without speech. The interactions can be between humans or between humans and animals.
In 2014, University of Washington researchers performed a BBI experiment in which one person commanded the hand movements of another person about half a mile away (communication so far has been one-way: one person sends the commands and the other receives them). The setup uses an electroencephalography (EEG) machine to detect brain activity in the sender and a transcranial magnetic stimulation coil to trigger movement in the receiver; this has now been achieved twice, and this year scientists also transmitted words from brain to brain across 5,000 miles. In 2013, Harvard researchers led by Seung-Schik Yoo developed the first interspecies brain-to-brain interface, retrieving a signal from a human’s brain (generated by staring at a flashing light) and transmitting it into the motor cortex of a sleeping rat, causing the rodent to move its tail.
The ethical issues are myriad. What kind of neurosecurity can we put in place to protect individuals from having information accidentally shared from, or removed from, their brains (especially by hackers)? If two individuals share an idea, who is entitled to claim ownership? And who is responsible for actions committed by the recipient of a thought when a separate thinker dictated them?