THE 5th ANNUAL LIST OF EMERGING ETHICAL DILEMMAS AND POLICY ISSUES IN SCIENCE AND TECHNOLOGY
The Self-Healing Body There’s a chip for that.
In August, a team of engineers at the University of California, Berkeley published a paper in the journal Neuron announcing the development of tiny, millimeter-scale wireless devices small enough to be injected into individual nerves. The devices detect electrical activity in the body’s nerves and muscles and communicate it via ultrasound, with the goal of eventually stimulating the body’s own therapeutic responses. Meanwhile, a team in Israel is building nanobots out of DNA for a similar purpose. These nanobots are shaped to hold a drug securely inside until the time and place the body needs it. Preliminary tests on cockroaches have shown that a person wearing an EEG cap to measure brain activity can indeed control the DNA bots (that is, open or close them) by either doing mental arithmetic or letting their brain rest.
If these researchers are successful, we could someday have these tiny bots implanted all over our bodies to detect the first signs of disease. They are already being eyed by researchers and physicians who want to use them for mental illnesses like depression and schizophrenia (which are difficult both to diagnose and to treat).
NeuV’s Emotion Engine Now your car can love you back.
Honda and Japanese telecommunications firm SoftBank partnered earlier this year to develop AI technology that would allow conversations between car and driver. But the NeuV goes beyond banter. Honda’s press release described its full potential:
Through this joint research project, Honda and SoftBank will strive to enable mobility products to utilize conversations with the driver, together with other information obtained from various sensors and cameras installed on the mobility product, both to perceive the emotions of the driver and to engage in dialogue with the driver based on the vehicle’s own emotions. Moreover, by letting mobility products “grow up” while sharing various experiences with their drivers, the project will strive to enable drivers to experience the feeling that their mobility product has become a good partner and thus form a stronger emotional attachment toward it.
Swarm Warfare Face it, nobody likes swarms of things.
DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program “seeks to develop and demonstrate 100+ operationally relevant swarm tactics that could be used by groups of unmanned air and/or ground systems numbering more than 100 robots.” Right now, humans still control all unmanned aerial and ground vehicles (UAVs and UGVs) through computer programs, but DARPA wants to find a way for the drones to act in unison, enhancing the human-swarm interface so that hundreds or thousands of them can be controlled on the battlefield at the same time. This technology should ultimately enable the military to interact with drone swarms through augmented- and virtual-reality interfaces and voice, gesture, and touch commands.
Brain Hacking Are the guys in the tin foil hats right!?
Like fingerprints, brain waves are unique to every individual. Unlike fingerprints, the information they produce is not straightforward, but rather a cacophony of personal data. This is why EEG reading is the next step in biometric identification AND brain hacking. For identification purposes, EEG-based authentication systems use specific features, or markers, of your brain activity. It has been suggested that computers could be secured with EEG software instead of passwords, because a password authenticates you only once, whereas an EEG can keep checking to make sure it’s still you. Right now this technology requires you to wear an EEG-reading device hooked up to a computer, and a would-be hacker would need the training to read EEGs, access to large amounts of data, and the analytical skill to know what to look for. But it’s time we started thinking of ways to protect our neurodata as these devices become more common.
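The continuous-checking idea can be sketched in a few lines. This is a hypothetical illustration, not any real EEG authentication product: the feature values, distance metric, and threshold are all assumptions for the sake of the example.

```python
import math

# Hypothetical sketch of continuous EEG-based authentication: fresh
# feature vectors are compared against an enrolled template on every
# window, instead of authenticating only once like a password.

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def enroll(samples):
    """Average several feature vectors into a per-user template."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def authenticate(template, window, threshold=1.0):
    """Accept the new window only if it stays close to the template."""
    return euclidean(template, window) <= threshold

# Enrollment: noisy readings of one user's markers (e.g., band powers).
readings = [
    [4.1, 7.4, 12.1, 29.9],
    [3.9, 7.6, 11.9, 30.1],
    [4.0, 7.5, 12.0, 30.0],
]
template = enroll(readings)           # ~ [4.0, 7.5, 12.0, 30.0]

genuine = [4.05, 7.45, 12.05, 29.95]  # close to the template
impostor = [7.0, 10.5, 15.0, 33.0]    # far from the template

print(authenticate(template, genuine))   # prints True
print(authenticate(template, impostor))  # prints False
```

The point of the sketch is the loop, not the math: because authentication can be repeated on every window, an attacker who steals one reading cannot stay logged in the way a stolen password allows.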
Medical Ghost Management You should be afraid of this ghost.
Medical ghost management is a comprehensive program for controlling the dissemination of medical research. It is generally done for pharmaceutical companies by professional publication planners, who map out the entire process of publication, from initial research design through to publication of results. Sometimes publication planners work directly for pharmaceutical companies, but often they work for specialized planning agencies that provide services to them. Ghost-managed manuscripts are often published in good journals that are widely read. What they do not reveal is the role that pharmaceutical companies (Pfizer, for example) played in producing the results. Given that big pharmaceutical companies get most of their revenue from blockbuster drugs (and that their share prices are evaluated based on potential blockbusters in the pipeline), the careful communication of information about key drugs is vital to the success of Big Pharma. Ghost management allows pharmaceutical companies to influence the literature that concerns their products. This in turn influences the prescribing practices of the doctors who read that literature and affects their clinical practice.
Reanimating Cryonics Oh great, another 1000 years of THAT guy.
Cryonics is getting a second go-round. The idea, first developed in the 1960s when Robert Ettinger wrote his book The Prospect of Immortality, is the new darling of Silicon Valley. In its simplest form, cryonics aims to freeze humans immediately after the moment of death in order to reanimate them decades, or centuries, later, when their injuries and diseases are treatable or old age can be reversed. The 21st-century version has a slightly different bent in that it’s all about the brain. Companies like TransTime, 21st Century Medicine, BioViva, KrioRus, Alcor Life Extension, and the Cryonics Institute either still have preserved bodies or are now setting up contracts for preservation with the still-living. Lawyers are drafting new life insurance policies for people who plan to freeze their bodies, or just their heads. Freezing your brain is an interesting prospect, since it would require finding a new home for it upon thawing (3-D printing has been thrown out there as a possibility). But the new cryonics is more about preserving consciousness than the physical structure of the brain. Perhaps this is why Silicon Valley tech millionaires and billionaires are among the most invested in it. Those who see the brain as just a very sophisticated hard drive might be willing to gamble on there being a day when their consciousness can be downloaded to a computer. Forget preventing our bodies from aging; this is a rebooting of consciousness in a way that can be backed up and restored indefinitely.
Automated Politics Who’s really in charge here?
Putting parties aside, a lot of us feel like Twitter bots had more to say than the voting public in the last US election cycle. There was a constant deluge of hashtags being used, then spoofed, then reclaimed. Accusations of scandals seemed never-ending. The “news” never stopped, not even to sleep or eat, because these social media accounts aren’t actually manned by people but controlled by bots. In an election year, one fear with automated politics is that these accounts don’t reflect popular sentiment but create the illusion that the majority of voters are thinking a certain way. One campaign will use the other’s hashtag to draw views, then link to altered or out-of-context photos, memes, quotes, and news. By co-opting the tools of their opponents, they infiltrate a group of supporters and undermine their trust in the government. A Twitter bot that reports a blatantly incorrect or misleading news item has the power to undermine a reader’s faith in the mainstream press. Many of these bots are gone days or weeks after they fulfill their goal (skewing a poll, making a hashtag trend, misleading readers on a particular issue).
Predicting Criminality Can a computer predict who will commit a crime?
A research team from McMaster University in Canada and Shanghai Jiao Tong University in China originally set out to disprove the idea that there could be a link between facial features and criminality. However, in the 2016 paper they ended up writing, they claim their experiment proved otherwise: that computers could, in fact, detect a criminal based on facial features. The dangers of such experiments are clear. Non-criminals are at risk of being labeled unfairly and, more generally, we risk validating the idea that a person can “look like a criminal.” Physiognomy (the assessment of character traits based on outer appearance) has long been regarded as a pseudoscience and goes back at least to the Greek philosopher Aristotle. It regained favor in the European Middle Ages and Renaissance, and in the United States it was revived again as part of the eugenics movement.
The Robot Cloud The robots are collaborating.
Teaching a robot how to grasp a pen seems like an innocent enough project. But researchers at Brown University, Google Research, and the Dex-Net project (among others) are exploring ways in which their robots could store the data from thousands of practice attempts in the cloud, to be downloaded later by other robots that need the same skills. Now it’s not just about picking up objects, but about massive data transfers between robots.
It’s helpful when our devices cooperate with one another, but this is a whole new can of worms. We’re not days from being incapacitated by teams of robots working together, but there are important questions that scientists, citizens, and policymakers alike should ask, such as: How will we regulate information in the cloud? How much autonomy should we give to our robotic aides when it comes to sharing and downloading information? Can the robot or cloud be hacked to download dangerous information? How eager should we be to cut humans out of the learning process?
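As a rough sketch of the idea, and assuming a purely hypothetical in-memory “cloud” store with made-up skill parameters, upload-and-reuse between robots might look like this:

```python
# Hypothetical sketch of cloud-based skill sharing: one robot uploads
# parameters learned from many practice grasps; a second robot with no
# practice of its own downloads and reuses them. All names are invented.

class SkillCloud:
    """A minimal shared store mapping skill names to learned parameters."""
    def __init__(self):
        self._skills = {}

    def upload(self, name, params, attempts):
        # Keep whichever upload is backed by the most practice attempts.
        current = self._skills.get(name)
        if current is None or attempts > current["attempts"]:
            self._skills[name] = {"params": params, "attempts": attempts}

    def download(self, name):
        entry = self._skills.get(name)
        return None if entry is None else entry["params"]

class Robot:
    def __init__(self, name, cloud):
        self.name = name
        self.cloud = cloud
        self.skills = {}

    def practice(self, skill, params, attempts):
        """Learn locally, then share the result with the cloud."""
        self.skills[skill] = params
        self.cloud.upload(skill, params, attempts)

    def learn_from_cloud(self, skill):
        """Skip practice entirely by downloading another robot's skill."""
        params = self.cloud.download(skill)
        if params is not None:
            self.skills[skill] = params
        return params

cloud = SkillCloud()
teacher = Robot("arm-A", cloud)
student = Robot("arm-B", cloud)

teacher.practice("grasp-pen",
                 {"grip_width_mm": 11.0, "approach_deg": 75.0},
                 attempts=5000)
print(student.learn_from_cloud("grasp-pen"))  # reuses the learned parameters
```

Even this toy version surfaces the policy questions above: the `upload` rule silently decides whose experience wins, and `learn_from_cloud` accepts whatever the store returns with no human in the loop.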
Edublocks Everyone’s an expert at something?
Edublocks are modular “pieces of education,” based on the idea that education and work are integrated rather than sequential and that learning takes place everywhere and from everyone, rather than just in a formal school environment. The designers’ goal is that by 2026 there will be a large marketplace of informal experts and learners exchanging skills and knowledge for money. The Ledger that records Edublocks is built on blockchain, the same technology that underpins Bitcoin. And like Bitcoin, there will be no central authority managing the process. But the designers of Edublocks don’t see this as an impediment; rather, it is fuel for a new speculative economy in which people invest in each other and can identify the most lucrative skills. While many of us have daydreamed about getting credit for what we know and can do, rather than the title assigned to our job by HR, this system brings up dozens of ethical dilemmas and policy issues, including: Do we want people learning from self-identified experts? How can we measure the knowledge gained from these blocks? How easy will it be to game the system by buying blocks and never doing the learning?
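To make the blockchain angle concrete, here is a minimal, hypothetical sketch (not the actual Ledger design) of the property such systems rely on: each block stores the hash of the block before it, so altering any past record breaks the chain and is detectable without a central authority.

```python
import hashlib
import json

# Illustrative hash-chained ledger. Record fields are invented; a real
# blockchain would add signatures and distributed consensus on top.

def block_hash(block):
    """Deterministic hash of a block's full contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, record):
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})

def verify(chain):
    """Recompute every link; any edited block breaks its successor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, {"learner": "alice", "block": "intro-python", "teacher": "bob"})
add_block(ledger, {"learner": "alice", "block": "statistics-101", "teacher": "carol"})
print(verify(ledger))                       # prints True: chain intact

ledger[0]["record"]["teacher"] = "mallory"  # tamper with an earlier entry
print(verify(ledger))                       # prints False: tampering detected
```

Note what the sketch does and does not guarantee: the chain proves a record hasn’t been altered after the fact, but it says nothing about whether the learning it records actually happened, which is exactly the gaming concern raised above.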