What fresh hell is this?

In an effort to prevent mass shootings, the White House is considering a controversial proposal from the Suzanne Wright Foundation to try to predict gun violence by monitoring the mentally ill.
The proposal, called SAFEHOME, is the brainchild of Bob Wright, who runs the foundation named after his wife; its stated mission is funding research into pancreatic cancer. SAFEHOME, which stands for Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes, would be part of a larger proposal to establish a new government agency called the Health Advanced Research Projects Agency (HARPA), modeled after DARPA.
Predictive analytics: unproven technology
Estimated to cost $40-60 billion, SAFEHOME would employ tracking software to detect signs of mental instability that could foreshadow mass shootings. If this sounds like pseudoscience, that's because it is. Just because AI and machine learning can process massive amounts of behavioral data doesn't mean they can reliably predict who will become violent. Neurobehavioral technology is still in its infancy.
Now, the idea of an advanced health research agency isn’t necessarily controversial, but the idea that it should come from an outside source certainly is. Questions remain about how HARPA would be run. But that’s a separate conversation.
Do we have the tech?
The concern with SAFEHOME is whether it's possible, or ethical, to use technology such as phones, smartwatches, and devices like the Amazon Echo to try to detect mental illness or the propensity of the mentally ill to become violent. We don't currently have reliable benchmarks for predicting that behavior, and the kind of monitoring this would involve has the potential to create massive privacy violations. And what happens to the falsely accused?
Many have seen this project as an effort to deflect attention away from the gun rights debate, and it remains to be seen just what would happen to those identified as high risk by the algorithms SAFEHOME might employ.

While the project would begin by collecting and processing data from volunteers who would give up any expectation of privacy, it does open the door to a future where the government can surveil those deemed mentally ill. And there’s no way of knowing yet how transparent the project would be.
Arresting the innocent
And even if we do come up with remotely reliable benchmarks for predicting gun violence, what happens then? How do we intervene? Can you punish someone because an algorithm predicted they might be violent, even though they haven't committed a crime (like a real-life Minority Report)? Even gun rights advocates might raise questions about whether people can be barred from owning weapons based on a predictive algorithm bound to produce millions of false positives, as the rough calculation below illustrates.
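To make the false positive problem concrete, here is a back-of-the-envelope base-rate calculation in Python. The population size, the number of would-be shooters, and the 99% sensitivity and specificity figures are all illustrative assumptions, not numbers from the SAFEHOME proposal; the point is only that screening for an extremely rare event inevitably flags far more innocent people than actual threats.

```python
# Base-rate sketch: even a very accurate classifier for an extremely rare
# event produces an enormous number of false positives.
# All figures below are illustrative assumptions, not SAFEHOME parameters.

population = 330_000_000        # rough U.S. population
actual_threats = 100            # assumed (hypothetical) would-be shooters per year
sensitivity = 0.99              # assumed: flags 99% of actual threats
specificity = 0.99              # assumed: clears 99% of non-threats

non_threats = population - actual_threats

flagged_true = sensitivity * actual_threats       # threats correctly flagged
flagged_false = (1 - specificity) * non_threats   # innocent people flagged

precision = flagged_true / (flagged_true + flagged_false)

print(f"Correctly flagged: {flagged_true:,.0f}")
print(f"Falsely flagged:   {flagged_false:,.0f}")
print(f"Chance a flagged person is an actual threat: {precision:.5%}")
```

Under these assumed numbers, roughly 3.3 million innocent people get flagged, and a flagged individual has about a 0.003% chance of actually being a threat. Making the classifier somewhat better doesn't change the basic picture: with so few true positives in such a large population, false positives dominate.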
It's also worth pointing out that many mass shootings in the U.S. have been perpetrated by white supremacists rather than the mentally ill (racism is not classified as a mental illness).
Further reading:
White House weighs controversial plan on mental illness and mass shootings (The Washington Post, 2019)
Trump's plan to stop violence via smartphone tracking isn't just a massive privacy violation, it's also highly unlikely to work (Slate, 2019)
The misguided and extremely flawed SAFEHOME policy (The Hofstra Chronicle, 2019)
The plan to use Fitbit data to stop mass shootings is one of the scariest proposals yet (Gizmodo, 2019)
Trump considering “neurobehavioral” tech to predict mass shooters (Vanity Fair, 2019)