Facebook is using artificial intelligence to signal deviations from normal posting behavior to provide warning of possible suicidal tendencies. Is that okay?
In today’s hyperconnected world, we are generating and collecting so much data that it is beyond human capability to sift through it all. Indeed, one application of artificial intelligence is identifying patterns and deviations in posts that signal intent. Facebook is using AI in this way to extract value from its own Big Data trove. While that may be applied to a good purpose, it also raises ethical concerns.
Where might one get insight into this issue? In my own search, I found an organization called PERVADE (Pervasive Data Ethics for Computational Research). With the cooperation of six universities and the funding it received this September, it is working to frame the questions and move toward the answers.
I reached out to the organization for some expert views on the ethical questions related to Facebook’s announcement that it was incorporating AI in its expanded suicide-signal detection effort. That led to a call with one of the group’s members, Matthew Bietz.
Bietz told me the people involved in PERVADE are researching the ramifications of pervasive data, which encompasses continuous data collection — not just from what we post to social media, but also from the “digital traces that we leave behind anytime we’re online,” such as when we Google or email. New connections from the Internet of Things (IoT) and wearables further contribute to the growing body of “data about spaces we’re in,” he said. As this phenomenon is “relatively new,” it opens up new questions to explore with respect to “data ethics.”
Data detection and the EU
Noting that Facebook had declared it would not extend its suicide-signal-detection tools into the European Union, I asked Bietz whether the program would fall afoul of the EU’s General Data Protection Regulation. He acknowledged that the EU has “some of the strictest data regulations in the world” but added that “it is not entirely clear that what Facebook is doing” would be illegal under the GDPR standard.
It might be more accurate to say that Facebook is venturing onto “an edge that hasn’t really been tested,” he said, “and my guess is they decided they don’t want to be the test case.”
Facebook’s terms of service give it permission to look at data, Bietz said, so that activity doesn’t automatically violate the GDPR privacy regulations. But the legislation prohibits companies from profiling based solely “on algorithms or just automatically by a computer.”
That means that if the algorithm makes the call on a signal “without people being involved” in determining what the signal might mean, such activity would be on the wrong side of the GDPR. It is possible, though, to stay within the legal limits by using the algorithm only “to help decide which posts need human eyes” on them, he said.
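The distinction Bietz draws can be sketched in code. The following is a minimal, hypothetical illustration of that human-in-the-loop pattern: the algorithm only ranks posts for human review and never acts on any user by itself. The keyword heuristic, function names, and threshold here are invented for illustration and bear no relation to Facebook’s actual model.

```python
# Hypothetical sketch of the human-in-the-loop triage pattern described above.
# The scoring heuristic is an illustrative stand-in, not Facebook's system.

DISTRESS_TERMS = {"goodbye", "hopeless", "can't go on", "burden"}

def risk_score(post: str) -> float:
    """Toy heuristic: fraction of distress terms appearing in the post."""
    text = post.lower()
    hits = sum(1 for term in DISTRESS_TERMS if term in text)
    return hits / len(DISTRESS_TERMS)

def triage(posts: list[str], threshold: float = 0.25) -> list[str]:
    """Return only the posts that should get human eyes.

    Crucially, the output is a review queue for trained people --
    the code makes no automated decision about any individual,
    which is the line Bietz suggests the GDPR draws.
    """
    return [p for p in posts if risk_score(p) >= threshold]

queue = triage([
    "Had a great day at the beach!",
    "I feel hopeless, like a burden to everyone. Goodbye.",
])
print(queue)  # only the second post reaches the human review queue
```

The design choice is the point: the threshold decides which posts a person sees, not what happens to the poster.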