Bioethicists Defend Facebook's Controversial Mood Study


Saying it could have "a chilling effect on valuable research," over 30 ethicists have put their signatures on a column claiming that Facebook's clandestine and manipulative mood study was not inappropriate.


As you'll recall, the Cornell University study made adjustments to the Facebook algorithm that determines what content appears in a user's timeline. The experiment was an effort to measure how changes in the amount of positive and negative language a person sees can affect the mood or tone of their subsequent posts. The study was harshly condemned by privacy advocates because Facebook users were unaware that they were being experimented upon.


But as Michelle Meyer and 32 other bioethicists point out, these criticisms are actually "misjudgements" that "will drive social trials underground." Worse, the authors argue, the backlash perpetuates the presumption that research is dangerous. They also note that the study uncovered effects that would not otherwise have been known. They write:

Some have said that Facebook "purposefully messed with people's minds". Maybe; but no more so than usual. The study did not violate anyone's privacy, and attempting to improve users' experience is consistent with Facebook's relationship with its consumers.

It is true that Facebook altered its algorithm for the study, but it does that all the time, and this alteration was not known at the time to increase risk to anyone involved. Academic studies have suggested that users are made unhappy by exposure to positive posts ... The results of Facebook's study pointed in the opposite direction: users who were exposed to less positive content very slightly decreased their own use of positive words and increased their use of negative words.

We do not know whether that is because negativity is 'contagious' or because the complaints of others give us permission to chime in with the negative emotions we already feel. The first explanation hints at a public-health concern. The second reinforces our knowledge that human behaviour is shaped by social norms. To determine which hypothesis is more likely, Facebook and academic collaborators should do more studies. But the extreme response to this study, some of which seems to have been made without full understanding of what it entailed or what legal and ethical standards require, could result in such research being done in secret or not at all.

The authors go on to say that if critics think the manipulation of emotional content is unethical, then the same concern should apply to Facebook's standard practice, along with similar practices by other companies, non-profit organizations, and governments. Otherwise, such research should be both allowed and encouraged in order to "try to quantify those risks and publish the results."

All this said, the authors admit that the researchers should have sought an ethical review and debriefed the study's participants.





DISCUSSION

Brachiator
Singlestick

"But as Michelle Meyer and 32 other bioethicists point out, these criticisms are actually "misjudgements" that "will drive social trials underground."

This is bullshit.

Is there some mysterious gray market where expensive psychological research is done, and never submitted to peer review journals?

Now, if by underground, they mean more dubious shit will be done by big corporations, that is already happening, and Facebook will always be working that shit to their benefit.

These goofballs are also ignoring the BIG, MAIN, PRIMARY, fucking issue.

The outcome of the research and of Facebook's little algorithm is secondary.

The main thing is that ethical rules prohibit human subjects being used without their knowledge, permission, and consent. The rules for human trials also require that a subject be able to cease being a subject.

This standard was violated.

The fun thing is that many tech geeks see this as unnecessary. And, like little Randian libertarian goofballs, they point to the Holy and Divine "End User License Agreement," and suggest that this document should supersede any other rule or protocol that has ever been established to prevent abuse of human subjects.

These morons also like to bleat about how they always read every EULA and absolutely understand it.

Of course, in the Facebook case, language about this aspect of testing was not in the original EULA. So the geek philosopher kings could never have read it and acknowledged it.

This suggests that for them EULAs are mere fig leaves, and they simply do not care what Facebook and other tech companies do as long as they keep the high-tech toys coming.

"All this said, the authors admit that the researchers should have sought an ethical review and debriefed the study's participants. "

Bottom line is that Facebook and other tech companies would never want this. And this may also be an issue with other entities that do testing of human subjects.

Funny how we are now in a world where some of us are more solicitous of what happens to our pets than what happens to other human beings.