Warning that the backlash could have "a chilling effect on valuable research," more than 30 ethicists have signed a column arguing that Facebook's covert mood-manipulation study was not unethical.
As you'll recall, the study, conducted with Cornell University researchers, adjusted the Facebook algorithm that determines which content appears in a user's News Feed. The experiment measured how changes in the amount of positive and negative language a person sees affect the tone of their subsequent posts. Privacy advocates harshly condemned the study because Facebook users were unaware that they were being experimented upon.
But as Michelle Meyer and 32 other bioethicists argue, these criticisms are "misjudgements" that "will drive social trials underground." Worse, they say, the backlash perpetuates the presumption that research is inherently dangerous. And as they point out, the study uncovered effects that would not otherwise have been known. They write:
Some have said that Facebook "purposefully messed with people's minds". Maybe; but no more so than usual. The study did not violate anyone's privacy, and attempting to improve users' experience is consistent with Facebook's relationship with its consumers.
It is true that Facebook altered its algorithm for the study, but it does that all the time, and this alteration was not known at the time to increase risk to anyone involved. Academic studies have suggested that users are made unhappy by exposure to positive posts ... The results of Facebook's study pointed in the opposite direction: users who were exposed to less positive content very slightly decreased their own use of positive words and increased their use of negative words.
We do not know whether that is because negativity is 'contagious' or because the complaints of others give us permission to chime in with the negative emotions we already feel. The first explanation hints at a public-health concern. The second reinforces our knowledge that human behaviour is shaped by social norms. To determine which hypothesis is more likely, Facebook and academic collaborators should do more studies. But the extreme response to this study, some of which seems to have been made without full understanding of what it entailed or what legal and ethical standards require, could result in such research being done in secret or not at all.
The authors go on to say that if critics consider the manipulation of emotional content unethical, then the same concern should apply to Facebook's standard practice, along with similar practices by other companies, non-profit organizations, and governments. Otherwise, such research should be both allowed and encouraged in order to "try to quantify those risks and publish the results."
All this said, the authors concede that the researchers should have sought ethical review beforehand and debriefed the study's participants afterward.