Facebook was a catalyst for anti-migrant violence in Germany, according to a new study.
Researchers at the University of Warwick examined more than 3,300 attacks on refugees that took place in the country over a two-year period.
They found that such hate crimes increased in areas with greater Facebook usage during periods of high online anti-refugee sentiment.
This held true in communities ranging from large cities to small towns, regardless of factors such as an area’s wealth or general political outlook.
The authors concluded that social media can act as a catalyst for violence aimed at migrants.
The study follows German Chancellor Angela Merkel’s 2015 decision to open borders to more than a million migrants – many of them refugees from war zones in Iraq, Syria and Afghanistan.
The study, called ‘Fanning the Flames of Hate: Social Media and Hate Crime’, found that ‘anti-refugee hate crimes increase disproportionally in areas with higher Facebook usage during periods of high anti-refugee sentiment online.
‘This effect is especially pronounced for violent incidents against refugees, such as arson and assault.’
According to the New York Times, data from the study showed that in communities where Facebook use was one standard deviation greater than Germany’s per-person average, anti-migrant attacks went up around 50 per cent.
Study authors Karsten Müller and Carlo Schwarz wrote: ‘Our results suggest that social media can act as a propagation mechanism between online hate speech and real-life violent crime.’
In their conclusion, they say their findings ‘suggest that social media has not only become a fertile soil for the spread of hateful ideas but also motivates real-life action.’
The study shows that ‘volatile, short-lived bursts in sentiment within a given location have substantial effects on people’s behaviour, and that social media may play a role in their propagation,’ the authors added.
The Times reports that experts independent of the study described the findings as ‘credible’ and ‘rigorous’.
Facebook did not address the study directly after being contacted by the New York Times, but told the newspaper in an email: ‘Our approach on what is allowed on Facebook has evolved over time and continues to change as we learn from experts in the field.’
Last year, the firm’s chief security officer said Facebook was shutting down one million accounts every day in its bid to banish hate speech.
The same year, Mark Zuckerberg announced the hiring of 3,000 more moderators.
In April, he said Facebook would have AI tools within five to ten years capable of automatically flagging and removing hate speech before it appears.