Days after the bombing of a maternity and children’s hospital in the Ukrainian city of Mariupol, comments claiming that the attack never happened began flooding the queues of workers moderating Facebook and Instagram content on behalf of the apps’ owner, Meta Platforms. The bombardment killed at least three people, Ukraine’s President Volodymyr Zelenskiy said.
Among the most-recognized women was Mariana Vishegirskaya, a Ukrainian fashion and beauty influencer. Online expressions of support for the mother-to-be quickly turned to attacks on her Instagram account; her case is just one example. Russian officialdom seized on the images, setting them side-by-side against her glossy Instagram photos in an effort to persuade viewers that the attack had been faked. On state television and social media, and in the chamber of the U.N. Security Council, Moscow falsely alleged that Vishegirskaya had donned make-up and multiple outfits in an elaborately staged hoax orchestrated by Ukrainian forces.
The posts were vile and appeared to be orchestrated, the moderator told Reuters. But many were within the company’s rules, the person said, because they did not directly mention the attack. Meta declined to comment on its handling of the activity involving Vishegirskaya, but said that multiple teams are addressing the issue. The company stated that it has separate, expert teams and outside partners that review misinformation and inauthentic behavior, and that it has been applying its policies to counter that activity forcefully throughout the war.
The company said it was considering new steps to address misinformation and hoaxes. Based at a moderation hub of several hundred people reviewing content from Eastern Europe, the two contractors are foot soldiers in Meta’s battle to police content from the conflict. They are among tens of thousands of low-paid workers at outsourcing firms around the world that Meta contracts to enforce its rules. The tech giant has sought to position itself as a responsible steward of online speech during the invasion. Just a few days into the war, Meta imposed restrictions on Russian state media.
It later said it had pulled down another Russia-based network that was falsely reporting people for violations like hate speech or bullying. The company also attempted to carve out space for users in the region to express their anger over Russia’s invasion and to issue calls to arms in ways Meta normally would not permit. In Ukraine and 11 other countries across Eastern Europe and the Caucasus, it created a series of temporary “spirit of the policy” exemptions to its rules barring hate speech, violent threats and more.
Meta later walked back elements of the exceptions, first limiting them to Ukraine alone and then cancelling one altogether. The documents offer a rare lens into how Meta interprets its policies, called community standards. The company says its system is neutral and rule-based; critics say it is often reactive, driven as much by business considerations and news cycles as by principle. Social media researchers say the approach allows the company to escape accountability. Meta global affairs chief Nick Clegg told employees the company was reversing altogether the exemption that had allowed users to call for the deaths of Putin and Lukashenko.
Inside the company, writing on an internal social platform, some Meta employees expressed frustration that Facebook was allowing Ukrainians to make statements that would have been deemed out of bounds for users posting about previous conflicts in the Middle East and other parts of the world. “Seems this policy is saying hate speech and violence is ok if it is targeting the ‘right’ people,” one employee wrote. Meanwhile, Meta gave moderators no guidance to enhance their ability to disable posts promoting false narratives about Russia’s invasion.
In theory, Meta did have a rule that should have enabled moderators to address the mobs of commenters directing baseless vitriol at Vishegirskaya. Meta’s harassment policy prohibits users from posting content about a violent tragedy, or victims of violent tragedies, that includes claims that the tragedy did not occur. It cited that rule when it removed posts by the Russian Embassy in London that had pushed false claims about the Mariupol bombing. Posts that explicitly alleged that the bombing was staged were eligible for removal. Guidance from Meta enabling moderators to consider context and enforce the spirit of that policy could have helped.
Meta removed those posts on March 16, but declined to comment on why they had evaded its own detection systems. Meta designated Vishegirskaya an involuntary public figure, which meant moderators could finally start deleting the comments under the company’s bullying and harassment policy. But the change, they said, came too late: the flow of posts related to the woman had already slowed to a trickle.