What social media “transparency reports” tell us

Web giants such as Facebook, Twitter and Amazon have been compelled by the European Union to disclose figures that had never been made public before.

The French love LinkedIn, the Spanish are addicted to AliExpress, user reports to Facebook rarely lead to removals, and content moderation is largely automated: here are some lessons from the “transparency reports” that the 19 biggest platforms have just published to comply with the new European rules, in particular the Digital Services Act (DSA).

A quarter of Europeans (104 million) use AliExpress, the sales website of the Chinese company Alibaba, mainly in Spain (20 million) and Portugal (4 million), or about 40% of the inhabitants of the two countries of the Iberian Peninsula. The French are also won over, with 18.7 million users, far ahead of the 10.9 million Germans.

The latter prefer the American online sales giant Amazon, which has 60.4 million active German users, or roughly three quarters of the population, compared with “only” 34 million French and 38 million Italian users. Booking alone does not detail its user numbers.

On the social network side, half of Europeans (258 million) use Facebook and almost as many (257 million) use Instagram. The French are particularly attached to LinkedIn, with 21 million visitors and, above all, 9 million accounts, by far the highest score in Europe, where the professional platform counts 132 million visitors and 45 million accounts.

All platforms and social networks report making extensive use of algorithms and language models to detect and remove illegal content (hate speech, fraud, violence, counterfeiting, nudity, etc.). Millions of products are removed from sales platforms.

For example, AliExpress withdrew 9 million products from sale in recent months for counterfeiting, defects or fraud, 96% of which were identified automatically, with a reliability of 95%, according to the group.

Amazon says it checks 8 billion attempts to modify product listings every day and has taken 274 million actions to remove content that violates its rules, 73 million of them carried out automatically.

Facebook voluntarily removed 46 million pieces of content between April and September 2023, 95% of them automatically. Bing explains that it uses automatic image matching to remove illegal videos. The App Store made 1 million decisions, including 220,000 fully automated ones, such as deleting abusive reviews and closing accounts.

Human moderators are becoming rarer. X (formerly Twitter) claims 2,294 in English, 81 in German, 52 in French, 12 in Arabic and 2 in Italian. TikTok says it has 6,125 people dedicated to content moderation in the EU alone. Meta lists only 1,362 moderators for European languages, but others are based outside the EU, and 2,000 work on images, with no assigned language. LinkedIn has 820 moderators, including 180 in the EU and 30 in French.

The big exception remains Wikipedia, with its tens of thousands of human moderators who handle most of the contributions and corrections, including 5,000 in France.

User reports are processed manually at Meta (Facebook, Instagram, etc.): between April and September 2023, they mainly concerned intellectual property infringement (370,000 reports), defamation (70,000) and privacy violations.

Yet content was removed in only one case in three for intellectual property complaints, one in seven for defamation reports, and one in five for privacy violations. Notably, after an “appeal” procedure against its decisions, 575,000 pieces of content were restored.

Furthermore, Meta's sites received 666 injunctions from authorities to provide information about their users (for fraud, cyberbullying, terrorism, child abuse, hateful content, etc.), responding in an average of nine days.
