Global giant Meta, the parent company of popular social media platforms Facebook, WhatsApp and Instagram, says its systems are ready to fend off disinformation and voter fraud campaigns meant to influence the upcoming general elections.
The company said this during an interview with Sunday World on Thursday at its Johannesburg offices. It warned that it will remove from its platforms any content aimed at misinforming the public about this year’s fiercely contested national and provincial elections.
Election-specific content policies
Meta’s public policy manager Balkissa Ide Siddo emphasised that the company has election-specific content policies that help prevent voter fraud and election interference.
“On misinformation, we have adopted a three-step approach: remove, reduce and inform. Whether it is misinformation, regular content or content generated by artificial intelligence, if it violates our community standards, we will remove it. In an election context, we are going to remove all content with incitement to violence. Any content that could lead to imminent harm, physical damage, voter suppression. …We will remove it as soon as it comes to our attention,” stated Siddo.
Siddo revealed that over the past eight years the company has developed industry-leading tools and standards specific to safety and security in elections.
Tools specific to safety and security in elections
“We work with independent fact-checkers across the globe, about 100 partners. They fact-check content in 60 languages. In South Africa, we work with AFP and Africa Check. They fact-check in English and also in local languages like Zulu, Sotho, and Setswana.”
The independent fact-checkers review and rate content, and Meta attaches a warning label to content they have debunked.
She said that in the coming weeks Meta will make it possible for users and advertisers to disclose when they post content that is AI-generated.
“If they don’t disclose it and it is brought to our attention, or we detect it with our tools, they will get a warning. And if they don’t change that, they will get further warnings,” she said.
Siddo said Meta has invested more than US$20-billion globally in this area, specifically in dealing with disinformation campaigns and with safety and security during elections.
Stringent screening processes
She stated that the platform has ad transparency tools, which use an authorisation process to screen people who want to run political or election ads on its platforms. Advertisers have to disclose several key pieces of information.
This is because users are demanding more transparency and want to know who is behind an ad, she said.
“When people want to advertise political or election-related ads in South Africa, they have to tell us where they are located. They must also disclose who is paying for the ads. The information they provide will be stored in the ads library. …This information is stored there for seven years, and everyone can access the information,” she said.
Siddo emphasised that because elections differ by country, Meta tailors its approach to each one.
Country-specific intelligence
“We also know that no two elections are the same, so we have a tailored approach for the South African election. … We also collaborate in South Africa with a number of organisations to prepare for the elections. And we are also going to put in place a dedicated elections operations centre. This means pulling together resources from across the company. [These include] threat intelligence experts, data scientists, legal people, comms people, public policy.
“It is really a comprehensive team that comes together. During the elections they analyse, they detect potential threats in real-time and they address them. Throughout the elections they will be working on that 24/7. It is a bit like a war room,” stated Siddo.
Platform has covered over 200 elections
She said Meta has covered more than 200 elections. Though no two elections are the same, the company prioritises listening to experts, civil society, regulators and authorities to understand trends, potential challenges and gaps, and to address them.
“That is the reason why for us in South Africa it is important that we remove any harmful content from our platform. The second thing is to combat misinformation (through a fact-checking programme). The third thing is about countering any abuses of AI-generated content. Fourth is about collaborating with local authorities like the IEC, as well as the parliamentary portfolio committee on communications and civil society organisations.”
The company has also partnered with 10 national and community radio stations to raise awareness about misinformation and hate speech, how to detect such content, and what to do when listeners come across it.