
Deepfakes in the global election year of 2024: A weapon of mass deception?

As fabricated images, videos and audio clips of real people go mainstream, the prospect of a firehose of AI-powered disinformation is a cause for mounting concern.

Fake news has dominated election headlines ever since it became a big story during the race for the White House back in 2016. But eight years later, there’s an arguably bigger threat: a combination of disinformation and deepfakes that could fool even the experts. Chances are high that recent examples of election-themed AI-generated content – including a slew of images and videos circulating in the run-up to Argentina’s presidential election and a doctored audio clip of US President Joe Biden – were harbingers of what’s likely to come on a larger scale.

With around a quarter of the world’s population heading to the polls in 2024, concerns are rising that nefarious actors could use disinformation and AI-powered trickery to influence the results. Many experts fear the consequences of deepfakes going mainstream.

The deepfake disinformation threat

As mentioned, no fewer than two billion people are about to head to their local polling stations this year to vote for their favored representatives and state leaders. With major elections set to take place in dozens of countries, including the US, UK and India (as well as for the European Parliament), this year has the potential to reshape the political landscape and the direction of geopolitics for the next few years – and beyond.

At the same time, misinformation and disinformation were recently ranked by the World Economic Forum (WEF) as the number one global risk of the next two years – and the number two current risk, behind only extreme weather. That is the verdict of the 1,490 experts from academia, business, government, the international community and civil society whom the WEF consulted.

The challenge with deepfakes is that the AI-powered technology is now cheap, accessible and powerful enough to cause harm on a large scale. It democratizes the ability of cybercriminals, state actors and hacktivists to launch convincing disinformation campaigns, as well as more ad hoc, one-off scams.

The report warns: “Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years … there is a risk that some governments will act too slowly, facing a trade-off between preventing misinformation and protecting free speech.”

 


(Deep)faking it

The challenge is that tools such as ChatGPT and other freely accessible generative AI (GenAI) systems have put deepfake-driven disinformation campaigns within reach of a far broader range of individuals. With much of the hard work done for them, malicious actors have more time to spend on their messaging and on amplification efforts to ensure their fake content gets seen and heard.

In an election context, deepfakes could obviously be used to erode voter trust in a particular candidate. After all, it’s easier to convince someone not to do something than the other way around. If supporters of a political party or candidate can be suitably swayed by faked audio or video, that would be a definite win for rival groups. In some situations, rogue states may look to undermine faith in the entire democratic process, so that whoever wins will have a hard time governing with legitimacy.

At the heart of the challenge lies a simple truth: when humans process information, they tend to value quantity and ease of understanding. That means the more content we view with a similar message, and the easier it is to understand, the higher the chance we’ll believe it. It’s why marketing campaigns tend to be built around short, continually repeated messages. Add to this the fact that deepfakes are becoming increasingly hard to tell apart from real content, and you have a potential recipe for democratic disaster.

From theory to practice

Worryingly, deepfakes are likely to have an impact on voter sentiment. Take a recent example: in January 2024, a deepfaked audio message of US President Joe Biden was circulated via robocall to an unknown number of primary voters in New Hampshire. In the message, “Biden” apparently told them not to turn out and instead to “save your vote for the November election.” The caller ID was also spoofed to make the automated message appear to come from the personal number of Kathy Sullivan, a former state Democratic Party chair now running a pro-Biden super PAC.

It’s not hard to see how such calls could be used to dissuade voters from turning out for their preferred candidate ahead of the presidential election in November. The risk will be particularly acute in tightly contested races, where the shift of a small number of voters from one side to the other determines the result. With just tens of thousands of voters in a handful of swing states likely to decide the outcome, a targeted campaign like this could do untold damage. And because, as in the case above, the message spread via robocalls rather than social media, its impact is even harder to track or measure.

What are the tech firms doing about it?

Both YouTube and Facebook are said to have been slow in responding to some deepfakes that were meant to influence a recent election. That’s despite a new EU law, the Digital Services Act, which requires social media firms to clamp down on election manipulation attempts.

For its part, OpenAI has said it will implement the digital credentials of the Coalition for Content Provenance and Authenticity (C2PA) for images generated by DALL-E 3. The cryptographic provenance technology – also being trialed by Meta and Google – is designed to make it harder to pass off fake images as genuine.
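To make the provenance concept more concrete, here is a minimal Python sketch of the general idea behind signed content credentials: the generator binds an image’s hash to a manifest describing its origin and signs it, so anyone holding the matching public key can detect tampering. This is an illustration only, not the actual C2PA specification or OpenAI’s implementation; the manifest fields and the “hypothetical-image-model” name are invented for the demo, and it relies on the widely used `cryptography` library.

```python
# Minimal sketch of content-provenance signing, in the spirit of C2PA.
# NOT the real C2PA format: the spec embeds signed manifests in the media
# file itself; here we simply sign a hash-bound manifest with Ed25519
# to show why tampering becomes detectable.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Build a provenance manifest binding the image hash to its origin."""
    return {
        "generator": generator,  # e.g. the model that produced the image
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }


def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    """Sign the canonically serialized manifest so edits are detectable."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return key.sign(payload)


def verify_manifest(
    manifest: dict,
    signature: bytes,
    public_key: Ed25519PublicKey,
    image_bytes: bytes,
) -> bool:
    """Check both the signature and that the hash matches the image."""
    if hashlib.sha256(image_bytes).hexdigest() != manifest["sha256"]:
        return False  # image was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False  # manifest was altered or signed by a different key


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = b"\x89PNG...stand-in image bytes for the demo..."

    manifest = make_manifest(image, generator="hypothetical-image-model")
    signature = sign_manifest(manifest, key)

    print(verify_manifest(manifest, signature, key.public_key(), image))         # True
    print(verify_manifest(manifest, signature, key.public_key(), image + b"x"))  # False
```

In the real C2PA scheme, the signed manifest travels inside the media file itself and the signing certificate chains back to a trusted authority, so a verifier does not need to obtain the creator’s public key out of band the way this sketch does.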

However, these are still just baby steps, and there are justifiable concerns that the technological response to the threat will be too little, too late as election fever grips the globe. Faked audio or video will be especially difficult to track and debunk swiftly when it spreads through relatively closed channels such as WhatsApp groups or robocalls.

The theory of “anchoring bias” suggests that the first piece of information humans hear is the one that sticks in our minds, even if it turns out to be false. If deepfakers get to swing voters first, all bets are off as to who the ultimate victor will be. In the age of social media and AI-powered disinformation, Jonathan Swift’s adage “falsehood flies, and truth comes limping after it” takes on a whole new meaning.

