What Can You Trust in an Age of AI and Deepfakes?

by John Sucich | Jan 25, 2024 | General BI

Reading Time: 4 minutes

One of the most important aspects of analytics is whether you can trust the data. That’s why data governance is so important: to get the best results, you have to make sure the technology is working with accurate data in the first place.

When it comes to artificial intelligence (AI), trust comes into play not just with the data that goes into the technology, but also with the information that it produces. Deepfake technology uses artificial intelligence to manipulate video or audio to produce a convincing replica of reality. In 2024, security experts are on alert for misinformation, and trust in AI will face its biggest challenge yet.

Trust issues

According to The Economist, 2024 is the “biggest election year in history,” with nationwide elections, in which the whole eligible population can vote, taking place in 76 countries, home to some 4 billion people, including the United States. With the increased use of deepfakes, lawmakers at both the state and federal levels have moved to stiffen the penalties for those caught manipulating images or words. In other cases, they are demanding that any media that uses AI carry a disclaimer along the lines of: “Media depicting the candidate has been altered or artificially generated.” Lawmakers are hesitant to call for outright bans on ads that use artificial intelligence because they don’t want to end up in a fight over First Amendment rights.

Many of the countries holding elections are watching one another, because political parties abroad have already used AI to try to influence voters. The examples range from AI-generated imagery depicting the grim future a party claims would follow its opponents’ election, to fabricated conversations that suggest corrupt maneuvering. In one unique case, the former Prime Minister of Pakistan, who is in prison and banned from making campaign and broadcast appearances, used AI-generated audio of himself giving a speech to get around the restrictions.

AI hasn’t just infiltrated the political world. Celebrities are fighting the unauthorized use of their voices and likenesses in a variety of ways, including fake ads on social media. Experts say the tools to make such ads are easily accessible: in many cases, all it takes is a script fed to an AI voice generator for an audio-only scam, and if video is involved, lip-syncing programs can graft that audio onto existing footage.

The damage can become even more far-reaching if fake videos are widely shared. Social media companies say they take punitive action when they discover something is false, but acknowledge that fake ads can be hard to identify at first. Whether it is social media gatekeepers or PR professionals looking to share an enticing piece of content, everyone needs to be more vigilant about identifying something artificial.

AI as a solution

As much as AI can be blamed for the rise in misleading media, it can also be part of the solution. Developers are starting to train AI to distinguish what is authentic from what is not. Although it is getting harder, humans can often spot small abnormalities, such as inconsistent lighting or shading in a video, that reveal a misleading edit. Those are some of the same artifacts AI can be trained to detect.
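
As a rough illustration of the lighting cue mentioned above, here is a toy heuristic, not a production detector, that flags abrupt frame-to-frame brightness jumps in a video. Real detection systems rely on trained models; the file name, the threshold value, and the idea of using a single hand-written rule are all assumptions made for the sake of the sketch.

```python
# Toy heuristic sketch (not a production deepfake detector): flag
# abrupt frame-to-frame brightness jumps, one of the lighting
# abnormalities a human reviewer might look for.
# Requires the opencv-python and numpy packages.
import cv2
import numpy as np

def brightness_anomalies(video_path: str, jump_threshold: float = 25.0):
    """Return frame indices where mean brightness jumps sharply."""
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Average gray-level of the frame as a crude lighting measure.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        means.append(float(gray.mean()))
    cap.release()
    diffs = np.abs(np.diff(means))
    return [i + 1 for i, d in enumerate(diffs) if d > jump_threshold]

# Example (path is a placeholder): flagged frames are only candidates
# for closer human review, not proof of manipulation.
# print(brightness_anomalies("campaign_clip.mp4"))
```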

Blockchain is one tool that can help identify authentic video. By recording a cryptographic fingerprint of the original file on an immutable ledger, it can later show whether a circulating copy is the original version or has been tampered with.
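
A minimal sketch of the mechanism that underpins this kind of authentication: hash the original file once, record the digest somewhere tamper-evident such as a blockchain, and later re-hash any copy to compare. The file names here are placeholders, and the ledger itself is out of scope for the sketch.

```python
# Content fingerprinting sketch: any change to the file, however
# small, produces a completely different SHA-256 digest.
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file, read in chunks to handle large videos."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_original(copy_path: str, recorded_digest: str) -> bool:
    """Check a circulating copy against the digest recorded at publication."""
    return fingerprint(copy_path) == recorded_digest

# Example usage (paths are placeholders):
# recorded = fingerprint("original_interview.mp4")  # stored on a ledger
# print(is_original("copy_from_social.mp4", recorded))
```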

Scams can target everyone, from high-profile personalities such as celebrities and politicians to everyday people. One of the most prominent early cases of deepfake technology involved the impersonation of a CEO’s voice to commit financial fraud. When it comes to personal information, experts say you should be careful about what you share online, enable strong privacy settings, and take advantage of security features such as multi-factor authentication. In most cases, the kind of responsible online behavior taught as early as grade school applies at any age.

Whenever a versatile technology has the power to do great things, there will be bad actors who try to exploit it for selfish reasons. And as hard as those bad actors work to manipulate something like AI, you can be sure there will be people working to protect it, staying one step ahead to make sure we can all trust the data we are seeing.

John Sucich