WASHINGTON - Computer engineers and tech-savvy political scientists have warned that cheap, powerful artificial intelligence (AI) tools may soon create fake images, video and audio so realistic that they could fool voters and sway elections.
Initially, these fakes were crude, unconvincing and costly to produce, especially compared with other types of misinformation that are inexpensive and easy to spread. The threat posed by AI-generated media, so-called deepfakes, always seemed a year or two away. Not anymore.
Sophisticated AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost.
When paired with powerful social media algorithms, this fake, digitally created content can spread like wildfire to targeted audiences, taking campaign dirty tricks to a new low.
“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “(T)he big leap forward is the audio and video capabilities ... (that may) have a major impact.”
Among the potential AI-generated fakes are: automated robocalls in a candidate’s voice; audio recordings of a candidate confessing to crimes or making racist comments; video footage showing someone giving a speech they never gave; and faked local news reports.
Such fakes raise hard questions: What happens if an international entity impersonates someone? What is the impact? Is there any recourse?
Rep. Yvette Clarke (D-NY) has introduced legislation in the U.S. House that would require candidates to label campaign advertisements created with AI; she has also sponsored a bill requiring anyone who creates synthetic images to add a watermark indicating as much.
Clarke said her greatest fear is that generative AI could be used to create video or audio that incites violence and turns Americans against one another.
“We’ve got to set up some guardrails,” Clarke told The AP, saying AI is “being weaponized” in political campaigns and “could be extremely disruptive.” (The AP, May 14, 2023: “AI presents political peril for 2024 with threat to mislead voters,” AP News)