‘A lack of trust’: How deepfakes and AI could rattle the US elections

    On January 21, Patricia Gingrich was about to sit down for dinner when her landline phone rang. The New Hampshire voter picked up and heard a voice telling her not to vote in the upcoming presidential primary.

    “As I listened to it, I thought, gosh, that sounds like Joe Biden,” Gingrich told Al Jazeera. “But the fact that he was saying to save your vote, don’t use it in this next election — I knew Joe Biden would never say that.”

    The voice may have sounded like the United States president, but it wasn’t him: It was a deepfake, generated by artificial intelligence (AI).

    Experts warn that deepfakes — audio, video or images created using AI tools, with the intent to mislead — pose a high risk to US voters ahead of the November general election, not only by injecting false content into the race but by eroding public trust.

    Gingrich said she didn’t fall for the Biden deepfake, but she fears it may have suppressed voter turnout. The message reached nearly 5,000 New Hampshire voters just days before the state’s primary.

“This could be bad for people that aren’t so informed about what’s going on with the Democrats,” said Gingrich, who is the chair of the Barrington Democratic Committee in Barrington, New Hampshire.

    “If they really thought they shouldn’t vote for something and Joe Biden was telling them not to, then maybe they wouldn’t attend that vote.”

The voice of US President Joe Biden was spoofed in a robocall sent to New Hampshire primary voters [Leah Millis/Reuters]

    Online groups vulnerable

    The Biden call wasn’t the only deepfake so far this election cycle. Before calling off his presidential bid, Florida Governor Ron DeSantis’s campaign shared a video that contained AI-generated images of Donald Trump hugging immunologist Anthony Fauci — two figures who clashed publicly during the COVID-19 pandemic.

    And in September, a different robocall went out to 300 voters expected to participate in South Carolina’s Republican primary. This time, recipients heard an AI-generated voice that imitated Senator Lindsey Graham, asking whom they were voting for.

    The practice of altering or faking content — especially for political gain — has existed since the dawn of US politics. Even the country’s first president, George Washington, had to contend with a series of “spurious letters” that appeared to show him questioning the cause of US independence.

    But AI tools are now advanced enough to convincingly mimic people quickly and cheaply, heightening the risk of disinformation.

    A study published earlier this year by researchers at George Washington University predicted that, by mid-2024, daily “AI attacks” would escalate, posing a threat to the November general election.

    The study’s lead author Neil Johnson told Al Jazeera that the highest risk doesn’t come from the recent, obviously fake robocalls — which contained eyebrow-raising messages — but rather from more convincing deepfakes.

    “It’s going to be nuanced images, changed images, not entirely fake information because fake information attracts the attention of disinformation checkers,” Johnson said.

    The study found that online communities are linked in a way that allows bad actors to send large quantities of manipulated media directly into the mainstream.

    Communities in swing states could be especially vulnerable, as could parenting groups on platforms like Facebook.

“The role of parenting communities is going to be a big one,” Johnson said, pointing to the rapid spread of vaccine misinformation during the pandemic as an example.

    “I do think that we’re going to be suddenly faced with a wave of [disinformation] — lots of things that are not fake, they’re not untrue, but they stretch the truth.”

An AI-generated image released by the Ron DeSantis campaign appeared to show Donald Trump, right, embracing Anthony Fauci, left [Leah Millis/Reuters]

    Eroding public trust

    Voters themselves, however, are not the only targets of deepfakes. Larry Norden, senior director of the Elections and Government Program at the Brennan Center for Justice, has been working with election officials to help them spot fake content.

For instance, Norden said bad actors could use AI tools to instruct election workers to close a polling location prematurely by manipulating the sound of their boss’s voice or by sending a message that appears to come from a supervisor’s account.

    He is teaching poll workers to protect themselves by verifying the messages they receive.

    Norden emphasised that bad actors can create misleading content without AI. “The thing about AI is that it just makes it easier to do at scale,” he said.

    Just last year, Norden illustrated the capabilities of AI by creating a deepfake video of himself for a presentation on the risks the technology poses.

    “It didn’t take long at all,” Norden said, explaining that all he had to do was feed his previous TV interviews into an app.

    His avatar wasn’t perfect — his face was a little blurry, his voice a little choppy — but Norden noted the AI tools are rapidly improving. “Since we recorded that, the technology has gotten more sophisticated, and I think it’s more and more difficult to tell.”

    The technology alone is not the problem. As deepfakes become more common, the public will become more aware of them and more sceptical of the content they consume.

    That could erode public trust, with voters more likely to reject true information. Political figures could also abuse that scepticism for their own ends.

    Legal scholars have termed this phenomenon the “liar’s dividend”: Concern about deepfakes could make it easier for the subjects of legitimate audio or video footage to claim the recordings are fake.

    Norden pointed to the Access Hollywood audio that emerged before the 2016 election as an example. In the clip, then-candidate Trump is heard talking about his interactions with women: “You can do anything. Grab ‘em by the pussy.”

    The tape — which was very real — was considered damaging to Trump’s prospects among female voters. But if similar audio leaked today, Norden said a candidate could easily call it fake. “It would be easier for the public to dismiss that kind of thing than it would have been a few years ago.”

    Norden added, “One of the problems that we have right now in the US is that there’s a lack of trust, and this may only make things worse.”

Steve Kramer, centre left, has been charged with 13 felony counts of voter suppression, as well as misdemeanours, for his involvement in the New Hampshire robocall [Steven Senne/AP Photo, pool]

    What can be done about deepfakes?

While deepfakes are a growing concern in US elections, relatively few federal laws restrict their use. The Federal Election Commission (FEC) has yet to regulate deepfakes in elections, and bills in Congress remain stalled.

    Individual states are scrambling to fill the void. According to a legislation tracker published by the consumer advocacy organisation Public Citizen, 20 state laws have been enacted so far to regulate deepfakes in elections.

    Several more bills — in Hawaii, Louisiana and New Hampshire — have passed and are awaiting a governor’s signature.

    Norden said he was not surprised to see individual states act before Congress. “States are supposed to be the laboratories of democracy, so it’s proving true again: The states are acting first. We all know it’s really hard to get anything passed in Congress,” he said.

    Voters and political organisations are taking action, too. After Gingrich received the fake Biden call in New Hampshire, she joined a lawsuit — led by the League of Women Voters — seeking accountability for the alleged deception.

    The source of the call turned out to be Steve Kramer, a political consultant who claimed his intention was to draw attention to the need to regulate AI in politics. Kramer also admitted to being behind the robocall in South Carolina, mimicking Senator Graham.

    Kramer came forward after NBC News revealed he had commissioned a magician to use publicly available software to generate the deepfake of Biden’s voice.

    According to the lawsuit, the deepfake took less than 20 minutes to create and cost only $1.

Kramer, however, told CBS News that he received “$5m worth of exposure” for his efforts, which he hoped would allow AI regulations to “play themselves out or at least begin to play themselves out”.

    “My intention was to make a difference,” he said.

Paul Carpenter, a New Orleans magician, said he was hired to create a deepfake of President Biden’s voice [Matthew Hinton/AP Photo]

    Potential to apply existing laws

    But Kramer’s case shows existing laws can be used to curtail deepfakes.

The Federal Communications Commission (FCC), for instance, ruled earlier this year that voice-mimicking software falls under the 1991 Telephone Consumer Protection Act — and is therefore illegal in most circumstances.

    The commission ultimately proposed a $6m penalty against Kramer for the illegal robocall.

    The New Hampshire Department of Justice also charged Kramer with felony voter suppression and impersonating a candidate, which could result in up to seven years in prison. Kramer has pleaded not guilty. He did not respond to a request for comment from Al Jazeera.

    Norden said it is significant that none of the laws Kramer is accused of breaking are specifically tailored to deepfakes. “The criminal charges against him have nothing to do with AI,” he said. “Those laws exist independently of the technology that is used.”

Those laws, however, are harder to apply when bad actors cannot be identified or are located outside the US.

    “We know from the intelligence agencies that they’re already seeing China and Russia experimenting with these tools. And they expect them to be used,” Norden said. “In that sense, you’re not going to legislate your way out of this problem.”

    Both Norden and Johnson believe the lack of regulation makes it more important for voters to inform themselves about deepfakes — and learn how to find accurate information.

    As for Gingrich, she said she knows that manipulative deepfakes will only grow more ubiquitous. She too feels voters need to inform themselves about the risk.

    Her message to voters? “I would tell people to make sure that they know they can vote.”
