Does the public comment system have an AI problem?

Last year, when an air quality agency in Southern California proposed a new rule to encourage consumers to buy heat pumps instead of gas heaters, the agency was flooded with 20,000 comments opposing the idea—many more than usual. “Due to the volume and nature of these submissions, South Coast AQMD had concerns about their authenticity,” says Rainbow Yeung, an agency spokesperson. The agency’s executive director got an email thanking him for his “opposition” to a rule that his own team had drafted.

To check the validity of the comments, the agency reached out to a small sample of commenters—172 people—to confirm that they’d actually sent the emails. Almost no one responded. But of the five people who did, three of them said that they didn’t know anything about the comments that had been submitted in their own names. In a separate investigation, a campaigner from the Sierra Club also started contacting people on the list; the four people he reached also said that they hadn’t sent emails.

The L.A. Times recently reported that CiviClick, a company that bills itself as a provider of “AI-powered advocacy tools,” had led the campaign to send opposition comments. The client was a public affairs consultant with ties to the gas industry.

CiviClick denies that it sent any email without consent or that it used AI to fabricate automated messages. The air quality management district is still investigating the situation; the executive director said in a recent meeting that the team was exploring more “aggressive” ways of sampling commenters, since it couldn’t draw definitive conclusions from the limited initial response.

Regardless of what happened in this particular case, the episode points to a broader question: if AI can now easily impersonate humans—and if comments can be submitted without someone’s knowledge—how can government agencies know whether a public comment was written by a citizen rather than a bot?

Fake comments aren’t new. In 2017, the FCC received 22 million comments during the debate on net neutrality rules—and around 18 million of them were later found to be fake. Millions came from a single college student; half a million came from Russian email addresses. After an investigation, New York Attorney General Letitia James fined “lead generator” companies that had collectively impersonated millions of real people when they submitted comments.

AI, in theory, could make it easier to write and submit fake comments that sound real. CiviClick says that it simply uses AI to help real people personalize their comments. The platform asks users questions related to the issue—for example, how an increase in taxes would affect their budget—and then tailors an email. (The company also uses AI to predict how likely someone would be to respond to a campaign.)

CiviClick founder and CEO Chazz Clevinger says he could not speak to the specifics of the Southern California campaign but insists it meaningfully captured the authentic views of people across the region. “A homeowner in Riverside County who had recently installed a gas furnace wrote a different message than a renter in Los Angeles who was concerned about landlord compliance costs,” he tells Fast Company. “A contractor in San Bernardino County who builds new homes wrote a different message than a retiree in Orange County worried about electricity grid strain during heat waves.” He argues that the tool is simply helping people “articulate their genuine concerns,” and that they’re no less legitimate than messages written from scratch.

The Sierra Club campaigner has a different take. Even if someone consents to have AI tweak a comment, it could be problematic. “Regulators give priority to customized comments, which require time and effort to send, versus form letters or petitions which do not,” says Dylan Plummer, campaign adviser for the Sierra Club’s Clean Heat campaign. “Using AI to generate custom comments creates the illusion of engaged individuals willing to spend the time to draft a thoughtful statement on an issue, when in fact, they are engaged at the same level as someone who signed a traditional form letter or petition.”

The bigger challenge, Plummer says, is whether some public comments are attributed to people who never had anything to do with them. In another case in California, he started calling people who had submitted comments on a proposed rule at the Bay Area Air District. Another nonprofit, the Energy and Policy Institute, filed a public records request to get copies of the emails that were sent in using a different software platform called Speak4. (Speak4 declined to talk; in a San Francisco Chronicle article, the company’s client, the Bay Area Council, said that neither it nor Speak4 submitted letters without consent.)

Of the seven people that Plummer spoke with, all seven said that they had no knowledge of the email. “Some even said that they didn’t know what the Bay Area Air District was,” he says. “One woman I spoke to said, ‘Why would I ever oppose regulations to protect clean air?’”

It’s very difficult to prove whether comments are actually fake after the fact. “I had to call dozens and dozens of numbers that I was able to access through internet sleuthing,” Plummer says. Most people didn’t want to talk. “When I’m talking, I’m like, ‘Hi, my name is Dylan, and I’m investigating a potential case of identity theft.’ And their first response is, ‘Oh, this guy’s totally a scammer,’ and hang up.”

In another case in North Carolina, county commissioners received hundreds of emails in support of a new gas pipeline. But when they started to respond to some of the emails, their constituents said that they hadn’t sent them. The mass email campaign backfired. “If they’re this sloppy with their advocacy work, what does that say about our concerns about their maintenance, which is the critical thing,” one commissioner told E&E News. The board voted unanimously for a resolution that raised concerns about the project and recommended that federal officials should deny a permit.

Williams, the company that wanted to build the pipeline, suggested that people might have forgotten that they sent an email. CiviClick, which facilitated the emails for the company, said the same thing about the campaign in Southern California. (It’s worth noting, however, that the air quality agency contacted supposed commenters shortly after the comments were submitted.) Clevinger also suggested that there could be “deliberate mischaracterization or misuse of our tools” by groups like the Sierra Club that “have a vested interest in discrediting its authenticity.”

When agencies do receive a flood of fake emails, it’s not clear how much that necessarily affects decision making. “What matters is not the identity of the commenter,” says Steven Balla, a political science professor who studies public commenting. “What matters is the content of the comment.” Agencies are charged with considering the technical, legal, and economic information that’s submitted to them during the comment process, he says. But they’re not adding up how many comments they got on each side, and it’s the ideas that matter more than the name of the person who submitted them.

Fake or AI-generated comments “smell icky,” he says. “But I haven’t yet been moved that, wow, this is totally changing the way policy decisions are made.” In the case of net neutrality, he argues, the millions of comments didn’t ultimately sway what the first Trump administration wanted to do.

“What I know about misinformation more generally is that misinformation generally has minimal effects on what people believe or what they do,” says Jonathan Brennan, director of the Center on Technology Policy at NYU. “I’d be far more concerned about the secondary effects of a general loss in trust—government officials saying, well, we can’t really trust any public comments, maybe they’re all fake, maybe they’re not, so we’re just going to give them less weight.” A local school board, for example, might theoretically listen more to people who show up to comment in person, making it harder for others to share their opinion if they don’t have time to attend.

Agencies can use technology to sort through digital comments and summarize duplicates, Balla says. That’s different from older mass comments that showed up on postcards. “Back in the old days in the 90s, I was talking to an agency that got at that time maybe 100,000 comments,” he says. “Those were still paper based. They literally had some warehouse space out in Rockville, Maryland, where they were basically putting the pieces of paper into piles. That was a lot of work. Now you get 100,000 comments, and 99,000 of them are going to be nearly identical. And you can figure that out in seconds.”
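The kind of near-duplicate detection Balla describes can be sketched with a simple text-similarity check. This is an illustrative example only—not any agency’s actual tooling—using Python’s standard-library `difflib` and an arbitrary similarity threshold:

```python
from difflib import SequenceMatcher

def near_duplicates(comments, threshold=0.9):
    """Group comments whose text is near-identical.

    Each incoming comment is compared against the first member of each
    existing group; a similarity ratio at or above `threshold` puts it
    in that group. A rough stand-in for the clustering an agency might
    run over a large docket of form-letter submissions.
    """
    groups = []
    for text in comments:
        for group in groups:
            if SequenceMatcher(None, group[0], text).ratio() >= threshold:
                group.append(text)
                break
        else:
            groups.append([text])
    return groups

comments = [
    "I oppose the proposed heat pump rule because of cost.",
    "I oppose the proposed heat pump rule because of costs.",
    "Please adopt strong clean air protections for our region.",
]
groups = near_duplicates(comments)
# The two form-letter variants cluster into one group; the
# genuinely distinct comment stands alone.
```

Real dockets would need a faster approach (hashing or embedding-based clustering) than this quadratic pairwise comparison, but the principle—collapsing 99,000 nearly identical comments into one pile—is the same.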

Still, if AI can easily generate a series of unique comments, the process could get harder. The Sierra Club’s Plummer suggests that something needs to change. “Astroturfing and the creation of front groups—polluting industry working to create the illusion of widespread support for a position—is nothing new,” he says. “Our big concern, though, is that these new technologies with AI proliferating is going to put these tactics on steroids and make them even more insidious and difficult to root out. And it is, in my opinion, a direct threat to democratic processes and decision making.”

At the South Coast Air Quality Management District, the board voted narrowly to defeat the proposed rule that would have curbed pollution. Though CiviClick touted its work in influencing the decision, it’s hard to say what impact the comments had. The board directed the agency to send the rule back to a committee for further discussion. The rule could be revisited later, though no timeline has been set.

Now, the Sierra Club is asking California’s attorney general and LA’s district attorney to launch a fraud investigation. State senator Christopher Cabaldon also recently introduced a new bill, called “People Not Bots,” which would clarify that AI tools don’t qualify as people and shouldn’t be offering fake public input.

And at the air quality agency in Southern California, staff are exploring ways to make comment submission more secure, including portals that could offer new ways to verify that a submission is coming from a human—though that’s a harder and harder task to perform. “Maintaining the integrity of our public process is a top priority,” says Yeung.
