How AI Feeds The Clean Energy Misinformation Machine




Social media has become an increasingly powerful tool for disseminating misinformation about clean energy. A growing number of Facebook groups, influencers, and various online forums make a living by spreading mis- and disinformation about renewable energy sources like solar, wind, and hydroelectric power. Safiya Umoja Noble, an AI expert at UCLA, tells us that neither corporate self-policing nor government regulation has kept pace with the technology's growth and potential for harm. Instead, AI feeds a sense that search engines simply reflect ourselves back to us, so we don't stop to think of search results as subjective.

We use search engines as fact checkers. Only recently have more users gained an awareness of "the way these platforms are coded and how they prioritize certain types of values over others," Noble says in a UCLA interview. The algorithms that govern our Google results are just one of the multiplying ways that artificial intelligence is shaping our lives and livelihoods.

Because of algorithmic influences, it’s important for each of us to learn more about why the internet works the way it does, what that means for an AI-powered future, and how clean energy could be affected.

Because internet searches are reliable for fairly trivial inquiries, we tend to assume they're reliable for everything else. "But if you use that same tool for a question that is social or political, do you think you're going to get a reliable answer?" Noble asks. To answer that question, she offers us some insights into the ways that algorithms work.

  • The companies that own search engines build results that favor the highest bidder, and a cottage industry exists to figure out how to manipulate those results.
  • Searches start with an algorithm that sorts through millions of potential websites.
  • Through the gray market of search engine optimization, results are influenced by industries, foreign operatives, and political campaigns.
  • Particular information emerges on the first page, reflecting the worldview of those influencers.

In essence, there's culpability on the part of companies that make products that are easily manipulable, as the sketch below illustrates.
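To make that concrete, here is a minimal sketch in Python of how prioritization gets baked into a ranking function. The pages, signals, and weights are all invented for illustration; real search engines combine hundreds of signals. But the structural point holds: whichever features a company chooses to score, and however it weights them, is a decision about whose content surfaces first.

```python
# A minimal sketch of a ranking function, with invented pages and weights.
# The point: there is no neutral ordering. Every choice of features and
# weights is a choice about whose content surfaces first.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float   # how well the page matches the query (0-1)
    seo_score: float   # how aggressively the page is optimized (0-1)
    ad_spend: float    # advertiser money behind the page (0-1, normalized)

PAGES = [
    Page("https://example.org/peer-reviewed-study", relevance=0.9, seo_score=0.2, ad_spend=0.0),
    Page("https://example.com/sponsored-explainer", relevance=0.6, seo_score=0.9, ad_spend=0.8),
    Page("https://example.net/forum-thread",        relevance=0.7, seo_score=0.5, ad_spend=0.0),
]

def rank(pages, w_relevance, w_seo, w_ads):
    """Order pages by a weighted score. The weights ARE the values."""
    return sorted(
        pages,
        key=lambda p: w_relevance * p.relevance
                      + w_seo * p.seo_score
                      + w_ads * p.ad_spend,
        reverse=True,
    )

# Weighted toward pure relevance, the study comes first ...
for page in rank(PAGES, w_relevance=1.0, w_seo=0.1, w_ads=0.0):
    print(page.url)
print("---")
# ... weighted toward optimization and ad spend, the sponsored page wins.
for page in rank(PAGES, w_relevance=0.4, w_seo=0.5, w_ads=0.5):
    print(page.url)
```

Nothing in the second ranking is "broken." The sponsored page wins because the weights say it should, which is what Noble means when she says these platforms prioritize certain types of values over others.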

To put it another way: we all hold certain values, we hold them strongly, and we work to ensure they're met. That's the rub. "So that means we would have to acknowledge that the technology does hold a particular set of values or is programmed around a set of values," she explains. "And we want to optimize to have more of the values that we want."

A technology that claims to be completely neutral, void of any markers that differentiate people, simply means we default to priorities driven by our own biases.

However, Noble admonishes us that, “if we want to work toward pluralistic and pro-democratic priorities, we have to program toward the things we value.”

That's an important caveat: large language models don't have agency, so they can't refuse their programming; they're merely statistical pattern-matching tools. "So the models are limited in being able to even produce certain types of results," she notes.

With this backdrop, think of the algorithmic impact of Big Oil, which spent $450 million to influence Donald Trump and Republicans throughout the 2024 election cycle and 118th Congress. This funding includes direct donations, lobbying, and advertising to support Republicans and their policies. In the 2024 election cycle, oil and gas donors spent:

  • $96 million in direct donations to support Donald Trump’s presidential campaign and super PACs between January 2023 and November 2024
  • $243 million lobbying Congress
  • nearly $80 million on advertising supporting Trump and other Republicans or policy positions supported by their campaigns
  • more than $25 million to Republican down-ballot races, including $16.3 million to Republican House races, $8.2 million to Republican Senate races, and $559,049 to Republican governors' races

That investment is already paying off in many ways, including spreading climate and clean energy mis- and disinformation through algorithmic manipulation.

No longer are obstructionist groups primarily attacking the reality of climate change; in fact, many obstructionists now claim that climate change needs to be addressed. But hold on a bit. "Solutions denial" has replaced "climate denial" in a way that's equally devastating for a sustainable future. In the case of the California wildfires, for instance, representatives of fossil fuel interests emphasize water supply and forest floor management even when there is no forest floor to speak of in a desert climate.

Interest group propagandists like Tucker Carlson make it harder to frame arguments for remediation and land-use planning in the western US, no different from the way eastern US resistance targets renewable energy efforts like offshore wind and solar subsidies. It still isn't about water supply; it's about construction regulation and regional planning.

Why Large Language Models Are So Persuasive

ChatGPT is a type of AI built on what's called a "large language model." These models scan and absorb nearly everything that's available on the internet into their training data. Used as a search engine, ChatGPT doesn't differentiate propaganda from evidence. It takes in everything from copyrighted works and academic scholarship to random subreddits, "as if these things are all equally reliable." A lot of what large language models produce isn't true, so AI feeds climate and other kinds of mis- and disinformation.
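To see what "statistical pattern matching" means in miniature, here is a toy bigram model in Python, trained on an invented corpus that mixes an evidence-like claim with a repeated false one. This is a sketch, not how GPT-class models are actually built, but it shows the relevant property: the model continues text according to what followed what in its training data, with no representation of whether a source was reliable.

```python
# A toy bigram language model: a deliberately tiny stand-in for the
# statistical pattern matching inside large language models. It learns
# only which word follows which in its training text, so a claim repeated
# often enough becomes the "likely" continuation, true or not.

import random
from collections import defaultdict

# An invented training corpus. The false claim is simply repeated more often.
CORPUS = (
    "wind power is reliable . "
    + "wind turbines kill whales . " * 3   # repetition, not truth, sets the odds
    + "solar power is affordable . "
)

def train(text):
    """Record, for every word, the list of words that followed it."""
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(model, start, length=5):
    """Continue from `start`, sampling successors at their raw frequency."""
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

model = train(CORPUS)
# Three times out of four, "wind" is continued as "turbines kill whales ."
print(generate(model, "wind"))
```

Scaled up to trillions of tokens and billions of parameters, the mechanics are far more sophisticated, but the limitation Noble describes remains: frequency in the training data, not veracity, shapes what comes out.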

News stories about generative AI tools and their problems are quite common. Generative AI is implicated in a host of ethical issues and social costs, including:

  • bias, misrepresentation, and marginalization
  • labor exploitation and worker harms
  • privacy violations and data extraction
  • copyright and authorship issues
  • environmental costs
  • misinformation and disinformation

Young people are especially susceptible, says Noble. Her students come to class “and use propaganda sites as evidence, because they can’t quite tell the difference.”

Google and other companies are the first to say that they know these problems exist and that they're working on them. Yet the companies producing generative AI have released products that are not ready for everyday searching, according to Noble. She doesn't see Google and other internet companies dealing with the power imbalances and inequalities in their systems. Instead, they tweak algorithms rather than remaking them in profound, systemic ways. She explains:

“People who make the predictive AI models argue that they’re reducing human bias. But there’s no scenario where you do not have a prioritization, a decision tree, a system of valuing something over something else. So it’s a misnomer to say that we endeavor to un-bias technology, any more than we want to un-bias people. What we want to do is be very specific: We want to reduce forms of discrimination or unfairness, which is not the same thing as eliminating bias.”

Many of us might agree that democracy is messy. But are we ready to sacrifice our freedoms because some leaders in Silicon Valley believe they can design a better society? I think not. And, as Noble points out, "Those politics are imbued in the products they make, who they're pointed toward, who's experimented upon and who's considered disposable around the world."


