In the race to deploy generative AI, several major tech companies face scrutiny over eroding user trust and the environmental toll of their AI products. Google, for instance, has come under fire for its AI Overviews, which have at times served misinformation in response to search queries, falsely claiming, for example, that Barack Obama was the first Muslim president of the United States and that staring at the sun for extended periods is safe. Following public outcry, Google reduced how often these overviews appear.
Microsoft, similarly, faced backlash over its Recall feature, which was designed to capture ongoing screenshots of users’ computers to make past activity searchable. After critics branded it a “security disaster,” Microsoft first announced that Recall would not be enabled by default in Windows and then pulled the feature from the initial release of its Copilot Plus PCs.
Zoom, in an effort to improve video meetings, published research suggesting that remote workers trust their colleagues more when video is on during calls. Yet the CEO’s ambition to put AI deepfakes, which he calls “digital twins,” into video meetings has raised concerns about the manipulation and deception that could follow. Amazon, meanwhile, has been plagued by AI-generated knockoffs of books, including a counterfeit version of “Artificial Intelligence: A Guide for Thinking Humans.”
Meta, although it has little reservoir of trust left to deplete, has been inserting AI-generated comments into conversations among Facebook group members, at times with bizarre results, such as a chatbot claiming to be a parent. X, seeking to distinguish itself from its predecessor Twitter, recently updated its policy to allow “consensually produced and distributed adult pornographic content,” including AI-generated adult nudity or sexual content that does not promote objectification.
OpenAI, after launching ChatGPT, introduced the GPT Store, which lets users create and distribute customized versions of ChatGPT. With more than 3 million such versions already created, how trustworthy these tools prove to be will shape users’ trust in OpenAI itself.
The rush to deploy generative AI products not only harms the companies themselves; it also hampers access to accurate information, compromises privacy and security, distorts human communication, and colors perceptions of the government agencies, nonprofits, and other organizations that adopt flawed AI tools.
Generative AI’s environmental impact is also significant. The World Economic Forum notes that the computational power required to sustain AI’s rise is doubling roughly every 100 days, and that about 80% of the environmental impact occurs during the usage, or “inference,” stage: the stage that produces AI-generated overviews in search, comments in social media groups, fake books on Amazon, and adult content on X. That pool of inference expands daily, and the environmental costs expand with it.
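To put that doubling rate in perspective, here is a minimal back-of-the-envelope sketch; it is not from the WEF report, and it simply extrapolates the cited figure under the assumption of smooth exponential growth:

```python
# Extrapolate the WEF figure that the compute sustaining AI doubles
# roughly every 100 days. Assumptions (not from the source): growth is
# smoothly exponential and a year is 365 days.
doubling_period_days = 100
days_per_year = 365

annual_growth_factor = 2 ** (days_per_year / doubling_period_days)
print(f"Implied growth in one year: ~{annual_growth_factor:.1f}x")  # ~12.6x
```

At that rate, compute demand multiplies by roughly 12.6 times a year; and if 80% of the resulting footprint lands at inference time, the everyday uses listed above account for most of the cost.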
While many companies building AI tools run internal ESG and responsible-AI programs, the rapid consumption of energy, water, and rare elements, and of trust itself, remains a pressing concern.