Several Top U.S. Companies Are Using AI to Spy on Employees, Seeking Out 'Thought Crimes'
The public launch of ChatGPT sent political leaders around the world scrambling to spur generative AI development within their own countries—and to create guardrails to ensure the technology doesn’t run amok.
In the U.S. alone, members of Congress have proposed a whirlwind of bills—some more serious than others—intended to regulate everything from who gets to develop AI to what kind of liability companies face when things go wrong.
The Information Technology Industry Council, a leading tech industry lobbying group, is tracking more than 50 federal bills that have either been introduced or proposed in draft form, in addition to more than 50 initiatives—including executive orders, regulatory efforts, and more. “It’s really a dizzying array of things on the Hill and in the administration,” says John Miller, ITI’s chief legal officer, noting that the organization is tracking at least 150 distinct proposals. You can read more on this from Fast Company.
In July 2023, it was reported that four of the world's leading technology companies, Google, Microsoft, Anthropic, and OpenAI, had agreed to a new partnership aimed at promoting responsible use of artificial intelligence.
While all these groups and governments intend to ensure that AI technology doesn't run amok, a new and disturbing report from CNBC states that major American companies such as Walmart, Delta, Chevron, and Starbucks are now using AI to monitor employee messages. European firms Nestlé and AstraZeneca have adopted the practice as well.
Depending on where you work, there’s a significant chance that artificial intelligence is analyzing your messages on Slack, Microsoft Teams, Zoom and other popular apps.
Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup 'Aware,' says the AI helps companies “understand the risk within their communications,” getting a read on employee sentiment in real time, rather than depending on an annual or twice-per-year survey.
Aware’s analytics tool — the one that monitors employee sentiment and toxicity — doesn’t have the ability to flag individual employee names, according to Schumann. Yet its separate eDiscovery tool can, in the event of extreme threats or other risk behaviors that are predetermined by the client, he added.
More specifically, a company can set up role-based access to employee names depending on the “extreme risk” category of its choice, which instructs Aware’s technology to pull an individual’s name, in certain cases, for human resources or another company representative.
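To make the two-tier design concrete, here is a minimal sketch of how a pipeline like the one CNBC describes might be structured: an analytics tier that reports only anonymized, aggregate sentiment, and a separate eDiscovery tier that reveals an author's name only when a client-defined "extreme risk" category fires and the requester holds an authorized role. Every name, category, and scoring rule below is an illustrative assumption, not Aware's actual implementation.

```python
# Hypothetical sketch of a two-tier message-monitoring pipeline.
# Categories, roles, and scoring logic are illustrative assumptions only.
from dataclasses import dataclass

# Client-defined "extreme risk" categories (assumed for illustration).
EXTREME_RISK = {"threat_of_violence", "harassment"}
# Role-based access: roles allowed to see an author's name when a flag fires.
AUTHORIZED_ROLES = {"hr_investigator", "compliance_officer"}

@dataclass
class Message:
    author: str
    text: str

def sentiment_score(text: str) -> float:
    """Toy stand-in for a sentiment model: -1.0 (negative) .. 1.0 (positive)."""
    negative = {"angry", "unfair", "quit"}
    positive = {"great", "thanks", "happy"}
    words = text.lower().split()
    raw = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 5))

def classify_risk(text: str) -> str | None:
    """Toy stand-in for a risk classifier; returns a category or None."""
    if "hurt" in text.lower():
        return "threat_of_violence"
    return None

def aggregate_sentiment(messages: list[Message]) -> float:
    """Analytics tier: returns only an anonymized, aggregate score - no names."""
    return sum(sentiment_score(m.text) for m in messages) / len(messages)

def flag_extreme_risk(messages: list[Message], requester_role: str) -> list[dict]:
    """eDiscovery tier: surfaces an author's name only for extreme-risk
    categories, and only to authorized roles; otherwise it is redacted."""
    flags = []
    for m in messages:
        category = classify_risk(m.text)
        if category in EXTREME_RISK:
            who = m.author if requester_role in AUTHORIZED_ROLES else "<redacted>"
            flags.append({"author": who, "category": category})
    return flags

if __name__ == "__main__":
    batch = [Message("alice", "Thanks, the new schedule is great"),
             Message("bob", "I am angry enough to hurt someone")]
    print(f"team sentiment: {aggregate_sentiment(batch):+.2f}")  # no names
    print(flag_extreme_risk(batch, "hr_investigator"))           # name visible
    print(flag_extreme_risk(batch, "line_manager"))              # redacted
```

Note how the design keeps the two paths separate: the sentiment function never touches author identities, while the flagging function gates identity behind both a risk category and a role check, mirroring the split Schumann describes between the analytics and eDiscovery tools.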
Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, told CNBC: "A lot of this becomes thought crime." She added, "This is treating people like inventory in a way I’ve not seen."
As CNBC framed it, "Cue the George Orwell reference." For more on this, read the full CNBC report.