Representative Marjorie Taylor Greene is receiving criticism after admitting that she didn’t fully read the budget bill she voted for and was unaware of its provision that “strips states of the right to regulate AI for 10 years.” The provision would prohibit U.S. state and local governments from “limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems,” and applies broadly to generative AI and the automated decision-making tools already used in many workplaces.
With that in mind, it seems like a good time to talk about the use of AI in businesses and what states are currently doing to regulate its use.
Requiring AI
A growing number of employers expect their employees to use AI in their jobs. A memo from Shopify’s CEO stated that the company expects all employees to be proficient in AI, as using it in their roles is no longer optional. The memo also said teams would have to explain why AI can’t be used before requesting additional staff or resources. Duolingo’s CEO sent a similar message to employees, saying AI is a priority for the language-learning app and that its use would be part of performance evaluations. Other companies have made similar statements, saying AI would be used to automate more processes so that employees can achieve more and “solve problems faster.”
The founder and CEO of online freelance marketplace Fiverr said in an interview with the Washington Post that he expects employees to “double or triple their output and strive to automate their jobs” by mastering and incorporating AI into their roles. He said in a memo to his employees: “Here’s the unpleasant truth. AI is coming for your jobs… If you do not become an exceptional talent at what you do, a master, you will face the need for a career change in a matter of months… those who will not wake up and understand the new reality fast, are, unfortunately, doomed.”
These memos make it clear that not only are these companies “all in” on AI, but they’re going to move fast. However, experts warn that moving too fast could end up backfiring, as Klarna, a Swedish fintech company, recently discovered. After significantly reducing its workforce and claiming an AI chatbot could replace 700 customer service workers, Klarna is now reversing course and recruiting people for those roles. As one expert put it, “Klarna embraced AI with an eye toward cost savings and efficiency – but perhaps underestimated the tradeoff.”
Real-world harms
Part of the reason companies are pushing to embrace AI has to do with speed, whether to outpace the competition or to increase profit margins by making decisions en masse. And when companies push forward with integrating AI without fully considering the real-world harm it can cause, regulation is the only guardrail that can keep that harm in check.
According to NPR, Meta is planning to use an AI-powered system to automate up to 90% of its privacy and integrity reviews – a process that used to be conducted entirely by human evaluators. While the company views it as a win for product developers, as it allows them to release updates and features more quickly, some are concerned that AI decisions could lead to real-world harms. While the company statement says AI decision-making will only be employed for “low-risk decisions,” NPR said internal documents show that the company is also considering it for sensitive areas, including evaluating violent content and the spread of misinformation.
In Arizona, the governor recently signed a bill into law requiring that healthcare claim denials based on medical necessity be reviewed by a licensed professional. While the law doesn’t explicitly refer to AI, doctors have expressed concern about AI’s growing role in the insurance industry and about denials “that may delay people getting life prolonging or lifesaving tests or treatments.”
Multiple states have either recently passed or proposed laws that are specific to regulating AI, such as a law in Colorado that aims to prevent discrimination in algorithmic models and a California law that requires software developers to provide information on data used to train AI systems. Additionally, at least 39 states have laws that impact AI in some way, generally in relation to privacy and data security, but it’s unclear whether a federal law could roll back those protections as well.
While states continue to push ahead with attempts to regulate AI in healthcare and many other industries, the consensus is that states only have so much power, and federal regulation appears to be the only way to clarify the situation.