The Lorax?

Not so with AI, a technology that is evolving at an extremely rapid rate, and, as we have noted, not always in a good direction. Much of the negative side of AI comes from its use by humans, so in those cases we cannot blame AI, but as the technology gets more sophisticated, we are noticing that AIs themselves have some ‘issues’. There are many in academia and in the AI community itself who are cautious about the rapid pace of AI development and the fact that even those who develop the infrastructure and architecture that AIs run on do not always understand how they work or why they have ‘issues’. Letters have been written, cautionary tales have been told, and even a few examples have been given of how models can cause problems, and we have barely put them into service.
Once AI systems are embedded in our infrastructure, they will become even more of both a tool and a potential hazard in our everyday lives. Think of what happens to your kids when the internet is down because of a poorly written slice of code in an update, and then think of that same poorly written code being inserted into the AI system that controls stoplight timing in a small city. Taking it a step further, what if that code snippet were put into the model that runs the power grid across a state or region? Yet, according to a bit of legislation inserted into the current budget reconciliation bill, all AI regulation at both the federal and state level would be halted for 10 years to let the industry grow.
That means that developers would not have to report new models to the government or meet specific testing rules, and, in its purest form, it would allow AI developers to claim almost anything with little or no verification. The AI business loves this idea and promotes the potential change as opening the door to AI innovation (aka ‘beating China’), and the administration sees it as dollar signs. But what about the poorly tested health-oriented AI that is giving bad advice to users, or the AI that is making value judgements about potential job candidates with some obvious biases? There are many scenarios in which an unregulated AI industry, tasked with self-regulation, will do what most self-regulating industries do: deregulate until a major incident occurs, then complain when the regulations return.
We are not saying that the AI industry should be so highly regulated that it is relegated to a single-digit 10-year CAGR, but banning all regulation of AI decision systems is a bit of an extreme in the other direction, a direction we have yet to travel. If this were happening in the oil and gas industry, the potential outcome would likely be a huge oil spill or contamination, as such things have occurred, but with AI we have no idea what the consequences might be, so perhaps just a bit of caution might be a good idea… We know who is going to speak for the industry; who is going to speak for us? It could be too late once the s$&t hits the fan…