The upcoming AI regulation may not protect us from dangerous AI



Most AI systems today are neural networks: algorithms that mimic a biological brain to process massive amounts of data. They are known to be fast, but they are inscrutable. Neural networks require huge amounts of data to learn how to make decisions; however, the reasons for those decisions are hidden within countless layers of artificial neurons, all individually tuned to different parameters.

In other words, neural networks are “black boxes.” And the developers of a neural network not only don’t control what the AI does, they don’t even know why it does what it does.
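To make the opacity concrete, here is a minimal sketch in Python using scikit-learn; the dataset is synthetic and purely illustrative, not drawn from any real system:

```python
# A minimal sketch of the "black box" problem, using a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Even a tiny two-layer network holds hundreds of individually tuned weights.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
model.fit(X, y)

# The "reasons" for any prediction live in these weight matrices:
# raw floating-point numbers with no human-readable meaning.
for i, weights in enumerate(model.coefs_):
    print(f"layer {i}: {weights.size} weights")

print(model.predict(X[:1]))  # a decision, delivered without any explanation
```

Nothing in those printed matrices tells a developer, let alone a regulator, why the model reached its answer.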

This is a terrifying reality. But it gets worse.

Despite the risk inherent in the technology, neural networks are beginning to run the basic infrastructure of critical business and government functions. As AI systems proliferate, the list of examples of dangerous neural networks grows daily.

The resulting harms range from the deadly to the comical to the highly offensive. And as long as neural networks are in use, we risk harm in many ways. Companies and consumers are rightly concerned: as long as artificial intelligence remains opaque, it remains dangerous.

A regulatory response is coming

In response to such concerns, the EU has proposed an AI Act, expected to become law by January, and the US has drafted a Blueprint for an AI Bill of Rights. Both confront the problem of opacity head-on.

The EU AI Act states that “high-risk” AI systems must be built transparently, allowing an organization to identify and analyze potentially biased data and remove it from all future analyses. That effectively eliminates the black box. The Act defines high-risk systems to include critical infrastructure, human resources, essential services, law enforcement, border control, the judiciary and surveillance. Indeed, almost every major AI application developed for government or business use will qualify as high-risk and will therefore be subject to the EU AI Act.
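To illustrate the kind of transparency work the Act contemplates, here is a hypothetical sketch of auditing training data for group-level bias; the Act prescribes no specific technique, and the column names here are invented:

```python
# Hypothetical sketch: flag group-level skew in training data.
# The columns "group" and "approved" are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   0],
})

# A large gap in outcome rates between groups marks potentially biased
# data that an organization would need to identify, analyze and remove.
rates = df.groupby("group")["approved"].mean()
print(rates)
print("approval-rate gap:", rates.max() - rates.min())
```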

Similarly, the US AI Bill of Rights asserts that users should be able to understand the automated systems that affect their lives. It has the same goal as the EU AI Act: to protect the public from the real risk that opaque AI will become dangerous AI. The Blueprint is currently a non-binding, and therefore toothless, white paper. However, its provisional nature may be a virtue, as it gives AI scientists and advocates time to work with lawmakers to shape the law appropriately.

In any case, it seems likely that both the EU and the US will require organizations to adopt AI systems that provide interpretable results to their users. In short, the AI of the future may need to be transparent, not opaque.
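For contrast with the black box above, here is a sketch of what an interpretable result can look like, using a shallow decision tree, one common transparent model class; neither regulation mandates any particular technique:

```python
# A sketch of an interpretable model: a shallow decision tree whose
# full decision logic can be printed as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Unlike a neural network's weight matrices, these rules answer "why?"
print(export_text(tree, feature_names=list(data.feature_names)))
```

A user, an auditor, or a regulator can read those rules directly, which is exactly the property the black box lacks.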

But do they go far enough?

Establishing new regulatory regimes is always a challenge. History gives us no shortage of examples of inappropriate legislation inadvertently crushing promising new industries. But it also offers counterexamples where well-designed legislation has benefited both private enterprise and the public welfare.

For example, when the dotcom revolution began, copyright law lagged far behind the technology it was meant to govern. As a result, the early years of the Internet era were marred by intense litigation targeting companies and consumers. Eventually, the comprehensive Digital Millennium Copyright Act (DMCA) was passed. Once companies and consumers adjusted to the new laws, online businesses began to thrive and innovations like social media, which would have been impossible under the old laws, were able to flourish.

The future leaders of the AI industry have long understood that a similar institutional framework will be necessary for AI technology to reach its full potential. A well-constructed regulatory system will offer consumers the security of legal protection for their data, privacy and security, while providing companies with clear and objective regulations under which they can confidently invest resources in innovative systems.

Unfortunately, neither the AI Act nor the AI Bill of Rights meets these goals. Neither framework requires enough transparency from AI systems, and neither provides sufficient protection for the public or sufficient regulation for businesses.

A series of analyses provided to the EU has highlighted the flaws in the AI Act. (Similar criticisms could be leveled at the AI Bill of Rights, with the added caveat that the American framework is not even intended to be binding policy.) These flaws include:

  • It offers no criteria by which to define unacceptable risk for AI systems and no method for adding new high-risk applications to the Act if such applications are found to pose a significant risk of harm. This is particularly problematic because AI systems are becoming broader in their utility.
  • Companies are only required to consider harm to individuals, excluding estimates of indirect and cumulative harm to society. An AI system that has very little effect on, say, each individual’s voting patterns can have a huge societal impact overall.
  • It allows virtually no public oversight of the assessment of whether AI meets the law’s requirements. Under the AI Act, companies self-assess their own AI systems for compliance without the intervention of a public authority. This is tantamount to asking drug companies to decide for themselves whether their drugs are safe, a practice that both the US and the EU have found to be harmful to the public.
  • It does not clearly define the party responsible for evaluating general-purpose AI. If a general-purpose AI can be used for high-risk purposes, does the law apply to it? If so, is the creator of the general-purpose AI or the company deploying it for the high-risk purpose responsible for compliance? This ambiguity creates a vacuum that incentivizes shifting responsibility: either company can claim that self-assessment was its partner’s obligation, not its own.

For AI to safely proliferate in America and Europe, these flaws must be addressed.

What to do about dangerous AI until then

Until appropriate regulations are in place, black-box neural networks will continue to make use of personal and business data in ways that are completely opaque to us. What can you do to protect yourself from opaque AI? At the very least:

  • Ask questions. If an algorithm discriminates against you or denies you something, ask the company or vendor, “Why?” If they can’t answer that question, reconsider whether you should do business with them. You can’t trust an AI system to do the right thing if you don’t even know why it’s doing what it’s doing.
  • Be careful about the data you share. Does every app on your smartphone need to know your location? Does every platform you use need to go through your primary email address? A level of minimalism in data sharing can go a long way in protecting your privacy.
  • Where possible, only work with companies that follow data protection best practices and use transparent AI systems.
  • Most importantly, support regulations that will promote interpretability and transparency. Everyone deserves to understand why an AI affects their life the way it does.

The risks of artificial intelligence are real, but so are the benefits. To address the risk of opaque AI leading to dangerous outcomes, the AI Bill of Rights and the AI Act set the right course for the future. But the level of regulation is still not strong enough.

Michael Capps is CEO of Diveplane.
