"Regulate Me!" - Artificial Intelligence Comes of Age

  • July 30, 2019
  • Feature
"Regulate Me!" - Artificial Intelligence Comes of Age
"Regulate Me!" - Artificial Intelligence Comes of Age

By Laura Sallstrom, Global Head of Data and Trust, Access Partnership

The topic du jour for tech regulation is not, as you might expect, data, but rather a sexy new topic of interest to policy makers: artificial intelligence (AI). It has all the glamor of Hollywood movies, all the fear that propels despots to power, and it comes complete with simple sentences and graphics that make it a Trumpian communicator's dream. (Robots cannot rule the world!)

We get tired of hearing it, but it's true: technology is changing rapidly. The speed of change continues to accelerate and, let's face it, regulators and policy makers do a poor job of understanding technology, much less creating effective regulation for it. In the chaos, however, there are repetitive patterns. Policy makers tend to gravitate toward overly broad regulation of whatever the current tech trend is. For example, banking regulators pick on "cloud computing" for fear that data is too far away, not understanding that their email and basic client-server systems (ATMs, anyone?) have long been transmitting data across borders and using "the cloud." Or policy makers skip from very general principles to the prescriptively specific: instead of forbidding "transmission of messages designed to inflict harm," a policy maker may jump to banning, say, "spreading malicious links via WhatsApp."

It's true, certain attributes of AI are creepy (who decided the "singularity" was a good objective? The Borg, anyone?). But let's not forget the benefits AI gives us, and that artificial intelligence can mean many things: hyper-efficiency in the workplace, better cyber protections, enormous leaps forward in surgical precision, and the elimination of human error in patient treatment. Even with highly controversial facial recognition, we must acknowledge the benefits in fighting crime and in making your life at the airport easier (I love breezing through the passport line!).

Certainly there are civil liberties and ethical challenges, and these need to be addressed. But frankly, challenges of this type exist with almost any technology involving personal data. Can you honestly say that media manipulation, changed election outcomes, cybercrime, and online fraud do not present equally challenging ethical dilemmas? In many respects, we have been here already. We are there now. This is just a slightly different version of the same technology problems. They are manifested and multiplied in ever-varied ways, but at their core they are the same problem: technology developed without due consideration, at the start, of how the bad guys could abuse it. Human behavior and its use of technology is the problem, not technology itself. Laws and regulation that address behavior, rather than bans on technology itself, should be the focus of any regulatory efforts.

Interestingly, this is where some of the AI "regulators" may get it right. Despite only recently entering the public discussion in a big way, AI and ethics have been co-mingled for a while. Key AI thought leaders have commented on the development of new forms of technological systems. Jonathan Zittrain has written and spoken extensively on how the generative internet and such systems are facilitating new kinds of control; Timnit Gebru, cofounder of Black in AI, has discussed the diversity crisis facing AI systems; and Cass Sunstein has published not only on how social technologies impact governance and society but also on how AI algorithms could be used beneficially to overcome the harmful effects of cognitive biases.

Because of the work of these thinkers and others to deeply integrate the human element into AI conversations, many new regulations are contemplating the concept of "ethics by design." Unlike with technologies that have come before, the ethical conversation around AI started at the beginning and is ongoing. Nonetheless, the rapid focusing of regulatory interest on AI deserves some adjustment in approach. Technology changes daily. The bad guys change their approach daily. As a result, for technology regulation to be effective, it has to focus on human behavior. That's why, no matter how well intended, we must pause and examine efforts to "regulate AI," and the more specific calls to ban the use of facial recognition.

Perhaps the robots are a better rhetorical enemy than Russia, "big tech," or cyber hackers. Yet AI algorithms and robots are, at present, the least dangerous of the lot. Ethics from the beginning is a good basic principle for all technologies. But ethics is a challenging concept to define, since ethics are not globally consistent. Nevertheless, it is an important start. Indeed, if the concept of ethics by design had been built into some current technologies, we might not be where we are today with a number of tech-related crises.

Regulation should be targeted at actions and human behavior, not at a type of technology. Regulatory approaches that take this into account, like the ethics by design concept, are a good way forward. It's highly likely we've gotten ahead of robots taking over the world…now if we can just manage some concrete regulation with respect to data flows, election interference, cyber security, and protection of civil liberties….


About the Author

Laura Sallstrom is Global Head of Data and Trust at Access Partnership, a global public policy consultancy for the tech sector. She can be reached at laura.sallstrom@accesspartnership.com
