Regulatory policy

AI regulation and policy remain a priority for the UK

UK – Artificial intelligence (AI) strategy and regulation is a “significant area of interest” for the government, and bespoke UK AI regulations are being designed with approaches elsewhere in the world in mind, according to the head of regulation at the Office for Artificial Intelligence.

Speaking at the Westminster eForum’s Next Steps for AI in the UK policy conference earlier this week, Alex Leonidou, head of regulation at the Office for Artificial Intelligence, said the government was looking to engage with stakeholders and experts to improve current AI regulatory frameworks.

“This is a very important area of interest for the government,” Leonidou said. “We have strived at every stage of this to proactively and collaboratively engage with the broader ecosystem, whether that be businesses or universities.

“We really appreciate the value of outward-looking engagement in this area, given the pace of change and its importance.”

Last year, the government published an AI strategy, and this year it also released an AI regulatory policy framework with the aim of future-proofing the AI industry in the UK.

Leonidou said the current non-statutory approach attempts to fill gaps and resolve overlaps in regulation while keeping up with the rapid pace of change in the AI industry, “trying to see where we perceive the risks and the damage arising from the application and context of use of AI, and to regulate in this spirit”.

Interoperability is also vital for future AI regulation in the UK, she added.

“We try, very deliberately, to come up with our own regulatory framework,” Leonidou said. “We don’t copy and paste anyone else’s. But that doesn’t mean we’re not up to speed on this point of interoperability.

“We don’t create something new for the sake of it – we create something new because we believe it’s the right thing to do for the UK AI ecosystem and our leadership position in this field.

“Whether with the EU’s AI Act or any other emerging framework around the world, interoperability is a priority.”

Elsewhere at the conference, Stephen Almond, director of technology and innovation at the Information Commissioner’s Office (ICO), criticized the use of emotion analysis technologies, such as gaze tracking and the monitoring of facial movements, heart rate and skin moisture, to draw conclusions about people’s emotions.

He warned that the science “doesn’t stack up” for emotion analysis technology, and cautioned users about the risk of “systemic bias, inaccuracy and even discrimination” in its use.

“Organizations shouldn’t use meaningless information to make decisions that can be quite meaningful,” Almond added.

“We have yet to see emotion analysis technologies that would meet the requirements of data protection law, although our door is always open to people who want to come to us.

“Organizations that do not act responsibly, that cause harm to vulnerable people, can expect to be investigated.”

Almond said the ICO will update its definition of “fairness” in AI next year and offer guidance on the data protection implications of AI innovation, building in particular on its existing regulatory sandbox.

“We continually scan the horizon and invest our resources to examine new risks that emerge,” he explained.

Also speaking at the conference, Francois Candelon, global director of the BCG Henderson Institute, said more was needed to maintain the UK’s pre-eminent status in AI development and innovation.

“I think you’ve been extremely strong in terms of technology development, generating academic research, developing AI talent, and fostering an environment conducive to the emergence of start-ups,” Candelon said. “When I look at the uptake, there’s still room for improvement.”

He added that lessons can be learned from China, where his research suggests 80% of businesses have adopted AI, compared to 50% in the UK.

Candelon added that the Chinese government has played a “crucial catalytic role” in creating “vertical AI ecosystems” that can help AI adoption in specific industries, with private companies then helping to lead areas of innovation.

“You already have many ingredients to feed these AI ecosystems and processors,” he added.

“I’m really looking forward to seeing the creation of these AI ecosystems. We may have to compete ecosystem to ecosystem rather than business to business. This is not the prerogative of a single company or actor; all the stakeholders will have to work hand in hand.”