ZDNET Highlights
- The regulatory landscape is evolving and creating new demands.
- Business leaders can use compliance to guide AI innovations.
- Internal and external partners can help organizations deliver results.
The AI gold rush has placed new pressure on governments and other public agencies. As enterprises seek to gain competitive advantage from emerging technologies, governing bodies are eager to enforce rules and regulations that protect individuals and their data.
The most high-profile AI law is the EU AI Act. However, global law firm Bird & Bird has developed an AI Horizon Tracker, which analyzes 22 jurisdictions and offers a broad spectrum of regional perspectives.
Also: 5 ways Lenovo’s AI strategy can drive real results for you, too
Digital and business leaders must find ways to comply with these regulations. But while compliance can be a burden, it shouldn’t be a hindrance – and these five business leaders offer five ways you can use governance to help guide your AI explorations.
1. Explore within constraints
Art Hu, Lenovo’s global CIO, said there is no single answer to the question of how to effectively balance AI innovation and governance.
“The responses of industries, sectors and governments will sometimes vary wildly depending on what your responsibilities are,” he said.
Hu told ZDNET that, as a general rule, business leaders should pay attention to upcoming rules and regulations that must be followed in the age of AI.
Also: 5 ways to keep your AI strategy from going haywire
“The penalties for getting things wrong are quite prevalent right now. You have significant risk in a way that you didn’t have before,” he said, suggesting that leaders should focus on carefully directed AI explorations.
“I think it goes back to the toolbox that you can create and how you encourage innovation, typically, through whitelisting and some type of sandboxing, where you say, explore, but within a constraint, because you don’t want exploration to generate one of these long-tail, adverse outcomes that you’re stuck with.”
2. Work together with partners
Paul Neville, director of digital, data and technology at UK agency The Pensions Regulator (TPR), suggested that business leaders should recognize that AI represents a paradigm shift, not just a fresh take on the way organizations run technology today.
He said: “I’ve said this at a few conferences, but I’ll repeat it: a common belief is that much of what we do today will simply be automated in the future, just a little faster.”
“First, I don’t think this approach is particularly visionary. And second, it won’t take us beyond today’s problems. Visionary leaders should paint a picture of how things could be different.”
Neville told ZDNET that leading executives help other professionals imagine a better future: “If you think AI is just going to make things a little faster than they are today, you’re not going to get what you need. I think there are potentially fundamentally different work patterns and opportunities.”
Also: This company’s AI success is based on 5 essential steps – see how they work for you
At TPR, Neville’s team works with the UK Government to understand how new rules and regulations can guide effective AI explorations.
“There’s a new law, a new pensions bill, and a lot of new technology and new customer experiences will be required,” he said.
“We are working closely with the government to ensure that we are providing modern digital services, and the law will help us do that. AI can help us create something more interactive, interesting, iterative and visual at the same time. That’s the opportunity.”
3. Manage special cases
Martin Hardy, Royal Mail’s cyber portfolio and architecture director, said he believes businesses can use AI for detection and compliance as a route to better risk management.
“In cyber, we do a lot of threat modeling, and a lot of it is quite general and low-level – that’s where my security architects add value, in those specific cases,” he said.
“With AI doing 80% of the work, you’re no longer working from a blank document, and we can say, ‘Oh yes, you need to put this security control in place.’ That means we can give our security professionals time to focus on what might happen – like a particular threat actor we’re concerned about in our region – and that approach really adds value.”
Also: Fear of AI job cuts? 5 ways to future-proof your career – before it’s too late
Hardy told ZDNET that business leaders should also recognize the risks of relying on AI and data-heavy technologies. The message is clear: use AI but proceed with caution.
“By putting all that data into your system, if the AI model is breached, an attacker has a blueprint of where all your vulnerabilities are,” he said.
“So, it’s a Catch-22 situation – if you don’t use AI, others will, and you’ll be left behind. If you use it, and you’re not careful, you could be part of the crowd that gets hit by an attack.”
4. Foster key relationships
Ian Ruffle, head of data and insights at UK auto breakdown specialist RAC, said managing the balance between governance and innovation is all about internal culture.
“Everything comes back to people,” he said. “I think success is as much about making proper use of technology as it is about implementing the right technology – and it’s all about having the right people.”
Ruffle told ZDNET that senior leaders cannot be expected to be aware of every potential threat or risk, which is why establishing a strong culture – particularly one built on working with trusted internal experts – is paramount.
Also: Gemini vs Copilot: I compared AI tools on 7 everyday tasks, and there’s a clear winner
“You have to empower people to care about the individuals that this piece of data is representing,” he said.
“It’s a cultural thing for me. Fostering relationships with your data protection officer and information security teams is almost more important in the long run than moving forward and using the most modern technology.”
In short, balancing governance and innovation is difficult – and keeping humans involved is critical to success.
“You have to walk the tightrope,” Ruffle said. “That’s why I think organizations need humans to think effectively about these problems.”
5. Ask important questions
Erik Mayer, chief clinical information officer at Imperial College London and Imperial College Healthcare NHS Trust, said professionals using data for AI projects should be careful to ensure that the work they do to comply with regulations does not create new issues: “If you over-clean the data, you’re probably biasing the AI. That’s the problem.”
To overcome this challenge, Mayer told ZDNET that his team continues to have regular conversations with regulatory authorities, focused on developing answers to key questions. “What KPIs do you need around the data set to support regulatory approval of AI to ensure that when you put it out into the real world it will work the way it’s intended? What was the quality of the data? How many duplicates, how many missing values? What is the actual data definition?”
Also: Cloud-native computing is set to explode thanks to AI inference work
The lesson for other digital leaders is that attempting to clean data for new projects may inadvertently remove variables that will be useful in the future. Mayer advised other professionals to take proactive steps.
“Ultimately, you want the rawest form of the data. However, when you have to clean it or transform it, you need to document exactly how you transformed it,” he said.
“That’s the core element. That’s the piece we have to get absolutely right. People need to consider how they can say, ‘Yes, it’s safe to implement.’ And then long-term success is about constant validation.”