Can we regulate AI without stifling innovation? Here’s what experts think


Whether it’s excitement, confusion, apprehension, or outright pessimism, everyone feels something about AI. So it was no surprise that AI technology was one of the main talking points at The Next Web Conference 2023.

In the wake of the EU Parliament’s decision to usher in the EU AI Act — the world’s first comprehensive AI law, drafted to ensure that AI systems used in the EU are “safe, transparent, traceable, non-discriminatory, and environmentally friendly” — the B2B marketing community FINITE hosted a roundtable event on ‘TNW Eve’ exploring the theme of reclaiming a positive future for technology.

With the UK set to host the first global AI summit this autumn to discuss ‘internationally coordinated action’ to mitigate the risks posed by artificial intelligence, regulation remains front and centre. Many are still on the fence as to whether regulation will unnecessarily slow down and restrict innovation or help prevent AI’s rapid ascent from spinning out of control.

Regulate, yay or nay?

Some entrepreneurs may be prioritising founding AI companies in countries with less regulation so they can go to market as quickly as possible. But quick market access is by no means a recipe for customer trust and long-term success, especially in a sector like AI where safety and ethics are being hotly debated.


Anton Ekker, president of the Association of AI Lawyers, made the point that regulators must strive for a balance between regulation and innovation: protecting citizens and ensuring responsible use of the technology while still prioritising the growth that delivers local economic gains. Some might argue the discussion is moot — that the EU has already lost the AI battle, with the global sector valued at $428 billion in 2022. Others, however, point to the EU’s early establishment of the GDPR, which set a global standard for data protection. Their hope is that the same will happen with AI regulation.

It’s clear there are many perspectives on regulations that affect tech innovation, but it’s important to consider the long-term implications if AI development goes unchecked. This is especially important in a field where some of its leading minds are claiming the human race could be in grave danger if the technology isn’t developed safely and responsibly.

Ultimately, the goal of any regulation touching on business innovation should be to establish a fair and ethical environment promoting growth and innovation, while countering risks associated with new technologies.

Workforce and global opportunities

Regardless of any regulations which may be adopted, there is no doubt that AI will have a massive impact on how we work and live. As Omar Kbiri, founder of creative marketing agency Maak, noted, the AI revolution could have the same impact on humans as the industrial revolution. However, for AI to be successfully rolled out across society, humans need to work in tandem with AI-powered machines.

Yet with automation increasing across industries and businesses, it’s unrealistic to expect an army of experts to monitor every decision the technology makes. As Dylan Prins, Global Communications Manager at French payments giant Worldline, notes, regulatory compliance currently requires human authorisation, but this is becoming infeasible at scale. AI has the potential to make human authorisation vastly more efficient and even automate away part of the decision-making process.

The goal for regulators and innovators alike will be to create an environment where the human workforce — across companies of all sizes — can flourish within this new reality by using AI to empower a more diverse, global and democratised business landscape.

Whether the more optimistic predictions for AI become reality will depend on our ability to build safeguards around the technology and to remove bias. As Bas van Berkestijn of 3D-printing company AM-flow points out, many recently launched AI tools are built on existing training data, which could end up repeating humanity’s mistakes. Companies have a responsibility to ensure that AI isn’t being trained by people without a grasp of the historical development of democracy, or a good understanding of diversity and equity.

Technology as powerful as AI requires cautious optimism. Innovation is to be encouraged, but regulatory guardrails are essential to ensure the long-term success of these world-changing solutions.

If you’re interested in reading more perspectives on AI and communications, download the report ‘Clarity on AI: the Impact of AI on Comms’.
