
Without Clear and Consistent Rules, EU Risks Entrepreneurs Looking Elsewhere for AI Innovation




Co-written with Erick Espinosa

In breaking news for Europe’s AI industry, dozens of leading European companies, researchers, and developers issued an open letter calling on European Union leaders and policymakers to tackle Europe’s trailing innovation footprint.

While other global regions and tech markets press ahead with AI, embracing the technology and supporting its rapid implementation, Europe risks falling behind due to fragmented policies and complicated data regulation frameworks, the letter argues.

Coordinated by Meta, the signatories warn in no uncertain terms that the bloc’s regulations are hampering innovation and economic growth. The letter argues that fragmented data rules will stall innovation by restricting access to open and multimodal AI models. Yet this isn’t just an issue for software engineers to be concerned with.

Research from JP Morgan estimates that generative AI (GenAI) could increase global GDP by 10% over the coming decade. Meanwhile, without clear, consistently applied rules enabling the use of European data, CEOs and tech founders may decide to invest tens of billions of euros elsewhere, putting the EU at a further disadvantage.

EU competitiveness in a new era of AI

The issue of EU competitiveness has been a growing concern for years, both for AI adoption and for tech innovation more broadly.

The latest call for action comes hot on the heels of Mario Draghi’s EU competitiveness report, published at the beginning of September. The former Italian prime minister and renowned economist stated that the bloc must make investments to close the innovation gap with markets like the US and China in order to remain competitive and maintain the region’s current standard of living.

The report leaves no doubt this is a “do or die” moment for EU innovation. However, a failure to address this issue sooner means that Europe is already significantly behind the curve when it comes to AI competitiveness.

Even if the issue of fragmented regulation can be solved quickly, the gap with the US and Chinese markets is already so wide that it will be hard to close.

While Europe’s tight data protection policies have caused AI innovation to stagnate to some extent, a report from the US Federal Trade Commission suggests that such frameworks will soon become the norm, not the outlier.

To date, tech firms in the US have enjoyed almost free rein in how they are permitted to collect, store and handle data. However, as AI and digital technologies permeate every area of our lives and evolve at breakneck speed, this status quo is being called into question.

Instead of playing catch-up with solutions already on offer from the likes of Meta and OpenAI, experts say Europe should look to become a global leader, helping to implement new, fit-for-purpose data regulations that can operate across international markets and enable healthy innovation in the tech startup sector without putting citizens or society at undue risk.

With established AI initiatives, industry-specific solutions for data compliance and a network of researchers committed to finding global solutions already in place, Europe can offer immediate blueprints for future AI frameworks.

AI as part of a broader tech ecosystem

Part of the difficulty in addressing the regulatory challenges associated with AI lies in the interconnected nature of the development process. It’s impossible to create meaningful, useful AI tools in a vacuum.

Disruptive startups need access to comprehensive datasets for model training. The way tools are built hinges on the infrastructure they will be delivered on, and requires the involvement of digital service providers and system builders. Investors, government funders and academic research units need to be in close, continuous communication to drive a strategic research agenda.

This means AI regulation needs to take the needs and goals of all these moving parts into account to ensure that innovation pipelines are free-flowing.

For instance, Microsoft plans to invest $3.2 billion in AI facilities in Sweden, its largest-ever infrastructure bet in the Nordic country. The company Data4 has also announced the creation of a new AI facility outside Athens, signaling a boost to Greece’s digital infrastructure and economy. The European Laboratory for Learning and Intelligent Systems (ELLIS) is a pan-European AI network of excellence that focuses on fundamental science, technical innovation and societal impact.

Meanwhile, AI research institutes like the Alan Turing Institute provide crucial research into how AI affects our everyday lives, on matters such as the European elections, while GPU-accelerated analytics providers such as SQream are reducing the costs associated with AI analysis.

AI-compliant tools for targeted use cases

Europe also boasts extremely promising examples of how to create robust regulatory frameworks that also help fuel innovation for targeted industry use cases.

Part of the problem with current regulations is the tendency to rely on a one-size-fits-all approach, even though data compliance needs vary widely across industries.

However, these tight regulations mean that European startups are already building tools and AI platforms to support niche use cases. These demonstrate how innovation in this space can be achieved without compromising compliance or privacy. In the future, they will help make the case for a more nuanced, agile approach to evolving AI guardrails.

For example, patient data is arguably one of the most sensitive data types there is. AI has the potential to transform patient outcomes across the board, but siloed datasets and fragmented healthcare provider frameworks hinder the creation of these life-changing AI solutions.

Yet it is possible to support AI innovation and remain compliant. Newton’s Tree has built an AI platform that helps healthcare institutions across the globe test and validate new AI products through federated evaluation, built with privacy and compliance at its core.

Robin Carpenter, Head of AI Governance and Policy at Newton’s Tree, explained: “The local challenge for those developing AI is building algorithms that healthcare needs. You do this by developing it within a collaborative framework that is legally compliant, technically robust, and ethically responsible.”

Sustainable AI through regulation

Given the breakneck speed at which OpenAI and other GenAI solutions are evolving, there are understandable concerns, from both industry insiders and the general public, about how to manage the technology.

The current lack of coherent frameworks at an international scale means that these risks are often self-managed by tech firms and AI creators. OpenAI, for example, has self-rated its GenAI models as “medium risk” for their persuasive abilities.

At the same time, some leaders are not buying the doomsday theories about AI. Ranjit Tinaikar, CEO of Ness Digital Engineering, which operates an innovation center in Europe, said that in certain ways, “AI is not the point. Software engineering productivity is the point. We want to improve software engineering productivity with and without AI.”

“When we talk about productivity, it’s not because of AI. AI is just one more stair in that staircase to get to better productivity,” added the executive.

When it comes to self-regulation of AI, a global approach will be important. Both private and public AI developers should look at regulation as a way to support, not hinder, progress in delivering safe, reliable, sustainable solutions.

The US has long held the lead in AI innovation. Yet this also means it is feeling the brunt of the growing pains in real time.

As these challenges come to light alongside the evolution of AI, Europe offers countless examples of how AI can be developed and implemented at scale within regulatory systems that support all stakeholders fairly.


