Member of the European Parliament Michael McNamara: Key challenge in AI regulation is balancing innovation with copyright protection

Speaking at Storyboard18 DNPA Conclave 2025, in New Delhi on February 27, Member of the European Parliament, Michael McNamara said, “AI is being compared to the advent of the Industrial Revolution. It’s influencing now, increasingly, who gets hired, who gets policed, how we interact online.”

By Indrani Bose | February 28, 2025, 9:02 am
Member of the European Parliament Michael McNamara

To ensure AI remains inclusive and democratic, the European Union has taken the lead in establishing the first comprehensive law defining AI systems and safeguarding against potential vulnerabilities. With a forward-looking approach, the European Parliament has set far-reaching regulations that not only anticipate innovation, but also provide a robust defense against potential risks.

Michael McNamara, co-chair of the European Parliament’s AI Working Group and one of the key members who made the EU’s AI Act a reality talked about his experiences of shaping the landmark AI Act at Storyboard18 DNPA Conclave 2025, in New Delhi on February 27.

“The European Union’s Artificial Intelligence Act sets a global precedent for AI governance and is the first of its kind to regulate AI in such a way. It’s rooted in a risk-based approach. The AI Act seeks to balance innovation, economic competitiveness, and fundamental rights. I think that’s a key aspect of what we’re talking about, because there’s been much criticism from various geopolitical actors in the world of the EU approach. It doesn’t set out to stifle innovation in any way,” said McNamara.

“It sets out to encourage innovation, but to do so in a way that balances that right to innovate with the protection of society as a whole and, indeed, to encourage confidence in AI because there are very different approaches to the use of AI across the world. We know it’s very heavily used. Perhaps most heavily used in China,” he added.

“It is quite heavily used in the United States and very little used in the European Union. So part of the approach to the AI Act is, in fact, to encourage the use of AI in the European Union, to encourage confidence in its use among a population that, up to now, has been relatively reluctant to use AI,” he explained.

“AI is being compared to the advent of the Industrial Revolution. It’s influencing now, increasingly, who gets hired, who gets policed, how we interact online. But the challenge is that, as AI subtly reshapes decision-making itself, it also reshapes power, politics, and the society in which it operates. So, when we talk about AI governance, we’re not just regulating a technology,” he stated.

“We’re talking about how it changes how we think, or the potential for it to change how we think, make decisions and act. And that, I suppose, is one of the greatest governance challenges of our time. And I’m aware, since coming here, that India is grappling with that now, with proposals to develop AI regulation. There is a debate on which approach India will take. In my own experience, a good example of where this issue lies right now is the tug of war between AI providers and stakeholders in the drafting of the EU General Purpose AI Code of Conduct. The Act creates separate rules for general-purpose AI models, called the Code of Practice,” he highlighted.

According to McNamara, this Code of Practice will not be legally binding. However, compliance with its guidelines will create a presumption of conformity with the Act’s obligations, and this serves as the incentive for AI developers to engage with the Code of Practice. The AI Office established a committee of independent academics and industry experts to draft the Code, divided into four working groups covering areas such as transparency and copyright, risk mitigation and evaluation, and governance.

Participation in the process, as McNamara explained, is considered strategic by all major tech companies, all of which are represented on the online drafting platform. They are among a thousand other stakeholders across civil society and industry who have contributed to each version of the Code as it is developed. Deadlines for contributions have been extremely short, mostly just weeks, including a brief two-week period to make submissions on the first draft before the second draft was developed, and that two-week period fell over the Christmas holidays in Europe.

The Code will likely be the most comprehensive public document in the world detailing responsible practices for powerful AI models.

McNamara acknowledged that India, like Europe, is grappling with the challenge of balancing technological development with the protection of copyright holders and content creators’ income. He emphasized that there’s a general consensus against a scenario where creators—whether news journalists or artists—are deprived of revenue from their work due to its unauthorized use by others.

According to McNamara, many believe this current model is unsustainable, as it could stifle human content creation, including journalism and creative arts. He highlighted the importance of human judgment in news sources, a trust built over decades, which risks being undermined by general-purpose AI. Specifically, creators whose work is subject to copyright cannot currently determine what materials are used to train AI models. This lack of transparency prevents them from claiming compensation for the use of their work or even verifying if their work has been used at all.

Read More: MeitY Secretary S. Krishnan on AI enforcement and news publishers’ role in driving quality content

Read More: Member of the European Parliament Brando Benifei: AI content needs transparency rules
