eu Archives - AI News
https://www.artificialintelligence-news.com/news/tag/eu/

Meta will train AI models using EU user data
https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/
Tue, 15 Apr 2025

Meta has confirmed plans to utilise content shared by its adult users in the European Union (EU) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta will train AI models using EU user data appeared first on AI News.

CERTAIN drives ethical AI compliance in Europe
https://www.artificialintelligence-news.com/news/certain-drives-ethical-ai-compliance-in-europe/
Wed, 26 Feb 2025

EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe amid increasing regulations like the EU AI Act.

CERTAIN — short for “Certification for Ethical and Regulatory Transparency in Artificial Intelligence” — will focus on the development of tools and frameworks that promote transparency, compliance, and sustainability in AI technologies.

The project is led by Idemia Identity & Security France in collaboration with 19 partners across ten European countries, including the St. Pölten University of Applied Sciences (UAS) in Austria. With its official launch in January 2025, CERTAIN could serve as a blueprint for global AI governance.

Driving ethical AI practices in Europe

According to Sebastian Neumaier, Senior Researcher at the St. Pölten UAS’ Institute of IT Security Research and project manager for CERTAIN, the goal is to address crucial regulatory and ethical challenges.  

“In CERTAIN, we want to develop tools that make AI systems transparent and verifiable in accordance with the requirements of the EU’s AI Act. Our goal is to develop practically feasible solutions that help companies to efficiently fulfil regulatory requirements and sustainably strengthen confidence in AI technologies,” emphasised Neumaier.  

To achieve this, CERTAIN aims to create user-friendly tools and guidelines that simplify even the most complex AI regulations—helping organisations both in the public and private sectors navigate and implement these rules effectively. The overall intent is to provide a bridge between regulation and innovation, empowering businesses to leverage AI responsibly while fostering public trust.

Harmonising standards and improving sustainability  

One of CERTAIN’s primary objectives is to establish consistent standards for data sharing and AI development across Europe. By setting industry-wide norms for interoperability, the project seeks to improve collaboration and efficiency in the use of AI-driven technologies.

The effort to harmonise data practices isn’t just limited to compliance; it also aims to unlock new opportunities for innovation. CERTAIN’s solutions will create open and trustworthy European data spaces—essential components for driving sustainable economic growth.  

In line with the EU’s Green Deal, CERTAIN places a strong focus on sustainability. AI technologies, while transformative, come with significant environmental challenges—such as high energy consumption and resource-intensive data processing.  

CERTAIN will address these issues by promoting energy-efficient AI systems and advocating for eco-friendly methods of data management. This dual approach not only aligns with EU sustainability goals but also ensures that AI development is carried out with the health of the planet in mind.

A collaborative framework to unlock AI innovation

A unique aspect of CERTAIN is its approach to fostering collaboration and dialogue among stakeholders. The project team at St. Pölten UAS is actively engaging with researchers, tech companies, policymakers, and end-users to co-develop, test, and refine ideas, tools, and standards.  

This practice-oriented exchange extends beyond product development. CERTAIN also serves as a central authority for informing stakeholders about legal, ethical, and technical matters related to AI and certification. By maintaining open channels of communication, CERTAIN ensures that its outcomes are not only practical but also widely adopted.   

CERTAIN is part of the EU’s Horizon Europe programme, specifically under Cluster 4: Digital, Industry, and Space.

The project’s multidisciplinary and international consortium includes leading academic institutions, industrial giants, and research organisations, making it a powerful collective effort to shape the future of AI in Europe.  

In January 2025, representatives from all 20 consortium members met in Osny, France, to kick off their collaborative mission. The two-day meeting set the tone for the project’s ambitious agenda, with partners devising strategies for tackling the regulatory, technical, and ethical hurdles of AI.  

Ensuring compliance with ethical AI regulations in Europe 

As the EU’s AI Act edges closer to implementation, guidelines and tools like those developed under CERTAIN will be pivotal.

The Act will impose strict requirements on AI systems, particularly those deemed “high-risk,” such as applications in healthcare, transportation, and law enforcement.

While these regulations aim to ensure safety and accountability, they also pose challenges for organisations seeking to comply.  

CERTAIN seeks to alleviate these challenges by providing actionable solutions that align with Europe’s legal framework while encouraging innovation. By doing so, the project will play a critical role in positioning Europe as a global leader in ethical AI development.  

See also: Endor Labs: AI transparency vs ‘open-washing’


The post CERTAIN drives ethical AI compliance in Europe  appeared first on AI News.

Ursula von der Leyen: AI race ‘is far from over’
https://www.artificialintelligence-news.com/news/ursula-von-der-leyen-ai-race-is-far-from-over/
Tue, 11 Feb 2025

Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris.

While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths to carve a leading role for itself.

“This is the third summit on AI safety in just over one year,” von der Leyen remarked. “In the same period, three new generations of ever more powerful AI models have been released. Some expect models that will approach human reasoning within a year’s time.”

The European Commission President set the tone of the event by contrasting the groundwork laid in previous summits with the urgency of this one.

“Past summits focused on laying the groundwork for AI safety. Together, we built a shared consensus that AI will be safe, that it will promote our values and benefit humanity. But this Summit is focused on action. And that is exactly what we need right now.”

As the world witnesses AI’s disruptive power, von der Leyen urged Europe to “formulate a vision of where we want AI to take us, as society and as humanity.” Growing adoption, “in the key sectors of our economy, and for the key challenges of our times,” provides a golden opportunity for the continent to lead, she argued.

The case for a European approach to the AI race 

Von der Leyen rejected notions that Europe has fallen behind its global competitors.

“Too often, I hear that Europe is late to the race – while the US and China have already got ahead. I disagree,” she stated. “The frontier is constantly moving. And global leadership is still up for grabs.”

Instead of replicating what other regions are doing, she called for doubling down on Europe’s unique strengths to define the continent’s distinct approach to AI.

“Too often, I have heard that we should replicate what others are doing and run after their strengths,” she said. “I think that instead, we should invest in what we can do best and build on our strengths here in Europe, which are our science and technology mastery that we have given to the world.”

Von der Leyen defined three pillars of the so-called “European brand of AI” that sets it apart:

  • Focusing on high-complexity, industry-specific applications
  • Taking a cooperative, collaborative approach to innovation
  • Embracing open-source principles

“This summit shows there is a distinct European brand of AI,” she asserted. “It is already driving innovation and adoption. And it is picking up speed.”

Accelerating innovation: AI factories and gigafactories  

To maintain its competitive edge, Europe must supercharge its AI innovation, von der Leyen stressed.

A key component of this strategy lies in its computational infrastructure. Europe already boasts some of the world’s fastest supercomputers, which are now being leveraged through the creation of “AI factories.”

“In just a few months, we have set up a record of 12 AI factories,” von der Leyen revealed. “And we are investing €10 billion in them. This is not a promise—it is happening right now, and it is the largest public investment for AI in the world, which will unlock over ten times more private investment.”

Beyond these initial steps, von der Leyen unveiled an even more ambitious initiative. AI gigafactories, built on the scale of CERN’s Large Hadron Collider, will provide the infrastructure needed for training AI systems at unprecedented scales. They aim to foster collaboration between researchers, entrepreneurs, and industry leaders.

“We provide the infrastructure for large computational power,” von der Leyen explained. “Talents of the world are welcome. Industries will be able to collaborate and federate their data.”

The cooperative ethos underpinning AI gigafactories is part of a broader European push to balance competition with collaboration.

“AI needs competition but also collaboration,” she emphasised, highlighting that the initiative will serve as a “safe space” for these cooperative efforts.

Building trust with the AI Act

Crucially, von der Leyen reiterated Europe’s commitment to making AI safe and trustworthy. She pointed to the EU AI Act as the cornerstone of this strategy, framing it as a harmonised framework to replace fragmented national regulations across member states.

“The AI Act [will] provide one single set of safety rules across the European Union – 450 million people – instead of 27 different national regulations,” she said, before acknowledging businesses’ concerns about regulatory complexities.

“At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”

€200 billion to remain in the AI race

Financing such ambitious plans naturally requires significant resources. Von der Leyen praised the recently launched EU AI Champions Initiative, which has already pledged €150 billion from providers, investors, and industry.

During her speech at the summit, von der Leyen announced the Commission’s complementary InvestAI initiative that will bring in an additional €50 billion. Altogether, the result is mobilising a massive €200 billion in public-private AI investments.

“We will have a focus on industrial and mission-critical applications,” she said. “It will be the largest public-private partnership in the world for the development of trustworthy AI.”

Ethical AI is a global responsibility

Von der Leyen closed her address by framing Europe’s AI ambitions within a broader, humanitarian perspective, arguing that ethical AI is a global responsibility.

“Cooperative AI can be attractive well beyond Europe, including for our partners in the Global South,” she proclaimed, extending a message of inclusivity.

Von der Leyen expressed full support for the AI Foundation launched at the summit, highlighting its mission to ensure widespread access to AI’s benefits.

“AI can be a gift to humanity. But we must make sure that benefits are widespread and accessible to all,” she remarked.

“We want AI to be a force for good. We want an AI where everyone collaborates and everyone benefits. That is our path – our European way.”

See also: AI Action Summit: Leaders call for unity and equitable development


The post Ursula von der Leyen: AI race ‘is far from over’ appeared first on AI News.

EU AI Act: What businesses need to know as regulations go live
https://www.artificialintelligence-news.com/news/eu-ai-act-what-businesses-need-know-regulations-go-live/
Fri, 31 Jan 2025

Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect.

While the full compliance requirements won’t come into force until mid-2025, the initial phase of the EU AI Act begins February 2nd and includes significant prohibitions on specific AI applications. Businesses across the globe that operate in the EU must now navigate a regulatory landscape with strict rules and high stakes.

The new regulations prohibit the deployment or use of several high-risk AI systems. These include applications such as social scoring, emotion recognition, real-time remote biometric identification in public spaces, and other scenarios deemed unacceptable under the Act.

Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it imperative for organisations to understand and comply with the restrictions.  

Early compliance challenges  

“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”


Ergin highlights that even though most compliance requirements will not take effect until mid-2025, the early prohibitions set a decisive tone.

“For businesses, the pressure in 2025 is twofold. They must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production,” he remarks.

Ergin believes the key to compliance and success lies in data governance.

“Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”

To adapt, companies must prioritise strengthening their approach to data quality.

“Strengthening data quality and governance is no longer optional, it’s critical. To ensure both compliance and prove the value of AI, businesses must invest in making sure data is accurate, holistic, integrated, up-to-date and well-governed,” says Ergin.

“This isn’t just about meeting regulatory demands; it’s about enabling AI to deliver real business outcomes. As 82% of EU companies plan to increase their GenAI investments in 2025, ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”

EU AI Act has no borders

The extraterritorial scope of the EU AI Act means non-EU organisations are not off the hook. As Marcus Evans, a partner at Norton Rose Fulbright, explains, the Act applies far beyond the EU’s borders.


“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organisations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”  

Evans advises businesses to start by auditing their AI use. “At this stage, businesses must first understand where AI is being used in their organisation so that they can then assess whether any use cases may trigger the prohibitions. Building on that initial inventory, a wider governance process can then be introduced to ensure AI use is assessed, remains outside the prohibitions, and complies with the AI Act.”  

While organisations work to align their AI practices with the new regulations, additional challenges remain. Compliance requires addressing other legal complexities such as data protection, intellectual property (IP), and discrimination risks.  

Evans emphasises that raising AI literacy within organisations is also a critical step.

“Any organisations in scope must also take measures to ensure their staff – and anyone else dealing with the operation and use of their AI systems on their behalf – have a sufficient level of AI literacy,” he states.

“AI literacy will play a vital role in AI Act compliance, as those involved in governing and using AI must understand the risks they are managing.”

Encouraging responsible innovation  

The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.


“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.

Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress.

“It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts.

“It is critical that we focus on eliminating bias and prioritising fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”

What’s prohibited under the EU AI Act?  

To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes:  

  • Harmful subliminal, manipulative, and deceptive techniques  
  • Harmful exploitation of vulnerabilities  
  • Unacceptable social scoring  
  • Individual crime risk assessment and prediction (with some exceptions)  
  • Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases  
  • Emotion recognition in areas such as the workplace and education (with some exceptions)  
  • Biometric categorisation to infer sensitive categories (with some exceptions)  
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)  

The Commission’s forthcoming guidance on which “AI systems” fall under these categories will be critical for businesses seeking to ensure compliance and reduce legal risks. Additionally, companies should anticipate further clarification and resources at the national and EU levels, such as the upcoming webinar hosted by the AI Office.

A new landscape for AI regulations

The early implementation of the EU AI Act represents just the beginning of what is a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organisations must learn to navigate new rules and continuously adapt to future changes.  

For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.

(Photo by Guillaume Périgois)

See also: ChatGPT Gov aims to modernise US government agencies


The post EU AI Act: What businesses need to know as regulations go live appeared first on AI News.

AI governance: Analysing emerging global regulations
https://www.artificialintelligence-news.com/news/ai-governance-analysing-emerging-global-regulations/
Thu, 19 Dec 2024

The post AI governance: Analysing emerging global regulations appeared first on AI News.

]]>
Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more.

AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.

“The boom of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys.

“This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”

Regions diverge in regulatory strategy

The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026.

Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”

Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations specific to certain AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021.

“In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said.

“Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”

The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level.

“There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted.

This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason.

“There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys.

“It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”

Balancing innovation and safety

Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions.

Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack.

“More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.

This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised.

AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators.

“Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.

Impact on related industries

One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution.

“From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys. 

However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny.

“AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added.

“At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”

Copyright battles and legal precedents

The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools.

High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission.

“These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys.

While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve?

“Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys.

“It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”

Just this week, the UK Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out.

Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.

The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms.

(Photo by Nathan Bingle)

See also: Anthropic urges AI regulation to avoid catastrophes

The post AI governance: Analysing emerging global regulations appeared first on AI News.

EU introduces draft regulatory guidance for AI models https://www.artificialintelligence-news.com/news/eu-introduces-draft-regulatory-guidance-for-ai-models/ https://www.artificialintelligence-news.com/news/eu-introduces-draft-regulatory-guidance-for-ai-models/#respond Fri, 15 Nov 2024 14:52:05 +0000 https://www.artificialintelligence-news.com/?p=16496
The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.

The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:

  • Working Group 1: Transparency and copyright-related rules
  • Working Group 2: Risk identification and assessment for systemic risk
  • Working Group 3: Technical risk mitigation for systemic risk
  • Working Group 4: Governance risk mitigation for systemic risk

The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes into account international approaches, striving for proportionality to risks, and aims to be future-proof by contemplating rapid technological changes.

Key objectives outlined in the draft include:

  • Clarifying compliance methods for providers of general-purpose AI models
  • Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
  • Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
  • Continuously assessing and mitigating systemic risks associated with AI models

Recognising and mitigating systemic risks

A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.

As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
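The hierarchy the draft proposes — measures tracked against key performance indicators — can be sketched as a simple data structure. The class names, KPI values, and the example measure below are all illustrative assumptions, not terminology taken from the draft Code itself:

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    """An illustrative key performance indicator for a mitigation measure."""
    name: str
    target: float
    current: float

    def met(self) -> bool:
        return self.current >= self.target

@dataclass
class Measure:
    """A risk-mitigation measure within a safety and security framework (SSF),
    judged compliant only when every attached KPI meets its target."""
    name: str
    kpis: list = field(default_factory=list)

    def compliant(self) -> bool:
        return all(k.met() for k in self.kpis)

# Hypothetical example: tracking serious-incident reporting across a model's lifecycle.
incident_response = Measure(
    "Serious-incident reporting",
    [KPI("incidents triaged within 72h (%)", target=100.0, current=98.0)],
)
print(incident_response.compliant())  # False -- the KPI target is not yet met
```

The point of such a structure is that compliance becomes a property that can be evaluated continuously over a model's lifecycle, rather than asserted once at release.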

The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.

Taking a proactive stance to AI regulatory guidance

The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.

As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.

While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.

This draft is open for written feedback until 28 November 2024. 

See also: Anthropic urges AI regulation to avoid catastrophes

The post EU introduces draft regulatory guidance for AI models appeared first on AI News.

EU AI Act: Early prep could give businesses competitive edge https://www.artificialintelligence-news.com/news/eu-ai-act-early-prep-could-give-businesses-competitive-edge/ https://www.artificialintelligence-news.com/news/eu-ai-act-early-prep-could-give-businesses-competitive-edge/#respond Tue, 22 Oct 2024 13:21:32 +0000 https://www.artificialintelligence-news.com/?p=16358
The EU AI Act is set to fully take effect in August 2026, but some provisions are coming into force even earlier.

The legislation establishes a first-of-its-kind regulatory framework for AI systems, employing a risk-based approach that categorises AI applications based on their potential impact on safety, human rights, and societal wellbeing.

“Some systems are banned entirely, while systems deemed ‘high-risk’ are subject to stricter requirements and assessments before deployment,” explains the DPO Centre, a data protection consultancy.

Similar to GDPR, the Act’s extra-territorial reach means it applies to any organisation marketing, deploying, or using AI systems within the EU, regardless of where the system is developed. Businesses will be classified primarily as either ‘Providers’ or ‘Deployers,’ with additional categories for ‘Distributors,’ ‘Importers,’ ‘Product Manufacturers,’ and ‘Authorised Representatives.’
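The Act's risk-based approach is widely described as a four-tier pyramid, which can be illustrated with a minimal sketch. The example systems and the mapping below are illustrative assumptions only — real classification requires legal analysis of the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers commonly used to describe the EU AI Act's approach."""
    UNACCEPTABLE = "unacceptable"  # banned entirely
    HIGH = "high"                  # stricter requirements and assessments before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping only -- not legal guidance.
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def requires_pre_deployment_assessment(tier: RiskTier) -> bool:
    """Only high-risk systems face assessments before deployment in this sketch."""
    return tier is RiskTier.HIGH

print(requires_pre_deployment_assessment(EXAMPLE_SYSTEMS["CV-screening tool for recruitment"]))  # True
```

The same classification exercise then determines which of the Act's roles — Provider, Deployer, and so on — an organisation occupies for each system it touches.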

For organisations developing or deploying AI systems, particularly those classified as high-risk, compliance preparation promises to be complex. However, experts suggest viewing this as an opportunity rather than a burden.

“By embracing compliance as a catalyst for more transparent AI usage, businesses can turn regulatory demands into a competitive advantage,” notes the DPO Centre.

Key preparation strategies include comprehensive staff training, establishing robust corporate governance, and implementing strong cybersecurity measures. The legislation’s requirements often overlap with existing GDPR frameworks, particularly regarding transparency and accountability.

Organisations must also adhere to ethical AI principles and maintain clear documentation of their systems’ functionality, limitations, and intended use. The EU is currently developing specific codes of practice and templates to assist with compliance obligations.

For businesses uncertain about their obligations, experts recommend seeking professional guidance early. Tools like the EU AI Act Compliance Checker can help organisations verify their systems’ alignment with regulatory requirements.

Rather than viewing compliance as merely a regulatory burden, forward-thinking organisations should view the EU’s AI Act as an opportunity to demonstrate commitment to responsible AI development and build greater trust with their customers.

See also: AI governance gap: 95% of firms haven’t implemented frameworks

The post EU AI Act: Early prep could give businesses competitive edge appeared first on AI News.

Tech industry giants urge EU to streamline AI regulations https://www.artificialintelligence-news.com/news/tech-industry-giants-urge-eu-streamline-ai-regulations/ https://www.artificialintelligence-news.com/news/tech-industry-giants-urge-eu-streamline-ai-regulations/#respond Thu, 19 Sep 2024 15:20:55 +0000 https://www.artificialintelligence-news.com/?p=16117
Meta has spearheaded an open letter calling for urgent reform of AI regulations in the EU. The letter, which garnered support from over 50 prominent companies – including Ericsson, SAP, and Spotify – was published as an advert in the Financial Times.

The collective voice of these industry leaders highlights a pressing issue: Europe’s bureaucratic approach to AI regulation may be stifling innovation and causing the region to lag behind its global counterparts.

“Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era due to inconsistent regulatory decision making,” the letter states, painting a stark picture of the continent’s current position in the AI race.

The signatories emphasise two key areas of concern. Firstly, they point to the development of ‘open’ models, which are freely available for use, modification, and further development. These models are lauded for their potential to “multiply the benefits and spread social and economic opportunity” while simultaneously bolstering sovereignty and control.

Secondly, the letter underscores the importance of ‘multimodal’ models, which integrate text, images, and speech capabilities. The signatories argue that the leap from text-only to multimodal models is akin to “the difference between having only one sense and having all five of them”. They assert that these advanced models could significantly boost productivity, drive scientific research, and inject hundreds of billions of euros into the European economy.

However, the crux of the matter lies in the regulatory landscape. The letter expresses frustration with the uncertainty surrounding data usage for AI model training, stemming from interventions by European Data Protection Authorities. This ambiguity, they argue, could result in Large Language Models (LLMs) lacking crucial Europe-specific training data.

To address these challenges, the signatories call for “harmonised, consistent, quick and clear decisions under EU data regulations that enable European data to be used in AI training for the benefit of Europeans”. They stress the need for “decisive action” to unlock Europe’s potential for creativity, ingenuity, and entrepreneurship, which they believe is essential for the region’s prosperity and technological leadership.

While the letter acknowledges the importance of consumer protection, it also highlights the delicate balance regulators must strike to avoid hindering commercial progress. The European Commission’s approach to regulation has often been criticised for its perceived heavy-handedness, and this latest appeal from industry leaders adds weight to growing concerns about the region’s global competitiveness in the AI sector.

The pressure is rapidly mounting on European policymakers to create a regulatory environment that fosters innovation while maintaining appropriate safeguards. The coming months will likely see intensified dialogue between industry stakeholders and regulators as they grapple with these complex issues that will shape the future of AI development in Europe.

(Photo by Sara Kurfeß)

See also: SolarWinds: IT professionals want stronger AI regulation

The post Tech industry giants urge EU to streamline AI regulations appeared first on AI News.

Balancing innovation and trust: Experts assess the EU’s AI Act https://www.artificialintelligence-news.com/news/balancing-innovation-trust-experts-assess-eu-ai-act/ https://www.artificialintelligence-news.com/news/balancing-innovation-trust-experts-assess-eu-ai-act/#respond Wed, 31 Jul 2024 15:48:45 +0000 https://www.artificialintelligence-news.com/?p=15577
As the EU’s AI Act prepares to come into force tomorrow, industry experts are weighing in on its potential impact, highlighting its role in building trust and encouraging responsible AI adoption.

Curtis Wilson, Staff Data Engineer at Synopsys’ Software Integrity Group, believes the new regulation could be a crucial step in addressing the AI industry’s most pressing challenge: building trust.

“The greatest problem facing AI developers is not regulation, but a lack of trust in AI,” Wilson stated. “For an AI system to reach its full potential, it needs to be trusted by the people who use it.”

This sentiment is echoed by Paul Cardno, Global Digital Automation & Innovation Senior Manager at 3M, who noted, “With nearly 80% of UK adults now believing AI needs to be heavily regulated, the introduction of the EU’s AI Act is something that businesses have been long-waiting for.”

Both experts emphasise the Act’s potential to foster confidence in AI technologies. Wilson explained that while his company has implemented internal measures to build trust, external regulation is equally important.

“I see regulatory frameworks like the EU AI Act as an essential component to building trust in AI,” Wilson said. “The strict rules and punishing fines will deter careless developers and help customers feel more confident in trusting and using AI systems.”

Cardno added, “We know that AI is shaping the future, but companies will only be able to reap the rewards if they have the confidence to rethink existing processes and break away from entrenched structures.”

The EU AI Act primarily focuses on high-risk systems and foundational models. Wilson noted that many of its requirements align with existing best practices in data science, such as risk management, testing procedures, and comprehensive documentation.

The EU AI Act’s impact on UK businesses extends beyond those directly selling to EU markets.

Wilson pointed out that certain aspects of the Act may apply to Northern Ireland due to the Windsor Framework. Additionally, the UK government is developing its own AI regulations, with a recent whitepaper emphasising interoperability with EU and US regulations.

“While the EU Act isn’t perfect, and needs to be assessed in relation to other global regulations, having a clear framework and guidance on AI from one of the world’s major economies will help encourage those who remain on the fence to tap into the AI revolution,” Cardno explained.

While acknowledging that the new regulations may create some friction, particularly around registration and certification, Wilson emphasised that many of the Act’s obligations are already standard practice for responsible companies. However, he recognised that small companies and startups might face greater challenges.

“Small companies and start-ups will experience issues more strongly,” Wilson said. “The regulation acknowledges this and has included provisions for sandboxes to foster AI innovation for these smaller businesses.”

However, Wilson notes that these sandboxes will be established at the national level by individual EU member states, potentially limiting access for UK businesses.

As the AI landscape continues to evolve, the EU AI Act represents a significant step towards establishing a framework for responsible AI development and deployment.

“Having a clear framework and guidance on AI from one of the world’s major economies will help encourage those who remain on the fence to tap into the AI revolution, ensuring it has a safe, positive ongoing influence for all organisations operating across the EU, which can only be a promising step forwards for the industry,” concludes Cardno.

(Photo by Guillaume Périgois)

See also: UAE blocks US congressional meetings with G42 amid AI transfer concerns

The post Balancing innovation and trust: Experts assess the EU’s AI Act appeared first on AI News.

Meta joins Apple in withholding AI models from EU users https://www.artificialintelligence-news.com/news/meta-joins-apple-withholding-ai-models-eu-users/ https://www.artificialintelligence-news.com/news/meta-joins-apple-withholding-ai-models-eu-users/#respond Thu, 18 Jul 2024 14:10:21 +0000 https://www.artificialintelligence-news.com/?p=15450
Meta has announced it will not be launching its upcoming multimodal AI model in the European Union due to regulatory concerns.

This decision from Meta comes on the heels of Apple’s similar move to exclude the EU from its Apple Intelligence rollout, signalling a growing trend of tech giants hesitating to introduce advanced AI technologies in the region.

Meta’s latest multimodal AI model – capable of handling video, audio, images, and text – was set to be released under an open license. However, Meta’s decision will prevent European companies from utilising this technology, potentially putting them at a disadvantage in the global AI race.

“We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment,” a Meta spokesperson stated.

A text-only version of Meta’s Llama 3 model is still expected to launch in the EU.

Meta’s announcement comes just days after the EU finalised compliance deadlines for its new AI Act. Tech companies operating in the EU will have until August 2026 to comply with rules surrounding copyright, transparency, and specific AI applications like predictive policing.

The withholding of these advanced AI models from the EU market creates a challenging situation for companies outside the region. Those hoping to provide products and services utilising these models will be unable to offer them in one of the world’s largest economic markets.

Meta plans to integrate its multimodal AI models into products like the Meta Ray-Ban smart glasses. The company’s EU exclusion will extend to future multimodal AI model releases as well.

As more tech giants potentially follow suit, the EU may face challenges in maintaining its position as a leader in technological innovation while balancing concerns about AI’s societal impacts.

(Photo by engin akyurt)

See also: AI could unleash £119 billion in UK productivity

The post Meta joins Apple in withholding AI models from EU users appeared first on AI News.

Microsoft and Apple back away from OpenAI board https://www.artificialintelligence-news.com/news/microsoft-apple-back-away-openai-board/ https://www.artificialintelligence-news.com/news/microsoft-apple-back-away-openai-board/#respond Wed, 10 Jul 2024 15:51:00 +0000 https://www.artificialintelligence-news.com/?p=15233
Microsoft and Apple have decided against taking up board seats at OpenAI. The decision comes as regulatory bodies intensify their scrutiny of big tech’s involvement in AI development and deployment.

According to a Bloomberg report on July 10, citing an anonymous source familiar with the matter, Microsoft has officially communicated its withdrawal from the OpenAI board. This move comes approximately a year after the Redmond-based company made a substantial $13 billion investment in OpenAI in April 2023.

In a memo addressed to OpenAI, Microsoft stated: “Over the past eight months we have witnessed significant progress from the newly formed board and are confident in the company’s direction.” The tech giant added, “We no longer believe our limited role as an observer is necessary.”

Contrary to recent reports suggesting that Apple would secure an observer role on OpenAI’s board as part of a landmark agreement announced in June, it appears that OpenAI will now have no board observers following Microsoft’s departure.

Responding to these developments, OpenAI expressed gratitude towards Microsoft, stating, “We’re grateful to Microsoft for voicing confidence in the board and the direction of the company, and we look forward to continuing our successful partnership.”

This retreat from board involvement by major tech players occurs against a backdrop of mounting regulatory pressure. Concerns about the potential impact of big tech on AI development and industry dominance have prompted increased scrutiny from regulatory bodies worldwide.

In June, European Union regulators announced that OpenAI could face an EU antitrust investigation over its partnership with Microsoft. EU competition chief Margrethe Vestager also revealed plans for local regulators to seek additional third-party views and survey firms such as Microsoft, Google, Meta, and ByteDance’s TikTok regarding their AI partnerships.

The decision by Microsoft and Apple to step back from board positions at OpenAI could be interpreted as a strategic move to mitigate potential regulatory challenges. By maintaining a more arm’s length relationship with the AI firm, these tech giants may be attempting to avoid accusations of undue influence or control over AI development.

Alex Haffner, a competition partner at Fladgate, said:

“It is hard not to conclude that Microsoft’s decision has been heavily influenced by the ongoing competition/antitrust scrutiny of its (and other major tech players) influence over emerging AI players such as Open AI.

Microsoft scored a ‘win’ in this regard at the end of June when the EU Commission announced it was dropping its merger control probe of Microsoft and Open AI, an investigation originally announced when Open AI re-shaped its board structure at the time of Sam Altman’s on-off departure from the company.

However, the Commission confirmed it was still looking at the competitive impact of the broader arrangements between the parties and it is clear that regulators are very much focussed on the complex web of interrelationships that big tech has created with AI providers, hence the need for Microsoft and others to carefully consider how they structure these arrangements going forward.”

As AI continues to play an increasingly critical role in technological advancement and societal change, the balance between innovation, competition, and regulation remains a complex challenge for both industry players and policymakers.

The coming months will likely see continued scrutiny of AI partnerships and investments, as regulators worldwide grapple with the task of ensuring fair competition and responsible AI development.

(Photo by Andrew Neel)

See also: Nvidia: World’s most valuable company under French antitrust fire

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Nvidia: World’s most valuable company under French antitrust fire https://www.artificialintelligence-news.com/news/nvidia-worlds-most-valuable-company-under-french-antitrust-fire/ Thu, 04 Jul 2024 09:44:01 +0000

The post Nvidia: World’s most valuable company under French antitrust fire appeared first on AI News.

Nvidia recently overtook Microsoft as the world’s most valuable company, and is now in the crosshairs of French antitrust regulators. The French competition authority is reportedly preparing to charge Nvidia with anti-competitive practices, part of a wider European effort to maintain checks and balances within the industry.

This development underscores the EU’s resolve to ensure fair competition and prevent market dominance from stifling innovation and consumer choice.

It is worth recalling Nvidia’s meteoric rise to the pinnacle of the tech industry. Founded in 1993, the US-based giant has grown from a graphics chip manufacturer into a leader in AI, data centres, and autonomous vehicles. Its products power some of the most advanced computing systems in the world, and its influence extends across multiple industries.

Nvidia’s graphics processing units (GPUs) are essential for AI and machine learning applications, driving the next wave of technological advancement. This strategic positioning has catapulted Nvidia’s market valuation, surpassing tech giants like Apple and Microsoft.

However, with great power comes great responsibility—and scrutiny. According to recent reports, French antitrust regulators are poised to charge Nvidia with anti-competitive practices. The investigation centres on allegations that Nvidia has leveraged its dominant market position to stifle competition and maintain its supremacy in the tech industry.

The French authorities’ move is part of a broader trend of increasing regulatory scrutiny of tech giants worldwide. Governments and regulatory bodies are increasingly wary of the outsized influence and market power of companies like Nvidia. In Europe, where antitrust laws are particularly stringent, regulators are keen to ensure a level playing field and protect consumer interests.

Potential implications

If the charges are upheld, Nvidia could face substantial fines and be forced to alter its business practices. Though potentially significant, the financial penalties might not be the most critical aspect of the investigation. The operational changes imposed on Nvidia could be more consequential, impacting its competitive edge and market strategy.

In short, the stakes are high for Nvidia. The company’s leadership in AI and other cutting-edge technologies relies on its ability to innovate and dominate the market. Regulatory constraints could slow its momentum and allow competitors to catch up. Moreover, the scrutiny could extend beyond France, prompting investigations in other jurisdictions and creating a ripple effect across the global tech industry.

Nvidia’s situation is not unique. Tech giants worldwide are facing similar challenges as regulators grapple with the complexities of the digital economy. In recent years, companies like Google, Amazon, and Facebook have also been targets of antitrust investigations and regulatory actions.

This points to a widening consensus on the need to balance innovation with fair competition. While tech companies drive economic growth and technological progress, their market dominance can threaten competition and consumer choice. Regulators are tasked with striking this balance, ensuring that the benefits of technological advancement are widely shared without stifling innovation.

Notably, in September 2023, French antitrust authorities raided companies suspected of engaging in anti-competitive practices related to graphics card products. While the authorities did not name those involved, Nvidia has since confirmed that it is among the companies being investigated in France over its business practices.

Nvidia said in a February filing that officials in the US, European Union, China, and the UK are also scrutinising its operations. “Our position in markets relating to AI has led to increased interest in our business from regulators worldwide,” the chipmaker said.

In fact, according to a Bloomberg report, French antitrust authorities have already been interviewing market participants about Nvidia’s central role in pricing amid an acute chip shortage, and how this affects prices. The office raid, the report noted, “was designed to gather additional knowledge regarding possible anti-competitive practices.”

What is next for Nvidia and the French regulators?

Nvidia is likely to mount a robust defence: the AI chip giant has consistently argued that its business practices are competitive and that its innovations benefit consumers and industries alike. Nvidia will likely emphasise its contributions to technological progress and economic growth, positioning itself as a driver of positive change rather than a monopolistic force.

However, regulators and the public may see things differently. The challenge for Nvidia is clear: to continue its trajectory of success while addressing the concerns of regulators and stakeholders. Ultimately, Nvidia’s response to this regulatory challenge could define its legacy as the world’s most valuable company, demonstrating whether it can uphold its leadership position while adapting to the evolving demands of a fair and competitive market.

See also: NVIDIA unveils Blackwell architecture to power next GenAI wave 

