safety Archives - AI News

Ursula von der Leyen: AI race ‘is far from over’ (11 February 2025)

Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris.

While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths to carve a leading role for itself.

“This is the third summit on AI safety in just over one year,” von der Leyen remarked. “In the same period, three new generations of ever more powerful AI models have been released. Some expect models that will approach human reasoning within a year’s time.”

The European Commission President set the tone of the event by contrasting the groundwork laid in previous summits with the urgency of this one.

“Past summits focused on laying the groundwork for AI safety. Together, we built a shared consensus that AI will be safe, that it will promote our values and benefit humanity. But this Summit is focused on action. And that is exactly what we need right now.”

As the world witnesses AI’s disruptive power, von der Leyen urged Europe to “formulate a vision of where we want AI to take us, as society and as humanity.” Growing adoption, “in the key sectors of our economy, and for the key challenges of our times,” provides a golden opportunity for the continent to lead, she argued.

The case for a European approach to the AI race 

Von der Leyen rejected notions that Europe has fallen behind its global competitors.

“Too often, I hear that Europe is late to the race – while the US and China have already got ahead. I disagree,” she stated. “The frontier is constantly moving. And global leadership is still up for grabs.”

Instead of replicating what other regions are doing, she called for doubling down on Europe’s unique strengths to define the continent’s distinct approach to AI.

“Too often, I have heard that we should replicate what others are doing and run after their strengths,” she said. “I think that instead, we should invest in what we can do best and build on our strengths here in Europe, which are our science and technology mastery that we have given to the world.”

Von der Leyen defined three pillars of the so-called “European brand of AI” that sets it apart: 1) focusing on high-complexity, industry-specific applications, 2) taking a cooperative, collaborative approach to innovation, and 3) embracing open-source principles.

“This summit shows there is a distinct European brand of AI,” she asserted. “It is already driving innovation and adoption. And it is picking up speed.”

Accelerating innovation: AI factories and gigafactories  

To maintain its competitive edge, Europe must supercharge its AI innovation, von der Leyen stressed.

A key component of this strategy lies in its computational infrastructure. Europe already boasts some of the world’s fastest supercomputers, which are now being leveraged through the creation of “AI factories.”

“In just a few months, we have set up a record of 12 AI factories,” von der Leyen revealed. “And we are investing €10 billion in them. This is not a promise—it is happening right now, and it is the largest public investment for AI in the world, which will unlock over ten times more private investment.”

Beyond these initial steps, von der Leyen unveiled an even more ambitious initiative. AI gigafactories, built on the scale of CERN’s Large Hadron Collider, will provide the infrastructure needed for training AI systems at unprecedented scales. They aim to foster collaboration between researchers, entrepreneurs, and industry leaders.

“We provide the infrastructure for large computational power,” von der Leyen explained. “Talents of the world are welcome. Industries will be able to collaborate and federate their data.”

The cooperative ethos underpinning AI gigafactories is part of a broader European push to balance competition with collaboration.

“AI needs competition but also collaboration,” she emphasised, highlighting that the initiative will serve as a “safe space” for these cooperative efforts.

Building trust with the AI Act

Crucially, von der Leyen reiterated Europe’s commitment to making AI safe and trustworthy. She pointed to the EU AI Act as the cornerstone of this strategy, framing it as a harmonised framework to replace fragmented national regulations across member states.

“The AI Act [will] provide one single set of safety rules across the European Union – 450 million people – instead of 27 different national regulations,” she said, before acknowledging businesses’ concerns about regulatory complexities.

“At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”

€200 billion to remain in the AI race

Financing such ambitious plans naturally requires significant resources. Von der Leyen praised the recently launched EU AI Champions Initiative, which has already pledged €150 billion from providers, investors, and industry.

During her speech at the summit, von der Leyen announced the Commission’s complementary InvestAI initiative that will bring in an additional €50 billion. Altogether, the result is mobilising a massive €200 billion in public-private AI investments.

“We will have a focus on industrial and mission-critical applications,” she said. “It will be the largest public-private partnership in the world for the development of trustworthy AI.”

Ethical AI is a global responsibility

Von der Leyen closed her address by framing Europe’s AI ambitions within a broader, humanitarian perspective, arguing that ethical AI is a global responsibility.

“Cooperative AI can be attractive well beyond Europe, including for our partners in the Global South,” she proclaimed, extending a message of inclusivity.

Von der Leyen expressed full support for the AI Foundation launched at the summit, highlighting its mission to ensure widespread access to AI’s benefits.

“AI can be a gift to humanity. But we must make sure that benefits are widespread and accessible to all,” she remarked.

“We want AI to be a force for good. We want an AI where everyone collaborates and everyone benefits. That is our path – our European way.”

See also: AI Action Summit: Leaders call for unity and equitable development


AI Action Summit: Leaders call for unity and equitable development (10 February 2025)

As the 2025 AI Action Summit kicks off in Paris, global leaders, industry experts, and academics are converging to address the challenges and opportunities presented by AI.

Against the backdrop of rapid technological advancements and growing societal concerns, the summit aims to build on the progress made since the 2024 Seoul Safety Summit and establish a cohesive global framework for AI governance.  

AI Action Summit is ‘a wake-up call’

French President Emmanuel Macron has described the summit as “a wake-up call for Europe,” emphasising the need for collective action in the face of AI’s transformative potential. This comes as the US has committed $500 billion to AI infrastructure.

The UK, meanwhile, has unveiled its AI Opportunities Action Plan ahead of the full implementation of the UK AI Act. Ahead of the AI Summit, UK tech minister Peter Kyle told The Guardian the AI race must be led by “western, liberal, democratic” countries.

These developments signal a renewed global dedication to harnessing AI’s capabilities while addressing its risks.  

Matt Cloke, CTO at Endava, highlighted the importance of bridging the gap between AI’s potential and its practical implementation.


“Much of the conversation is set to focus on understanding the risks involved with using AI while helping to guide decision-making in an ever-evolving landscape,” he said.  

Cloke also stressed the role of organisations in ensuring AI adoption goes beyond regulatory frameworks.

“Modernising core systems enables organisations to better harness AI while ensuring regulatory compliance,” he explained.

“With improved data management, automation, and integration capabilities, these systems make it easier for organisations to stay agile and quickly adapt to impending regulatory changes.”  

Governance and workforce among critical AI Action Summit topics

Kit Cox, CTO and Founder of Enate, outlined three critical areas for the summit’s agenda.


“First, AI governance needs urgent clarity,” he said. “We must establish global guidelines to ensure AI is safe, ethical, and aligned across nations. A disconnected approach won’t work; we need unity to build trust and drive long-term progress.”

Cox also emphasised the need for a future-ready workforce.

“Employers and governments must invest in upskilling the workforce for an AI-driven world,” he said. “This isn’t just about automation replacing jobs; it’s about creating opportunities through education and training that genuinely prepare people for the future of work.”  

Finally, Cox called for democratising AI’s benefits.

“AI must be fair and democratic both now and in the future,” he said. “The benefits can’t be limited to a select few. We must ensure that AI’s power reaches beyond Silicon Valley to all corners of the globe, creating opportunities for everyone to thrive.”  

Developing AI in the public interest

Professor Gina Neff, Professor of Responsible AI at Queen Mary University of London and Executive Director at Cambridge University’s Minderoo Centre for Technology & Democracy, stressed the importance of making AI relatable to everyday life.


“For us in civil society, it’s essential that we bring imaginaries about AI into the everyday,” she said. “From the barista who makes your morning latte to the mechanic fixing your car, they all have to understand how AI impacts them and, crucially, why AI is a human issue.”  

Neff also pushed back against big tech’s dominance in AI development.

“I’ll be taking this spirit of public interest into the Summit and pushing back against big tech’s push for hyperscaling. Thinking about AI as something we’re building together – like we do our cities and local communities – puts us all in a better place.”

Addressing bias and building equitable AI

Professor David Leslie, Professor of Ethics, Technology, and Society at Queen Mary University of London, highlighted the unresolved challenges of bias and diversity in AI systems.

“Over a year after the first AI Safety Summit at Bletchley Park, only incremental progress has been made to address the many problems of cultural bias and toxic and imbalanced training data that have characterised the development and use of Silicon Valley-led frontier AI systems,” he said.


Leslie called for a renewed focus on public interest AI.

“The French AI Action Summit promises to refocus the conversation on AI governance to tackle these and other areas of immediate risk and harm,” he explained. “A main focus will be to think about how to advance public interest AI for all through mission-driven and society-led funding.”  

He proposed the creation of a public interest AI foundation, supported by governments, companies, and philanthropic organisations.

“This type of initiative will have to address issues of algorithmic and data biases head on, at concrete and practice-based levels,” he said. “Only then can it stay true to the goal of making AI technologies – and the infrastructures upon which they depend – accessible global public goods.”  

Systematic evaluation  

Professor Maria Liakata, Professor of Natural Language Processing at Queen Mary University of London, emphasised the need for rigorous evaluation of AI systems.


“AI has the potential to make public service more efficient and accessible,” she said. “But at the moment, we are not evaluating AI systems properly. Regulators are currently on the back foot with evaluation, and developers have no systematic way of offering the evidence regulators need.”  

Liakata called for a flexible and systematic approach to AI evaluation.

“We must remain agile and listen to the voices of all stakeholders,” she said. “This would give us the evidence we need to develop AI regulation and help us get there faster. It would also help us get better at anticipating the risks posed by AI.”  

AI in healthcare: Balancing innovation and ethics

Dr Vivek Singh, Lecturer in Digital Pathology at Barts Cancer Institute, Queen Mary University of London, highlighted the ethical implications of AI in healthcare.


“The Paris AI Action Summit represents a critical opportunity for global collaboration on AI governance and innovation,” he said. “I hope to see actionable commitments that balance ethical considerations with the rapid advancement of AI technologies, ensuring they benefit society as a whole.”  

Singh called for clear frameworks for international cooperation.

“A key outcome would be the establishment of clear frameworks for international cooperation, fostering trust and accountability in AI development and deployment,” he said.  

AI Action Summit: A pivotal moment

The 2025 AI Action Summit in Paris represents a pivotal moment for global AI governance. With calls for unity, equity, and public interest at the forefront, the summit aims to address the challenges of bias, regulation, and workforce readiness while ensuring AI’s benefits are shared equitably.

As world leaders and industry experts converge, the hope is that actionable commitments will pave the way for a more inclusive and ethical AI future.

(Photo by Jorge Gascón)

See also: EU AI Act: What businesses need to know as regulations go live


MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption (4 December 2024)

The Medicines and Healthcare products Regulatory Agency (MHRA) has announced the selection of five healthcare technologies for its ‘AI Airlock’ scheme.

AI Airlock aims to refine the process of regulating AI-driven medical devices and help fast-track their safe introduction to the UK’s National Health Service (NHS) and patients in need.

The technologies chosen for this scheme include solutions targeting cancer and chronic respiratory diseases, as well as advancements in radiology diagnostics. These AI systems promise to revolutionise the accuracy and efficiency of healthcare, potentially driving better diagnostic tools and patient care.

The AI Airlock, as described by the MHRA, is a “sandbox” environment—an experimental framework designed to help manufacturers determine how best to collect real-world evidence to support the regulatory approval of their devices.

Unlike traditional medical devices, AI models continue to evolve through learning, making the establishment of safety and efficacy evidence more complex. The Airlock enables this exploration within a monitored virtual setting, giving developers insight into the practical challenges of regulation while supporting the NHS’s broader adoption of transformative AI technologies.

Safely enabling AI healthcare innovation  

Laura Squire, the lead figure in MedTech regulatory reform and Chief Officer at the MHRA, said: “New AI medical devices have the potential to increase the accuracy of healthcare decisions, save time, and improve efficiency—leading to better outcomes for the NHS and patients across all healthcare settings. 

“But we need to be confident that AI-powered medical devices introduced into the NHS are safe, stay safe, and perform as intended through their lifetime of use.”

Squire emphasised that the AI Airlock pilot allows collaboration “in partnership with technology specialists, developers and the NHS,” facilitating the exploration of best practices and accelerating safe patient access to innovative solutions.

Government representatives have praised the initiative for its forward-thinking framework.

Karin Smyth, Minister of State for Health, commented: “As part of our 10-Year Health Plan, we’re shifting NHS care from analogue to digital, and this project will help bring the most promising technology to patients.

“AI has the power to revolutionise care by supporting doctors to diagnose diseases, automating time-consuming admin tasks, and reducing hospital admissions by predicting future ill health.”

Science Minister Lord Vallance lauded the AI Airlock pilot as “a great example of government working with businesses to enable them to turn ideas into products that improve lives.” He added, “This shows how good regulation can facilitate emerging technologies for the benefit of the UK and our economy.”

Selected technologies  

The deployment of AI-powered medical devices requires meeting stringent criteria to ensure innovation, patient benefits, and regulatory challenge readiness. The five technologies selected for this inaugural pilot offer vital insights into healthcare’s future: 

  1. Lenus Stratify

Patients with Chronic Obstructive Pulmonary Disease (COPD) are among those who stand to benefit significantly from AI innovation. Lenus Stratify, developed by Lenus Health, analyses patient data to predict severe lung disease outcomes, reducing unscheduled hospital admissions. The system empowers care providers to adopt earlier interventions, affording patients an improved quality of life while alleviating NHS resource strain.  

  2. Philips Radiology Reporting Enhancer

Philips has integrated AI into existing radiology workflows to enhance the efficiency and accuracy of critical radiology reports. This system uses AI to prepare the “Impression” section of reports, summarising essential diagnostic information for healthcare providers. By automating this process, Philips aims to reduce workload, human error, and miscommunication, creating a more seamless diagnostic experience.

  3. Federated AI Monitoring Service (FAMOS)

One recurring AI challenge is “drift”: changing real-world conditions that impair system performance over time. Newton’s Tree has developed FAMOS to monitor AI models in real time, flagging degradation and enabling rapid corrections. Hospitals, regulators, and software developers can use this tool to ensure algorithms remain high-performing, adapting to evolving circumstances while prioritising patient safety. (A minimal sketch of this kind of drift monitoring appears after this list.)

  4. OncoFlow Personalised Cancer Management

Targeting the pressing challenge of reducing waiting times for cancer treatment, OncoFlow speeds up clinical workflows through its intelligent care pathway platform. Initially applied to breast cancer protocols, the system aims to expand across other oncology domains. Quicker access to tailored therapies should improve survival rates amid mounting NHS pressures.

  5. SmartGuideline

Developed to simplify complex clinical decision-making, SmartGuideline uses a large language model trained on official NICE medical guidelines. This technology allows clinicians to ask routine questions and receive verified, precise answers, reducing the ambiguity associated with general-purpose AI language models. By integrating this tool, patients benefit from more accurate treatments grounded in up-to-date medical knowledge.
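As flagged in the FAMOS entry above, the idea of drift monitoring can be shown with a minimal sketch: compare a rolling window of recent model performance against a fixed baseline and raise a flag when it degrades beyond a tolerance. This is an illustrative assumption about how such a monitor might work, not a description of Newton’s Tree’s actual system; the metric, window size, and thresholds are invented.

```python
from collections import deque

class DriftMonitor:
    """Flags performance drift against a fixed baseline (illustrative sketch only)."""

    def __init__(self, baseline_accuracy: float, window: int = 200, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.results: deque = deque(maxlen=window)  # rolling window of recent outcomes
        self.tolerance = tolerance

    def record(self, correct: bool) -> None:
        self.results.append(1.0 if correct else 0.0)

    def drifted(self) -> bool:
        # Only judge once the window holds enough data to be meaningful.
        if len(self.results) < self.results.maxlen:
            return False
        rolling = sum(self.results) / len(self.results)
        return rolling < self.baseline - self.tolerance

# In deployment, each verified prediction would feed the monitor, e.g.:
# monitor = DriftMonitor(baseline_accuracy=0.92)
# monitor.record(prediction == ground_truth)
# if monitor.drifted(): alert_clinical_safety_team()
```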

Broader implications  

The influence of the AI Airlock extends beyond its current applications. The MHRA expects pilot findings, due in 2025, to inform future medical device regulations and create a clearer path for manufacturers developing AI-enabled technologies. 

The evidence derived will contribute to shaping post-Brexit UKCA marking processes, helping manufacturers achieve compliance with higher levels of transparency. By improving regulatory frameworks, the UK could position itself as a global hub for med-tech innovation while ensuring faster access to life-saving tools.

The urgency of these developments was underscored earlier this year in Lord Darzi’s review of health and care. The report outlined the “critical state” of the NHS, offering AI interventions as a promising pathway to sustainability. The work on AI Airlock by the MHRA addresses one of the report’s major recommendations for enabling regulatory solutions and “unlocking the AI revolution” for healthcare advancements.

While selection for the AI Airlock pilot does not indicate regulatory approval, the technologies chosen represent a potential leap forward in applying AI to some of healthcare’s most pressing challenges. The coming years will test the potential of these solutions under regulatory scrutiny.

If successful, the initiative from the MHRA could redefine how pioneering technologies like AI are adopted in healthcare, balancing the need for speed, safety, and efficiency. With the NHS under immense pressure from growing demand, AI’s ability to augment clinicians, predict illnesses, and streamline workflows may well be the game-changer the system urgently needs.

(Photo by National Cancer Institute)

See also: AI’s role in helping to prevent skin cancer through behaviour change


UK establishes LASR to counter AI security threats (25 November 2024)

The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.”

The laboratory – which will receive initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to assess AI’s impact on national security. The announcement comes as part of a broader strategy to strengthen the UK’s cyber defence capabilities.

Speaking at the NATO Cyber Defence Conference at Lancaster House, Chancellor of the Duchy of Lancaster Pat McFadden said: “NATO needs to continue to adapt to the world of AI, because as the tech evolves, the threat evolves.

“NATO has stayed relevant over the last seven decades by constantly adapting to new threats. It has navigated the worlds of nuclear proliferation and militant nationalism. The move from cold warfare to drone warfare.”

The Chancellor painted a stark picture of the current cyber security landscape, stating: “Cyber war is now a daily reality. One where our defences are constantly being tested. The extent of the threat must be matched by the strength of our resolve to combat it and to protect our citizens and systems.”

The new laboratory will operate under a ‘catalytic’ model, designed to attract additional investment and collaboration from industry partners.

Key stakeholders in the new lab include GCHQ, the National Cyber Security Centre, the MOD’s Defence Science and Technology Laboratory, and prestigious academic institutions such as the University of Oxford and Queen’s University Belfast.

In a direct warning about Russia’s activities, the Chancellor declared: “Be in no doubt: the United Kingdom and others in this room are watching Russia. We know exactly what they are doing, and we are countering their attacks both publicly and behind the scenes.

“We know from history that appeasing dictators engaged in aggression against their neighbours only encourages them. Britain learned long ago the importance of standing strong in the face of such actions.”

Reaffirming support for Ukraine, he added, “Putin is a man who wants destruction, not peace. He is trying to deter our support for Ukraine with his threats. He will not be successful.”

The new lab follows recent concerns about state actors using AI to bolster existing security threats.

“Last year, we saw the US for the first time publicly call out a state for using AI to aid its malicious cyber activity,” the Chancellor noted, referring to North Korea’s attempts to use AI for malware development and vulnerability scanning.

Stephen Doughty, Minister for Europe, North America and UK Overseas Territories, highlighted the dual nature of AI technology: “AI has enormous potential. To ensure it remains a force for good in the world, we need to understand its threats and its opportunities.”

Alongside LASR, the government announced a new £1 million incident response project to enhance collaborative cyber defence capabilities among allies. The laboratory will prioritise collaboration with Five Eyes countries and NATO allies, building on the UK’s historical strength in computing, dating back to Alan Turing’s groundbreaking work.

The initiative forms part of the government’s comprehensive approach to cybersecurity, which includes the upcoming Cyber Security and Resilience Bill and the recent classification of data centres as critical national infrastructure.

(Photo by Erik Mclean)

See also: Anthropic urges AI regulation to avoid catastrophes


OpenAI enhances AI safety with new red teaming methods (22 November 2024)

A critical part of OpenAI’s safeguarding process is “red teaming” — a structured methodology using both human and AI participants to explore potential risks and vulnerabilities in new systems.

Historically, OpenAI has engaged in red teaming efforts predominantly through manual testing, which involves individuals probing for weaknesses. This was notably employed during the testing of their DALL·E 2 image generation model in early 2022, where external experts were invited to identify potential risks. Since then, OpenAI has expanded and refined its methodologies, incorporating automated and mixed approaches for a more comprehensive risk assessment.

“We are optimistic that we can use more powerful AI to scale the discovery of model mistakes,” OpenAI stated. This optimism is rooted in the idea that automated processes can help evaluate models and train them to be safer by recognising patterns and errors on a larger scale.

In their latest push for advancement, OpenAI is sharing two important documents on red teaming — a white paper detailing external engagement strategies and a research study introducing a novel method for automated red teaming. These contributions aim to strengthen the process and outcomes of red teaming, ultimately leading to safer and more responsible AI implementations.

As AI continues to evolve, understanding user experiences and identifying risks such as abuse and misuse are crucial for researchers and developers. Red teaming provides a proactive method for evaluating these risks, especially when supplemented by insights from a range of independent external experts. This approach not only helps establish benchmarks but also facilitates the enhancement of safety evaluations over time.

The human touch

OpenAI has shared four fundamental steps in their white paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” to design effective red teaming campaigns:

  1. Composition of red teams: The selection of team members is based on the objectives of the campaign. This often involves individuals with diverse perspectives, such as expertise in natural sciences, cybersecurity, and regional politics, ensuring assessments cover the necessary breadth.
  2. Access to model versions: Clarifying which versions of a model red teamers will access can influence the outcomes. Early-stage models may reveal inherent risks, while more developed versions can help identify gaps in planned safety mitigations.
  3. Guidance and documentation: Effective interactions during campaigns rely on clear instructions, suitable interfaces, and structured documentation. This involves describing the models, existing safeguards, testing interfaces, and guidelines for recording results.
  4. Data synthesis and evaluation: Post-campaign, the data is assessed to determine if examples align with existing policies or require new behavioural modifications. The assessed data then informs repeatable evaluations for future updates.
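To make these steps concrete, here is a minimal sketch of how a campaign and its findings might be captured as structured documentation. The schema, field names, and model identifier are assumptions for illustration; the white paper does not prescribe any particular format.

```python
from dataclasses import dataclass, field

@dataclass
class RedTeamCampaign:
    """Hypothetical record of an external red-teaming campaign."""
    objective: str                 # what the campaign is probing for
    team_expertise: list[str]      # step 1: composition of red teams
    model_version: str             # step 2: which model snapshot testers access
    guidance_docs: list[str]       # step 3: instructions and interfaces provided
    findings: list[dict] = field(default_factory=list)  # step 4: raw results

    def add_finding(self, prompt: str, output: str, violates_policy: bool) -> None:
        # Each logged example is later assessed against existing policies (step 4).
        self.findings.append({"prompt": prompt, "output": output,
                              "violates_policy": violates_policy})

campaign = RedTeamCampaign(
    objective="probe resistance to misuse in attack planning",
    team_expertise=["cybersecurity", "natural sciences", "regional politics"],
    model_version="early-stage checkpoint",  # hypothetical identifier
    guidance_docs=["model description", "testing interface guide"],
)
```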

A recent application of this methodology involved preparing the OpenAI o1 family of models for public use—testing their resistance to potential misuse and evaluating their application across various fields such as real-world attack planning, natural sciences, and AI research.

Automated red teaming

Automated red teaming seeks to identify instances where AI may fail, particularly regarding safety-related issues. This method excels at scale, generating numerous examples of potential errors quickly. However, traditional automated approaches have struggled with producing diverse, successful attack strategies.

OpenAI’s research introduces “Diverse And Effective Red Teaming With Auto-Generated Rewards And Multi-Step Reinforcement Learning,” a method which encourages greater diversity in attack strategies while maintaining effectiveness.

This method involves using AI to generate different scenarios, such as illicit advice, and training red teaming models to evaluate these scenarios critically. The process rewards diversity and efficacy, promoting more varied and comprehensive safety evaluations.
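The published method trains an attacker model with multi-step reinforcement learning, but the core reward shaping can be sketched simply: score each candidate attack on effectiveness and on novelty relative to attacks already found. Everything below (the random grading stub, the token-overlap novelty measure, and the weighting) is an illustrative assumption rather than OpenAI’s implementation.

```python
import random

def attack_success(candidate: str) -> float:
    """Stand-in grader: the real method uses an automated judge with
    auto-generated rewards to score whether an attack elicits unsafe output."""
    return random.random()  # placeholder score in [0, 1]

def diversity_bonus(candidate: str, previous: list[str]) -> float:
    """Reward attacks that differ from earlier ones, via crude token overlap."""
    if not previous:
        return 1.0
    tokens = set(candidate.split())
    overlaps = [len(tokens & set(p.split())) / max(len(tokens), 1) for p in previous]
    return 1.0 - max(overlaps)  # low overlap with the nearest prior attack => high bonus

def red_team_step(candidates: list[str], history: list[str], w: float = 0.5) -> str:
    """Select the candidate balancing effectiveness and novelty; in the RL
    setting this combined score would be the reward updating the attacker."""
    scored = [(attack_success(c) + w * diversity_bonus(c, history), c) for c in candidates]
    best = max(scored)[1]
    history.append(best)
    return best

history: list[str] = []
print(red_team_step(["direct illicit request", "role-play framing of the same request"], history))
```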

Despite its benefits, red teaming does have limitations. It captures risks at a specific point in time, which may evolve as AI models develop. Additionally, the red teaming process can inadvertently create information hazards, potentially alerting malicious actors to vulnerabilities not yet widely known. Managing these risks requires stringent protocols and responsible disclosures.

While red teaming continues to be pivotal in risk discovery and evaluation, OpenAI acknowledges the necessity of incorporating broader public perspectives on AI’s ideal behaviours and policies to ensure the technology aligns with societal values and expectations.

See also: EU introduces draft regulatory guidance for AI models


EU introduces draft regulatory guidance for AI models (15 November 2024)

The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.

The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:

  • Working Group 1: Transparency and copyright-related rules
  • Working Group 2: Risk identification and assessment for systemic risk
  • Working Group 3: Technical risk mitigation for systemic risk
  • Working Group 4: Governance risk mitigation for systemic risk

The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes into account international approaches, striving for proportionality to risks, and aims to be future-proof by contemplating rapid technological changes.

Key objectives outlined in the draft include:

  • Clarifying compliance methods for providers of general-purpose AI models
  • Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
  • Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
  • Continuously assessing and mitigating systemic risks associated with AI models

Recognising and mitigating systemic risks

A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.

As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
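The hierarchy described here (measures, sub-measures, and KPIs) maps naturally onto a nested structure. The sketch below is one hypothetical way a provider might encode a slice of an SSF; the draft prescribes no particular format, and the measures and targets shown are invented for illustration.

```python
# Hypothetical encoding of one safety and security framework (SSF) measure.
ssf_measure = {
    "measure": "systemic risk identification and mitigation",
    "sub_measures": [
        {
            "name": "pre-deployment risk evaluation",
            "kpis": [
                {"kpi": "share of taxonomy risk types evaluated", "target": ">= 95%"},
                {"kpi": "days from risk discovery to documented mitigation", "target": "<= 30"},
            ],
        },
        {
            "name": "post-deployment incident reporting",
            "kpis": [
                {"kpi": "serious incidents reported within the deadline", "target": "100%"},
            ],
        },
    ],
}
```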

The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.

Taking a proactive stance to AI regulatory guidance

The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.

As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.

While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.

This draft is open for written feedback until 28 November 2024. 

See also: Anthropic urges AI regulation to avoid catastrophes


Understanding AI’s impact on the workforce (8 November 2024)

The Tony Blair Institute (TBI) has published a report examining AI’s impact on the workforce. The report outlines AI’s potential to reshape work environments, boost productivity, and create opportunities—while warning of potential challenges ahead.

“Technology has a long history of profoundly reshaping the world of work,” the report begins.

From the agricultural revolution to the digital age, each wave of innovation has redefined labour markets. Today, AI presents a seismic shift, advancing rapidly and prompting policymakers to prepare for change.

Economic opportunities

The TBI report estimates that AI, when fully adopted by UK firms, could significantly increase productivity. It suggests that AI could save “almost a quarter of private-sector workforce time,” equivalent to the annual output of 6 million workers.
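For scale: the UK private sector employs roughly 27 million people, so a little under a quarter of that workforce’s time equates to the output of around six million workers, consistent with the report’s figure (the headcount here is an approximation, not a number taken from the report).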

Most of these time savings are expected to stem from AI-enabled software performing cognitive tasks such as data analysis and routine administrative operations.

The report identifies sectors reliant on routine cognitive tasks, such as banking and finance, as those with significant exposure to AI. However, sectors like skilled trades or construction – which involve complex manual tasks – are likely to see less direct impact.

While AI can result in initial job losses, it also has the potential to create new demand by fostering economic growth and new industries. 

The report expects these job losses to be balanced by new job creation. Historically, technology has spurred new employment opportunities, as innovation leads to the development of new products and services.

Shaping future generations

AI’s potential extends into education, where it could assist both teachers and students.

The report suggests that AI could help “raise educational attainment by around six percent” on average. By personalising and supporting learning, AI has the potential to equalise access to opportunities and improve the quality of the workforce over time.

Health and wellbeing

Beyond education, AI offers potential benefits in healthcare, supporting a healthier workforce and reducing welfare costs.

The report highlights AI’s role in speeding medical research, enabling preventive healthcare, and helping those with disabilities re-enter the workforce.

Workplace transformation

The report acknowledges potential workplace challenges, such as increased monitoring and stress from AI tools. It stresses the importance of managing these technologies thoughtfully to “deliver a more engaging, inclusive and safe working environment.”

To mitigate potential disruption, the TBI outlines recommendations. These include upgrading labour-market infrastructure and utilising AI for job matching.

The report suggests creating an “Early Awareness and Opportunity System” to help workers understand the impact of AI on their jobs and provide advice on career paths.

Preparing for an AI-powered future

In light of the uncertainties surrounding AI’s impact on the workforce, the TBI urges policy changes to maximise benefits. Recommendations include incentivising AI adoption across industries, developing AI-pathfinder programmes, and creating challenge prizes to address public-sector labour shortages.

The report concludes that while AI presents risks, the potential gains are too significant to ignore.

Policymakers are encouraged to adopt a “pro-innovation” stance while being attuned to the risks, fostering an economy that is dynamic and resilient.

(Photo by Mimi Thian)

See also: Anthropic urges AI regulation to avoid catastrophes


Anthropic urges AI regulation to avoid catastrophes (1 November 2024)

Anthropic has flagged the potential risks of AI systems and is calling for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.

As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.

Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offence-related tasks, and expects future models to be even more effective.

Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.

In addressing these risks, Anthropic points to its Responsible Scaling Policy (RSP), released in September 2023, as a robust countermeasure. The RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.

The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says its commitment to maintaining and enhancing safety spans team expansions, particularly in security, interpretability, and trust, ensuring readiness for the rigorous safety standards set by its RSP.

Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.

Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.

Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.

In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.

Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models. 

While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.

Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against internal and external threats.

By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.

(Image Credit: Anthropic)

See also: President Biden issues first National Security Memorandum on AI


AI sector study: Record growth masks serious challenges (24 October 2024)

A comprehensive AI sector study – conducted by the Department for Science, Innovation and Technology (DSIT) in collaboration with Perspective Economics, Ipsos, and glass.ai – provides a detailed overview of the industry’s current state and its future prospects.

In this article, we delve deeper into the key findings and implications—drawing on additional sources to enhance our understanding.

Thriving industry with significant growth

The study highlights the remarkable growth of the UK’s AI sector. With over 3,170 active AI companies, these firms have generated £10.6 billion in AI-related revenues and employed more than 50,000 people in AI-related roles. This significant contribution to GVA (Gross Value Added) underscores the sector’s transformative potential in driving the UK’s economic growth.

Mark Boost, CEO of Civo, said: “In a space that’s been dominated by US companies for too long, it’s promising to see the government now stepping up to help support the UK AI sector on the global stage.”

The study shows that AI activity is dispersed across various regions of the UK, with notable concentrations in London, the South East, and Scotland. This regional dispersion indicates a broad scope for the development of AI technology applications across different sectors and regions.

Investment and funding

Investment in the AI sector has been a key driver of growth. As of 2022, £18.8 billion in private investment had been secured since 2016, spread across 52 unique industry sectors compared with 35 in 2016.

The government’s commitment to supporting AI is evident through significant investments. In 2022, the UK government unveiled a National AI Strategy and Action Plan—committing over £1.3 billion in support for the sector, complementing the £2.8 billion already invested.

However, as Boost cautions, “Major players like AWS are locking AI startups into their ecosystems with offerings like $500k cloud credits, ensuring that emerging companies start their journey reliant on their infrastructure. This not only hinders competition and promotes vendor lock-in but also risks stifling innovation across the broader UK AI ecosystem.”

Addressing bottlenecks

Despite the growth and investment, several bottlenecks must be addressed to fully harness the potential of AI:

  • Infrastructure: The UK’s digital technology infrastructure is less advanced than many other countries. This bottleneck includes inadequate data centre infrastructure and a dependent supply of powerful GPU computer chips. Boost emphasises this concern, stating “It would be dangerous for the government to ignore the immense compute power that AI relies on. We need to consider where this power is coming from and the impact it’s having on both the already over-concentrated cloud market and the environment.”
  • Commercial awareness: Many SMEs lack familiarity with digital technology. Almost a third (31%) of SMEs have yet to adopt the cloud, and nearly half (47%) do not currently use AI tools or applications.
  • Skills shortage: Two-fifths of businesses struggle to find staff with good digital skills, including traditional digital roles like data analytics or IT. There is a rising need for workers with new AI-specific skills, such as prompt engineering, that will require retraining and upskilling opportunities.

To address these bottlenecks, the government has implemented several initiatives:

  • Private sector investment: Microsoft has announced a £2.5 billion investment in AI skills, security, and data centre infrastructure, aiming to procure more than 20,000 of the most advanced GPUs by 2026.
  • Government support: The government has invested £1.5 billion in computing capacity and committed to building three new supercomputers by 2025. This support aims to enhance the UK’s infrastructure to stay competitive in the AI market.
  • Public sector integration: The UK Government Digital Service (GDS) is working to improve efficiency by using predictive algorithms to model future pension scheme behaviour. HMRC uses AI to help identify call centre priorities, demonstrating how AI solutions can address complex public sector challenges.

Future prospects and challenges

The future of the UK AI sector is both promising and challenging. While significant economic gains are predicted, including boosting GDP by £550 billion by 2035, delays in AI roll-out could cost the UK £150 billion over the same period. Ensuring a balanced approach between innovation and regulation will be crucial.

Boost emphasises the importance of data sovereignty and privacy: “Businesses have grown increasingly wary of how their data is collected, stored, and used by the likes of ChatGPT. The government has a real opportunity to enable the UK AI sector to offer viable alternatives.

“The forthcoming AI Action Plan will be another opportunity to identify how AI can drive economic growth and better support the UK tech sector.”

Beyond the economic projections, the study and wider commentary point to two further challenges:

  • AI Safety Summit: The AI Safety Summit at Bletchley Park highlighted the need for responsible AI development. The “Bletchley Declaration on AI Safety” emphasises the importance of ensuring AI tools are transparent, fair, and free from bias to maintain public trust and realise AI’s benefits in public services.
  • Cybersecurity challenges: As AI systems handle sensitive or personal information, ensuring their security is paramount. This involves protecting against cyber threats, securing algorithms from manipulation, safeguarding data centres and hardware, and ensuring supply chain security.

The AI sector study underscores a thriving industry with significant growth potential. However, it also highlights several bottlenecks that must be addressed – infrastructure gaps, lack of commercial awareness, and skills shortages – to fully harness the sector’s potential.

(Photo by John Noonan)

See also: EU AI Act: Early prep could give businesses competitive edge

SolarWinds: IT professionals want stronger AI regulation

A new survey from SolarWinds has unveiled a resounding call for increased government oversight of AI, with 88% of IT professionals advocating for stronger regulation.

The study, which polled nearly 700 IT experts, highlights security as the paramount concern. An overwhelming 72% of respondents emphasised the critical need for measures to secure infrastructure. Privacy follows closely behind, with 64% of IT professionals urging for more robust rules to protect sensitive information.

Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, commented: “It is understandable that IT leaders are approaching AI with caution. As technology rapidly evolves, it naturally presents challenges typical of any emerging innovation.

“Security and privacy remain at the forefront, with ongoing scrutiny by regulatory bodies. However, it is incumbent upon organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts. This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”

The survey’s findings come at a pivotal moment, coinciding with the implementation of the EU’s AI Act. In the UK, the new Labour government recently proposed its own AI legislation during the latest King’s speech, signalling a growing recognition of the need for regulatory frameworks. In the US, the California State Assembly passed a controversial AI safety bill last month.

Beyond security and privacy, the survey reveals a broader spectrum of concerns amongst IT professionals. A majority (55%) believe government intervention is crucial to stem the tide of AI-generated misinformation. Additionally, half of the respondents support regulations aimed at ensuring transparency and ethical practices in AI development.

Challenges extend beyond AI regulation

However, the challenges facing AI adoption extend beyond regulatory concerns. The survey uncovers a troubling lack of trust in data quality—a cornerstone of successful AI implementation.

Only 38% of respondents consider themselves ‘very trusting’ of the data quality and training used in AI systems. This scepticism is not unfounded, as 40% of IT leaders who have encountered issues with AI attribute these problems to algorithmic errors stemming from insufficient or biased data.

Consequently, data quality emerges as the second most significant barrier to AI adoption (cited by 16% of respondents), trailing only security and privacy risks. This finding underscores the critical importance of robust, unbiased datasets in driving AI success.

“High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes,” adds Johnson. “Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies.”
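Johnson’s point lends itself to concrete practice. As a minimal sketch of the kind of pre-training data audit he alludes to – assuming a tabular dataset loaded into a pandas DataFrame, with hypothetical column names and thresholds – a few lines of Python can flag the insufficient or biased data that respondents blamed for algorithmic errors:

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, label_col: str = "label") -> list[str]:
    """Basic pre-training checks; returns human-readable warnings."""
    warnings = []

    # Insufficient data: columns with many missing values weaken any model.
    missing = df.isna().mean()
    for col, frac in missing[missing > 0.05].items():
        warnings.append(f"{col}: {frac:.1%} missing values")

    # Duplicate rows silently over-weight some records during training.
    dup_frac = df.duplicated().mean()
    if dup_frac > 0.01:
        warnings.append(f"{dup_frac:.1%} of rows are duplicates")

    # A heavily skewed label distribution is one simple signal of biased data.
    class_share = df[label_col].value_counts(normalize=True)
    if class_share.min() < 0.10:
        warnings.append(f"label imbalance: rarest class is {class_share.min():.1%} of rows")

    return warnings

# Hypothetical usage: audit a dataset before it ever reaches a training job.
if __name__ == "__main__":
    df = pd.read_csv("training_data.csv")
    for warning in audit_training_data(df):
        print("WARNING:", warning)
```

Checks this cheap are no substitute for deeper bias analysis, but they are exactly the sort of routine data hygiene Johnson recommends organisations enforce.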

The survey also sheds light on widespread concerns about database readiness. Less than half (43%) of IT professionals express confidence in their company’s ability to meet the increasing data demands of AI. This lack of preparedness is further exacerbated by the perception that organisations are not moving swiftly enough to implement AI, with 46% of respondents citing ongoing data quality challenges as a contributing factor.

As AI continues to reshape the technological landscape, the findings of this SolarWinds survey serve as a clarion call for both stronger regulation and improved data practices. The message from IT professionals is clear: while AI holds immense promise, its successful integration hinges on addressing critical concerns around security, privacy, and data quality.

(Photo by Kelly Sikkema)

See also: Whitepaper dispels fears of AI-induced job losses

UK signs AI safety treaty to protect human rights and democracy

The UK has signed a landmark AI safety treaty aimed at protecting human rights, democracy, and the rule of law from potential threats posed by the technology.

Lord Chancellor Shabana Mahmood signed the Council of Europe’s AI convention today as part of a united global approach to managing the risks and opportunities. 

“Artificial intelligence has the capacity to radically improve the responsiveness and effectiveness of public services, and turbocharge economic growth,” said Lord Chancellor Mahmood.

“However, we must not let AI shape us—we must shape AI. This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.”

The treaty acknowledges the potential benefits of AI – such as its ability to boost productivity and improve healthcare – whilst simultaneously addressing concerns surrounding misinformation, algorithmic bias, and data privacy. It will compel signatory nations to monitor AI development, implement strict regulations, and actively combat any misuse of the technology that could harm public services or individuals.

Keiron Holyome, VP UKI & Emerging Markets at BlackBerry, commented: “To truly outrun cybercriminals and maintain a defensive advantage, robust frameworks for AI governance and ethical standards must be established, ensuring responsible use and mitigating risks.

“The first legally binding international AI treaty is another step towards such recommendations for both AI caution and applications for good. Collaboration between governments, industry leaders, and academia will be increasingly essential for sharing knowledge, developing best practices, and responding to emerging threats collectively.”

Crucially, the convention acts as a framework to enhance existing legislation in the UK. For example, aspects of the Online Safety Act will be bolstered to better address the risk of AI systems using biased data to generate unfair outcomes. 

The agreement focuses on three key safeguards:

  • Protecting human rights: Ensuring individuals’ data is used responsibly, their privacy is respected, and AI systems are free from discrimination. 
  • Protecting democracy: Requiring countries to take proactive steps to prevent AI from being used to undermine public institutions and democratic processes.
  • Protecting the rule of law: Placing an obligation on signatory countries to establish robust AI-specific regulations, shield their citizens from potential harm, and ensure responsible AI deployment.

While the convention initially focuses on Council of Europe members, other nations – including the US and Australia – are being invited to join this international effort to ensure responsible AI development and deployment.  

Peter Kyle, Secretary of State for Science, Innovation, and Technology, commented: “AI holds the potential to be the driving force behind new economic growth, a productivity revolution and true transformation in our public services, but that ambition can only be achieved if people have faith and trust in the innovations which will bring about that change.

“The convention we’ve signed today alongside global partners will be key to that effort. Once in force, it will further enhance protections for human rights, rule of law, and democracy—strengthening our own domestic approach to the technology while furthering the global cause of safe, secure, and responsible AI.”

The UK Government has pledged to collaborate closely with domestic regulators, devolved administrations, and local authorities to ensure seamless implementation of the treaty’s requirements once it is ratified.

The signing of the convention builds on the UK’s previous efforts in responsible AI, including hosting the AI Safety Summit, co-hosting the AI Seoul Summit, and establishing the world’s first AI Safety Institute.

See also: UK adjusts AI strategy to navigate budget constraints

Balancing innovation and trust: Experts assess the EU’s AI Act

As the EU’s AI Act prepares to come into force tomorrow, industry experts are weighing in on its potential impact, highlighting its role in building trust and encouraging responsible AI adoption.

Curtis Wilson, Staff Data Engineer at Synopsys’ Software Integrity Group, believes the new regulation could be a crucial step in addressing the AI industry’s most pressing challenge: building trust.

“The greatest problem facing AI developers is not regulation, but a lack of trust in AI,” Wilson stated. “For an AI system to reach its full potential, it needs to be trusted by the people who use it.”

This sentiment is echoed by Paul Cardno, Global Digital Automation & Innovation Senior Manager at 3M, who noted, “With nearly 80% of UK adults now believing AI needs to be heavily regulated, the introduction of the EU’s AI Act is something that businesses have long been waiting for.”

Both experts emphasise the Act’s potential to foster confidence in AI technologies. Wilson explained that while his company has implemented internal measures to build trust, external regulation is equally important.

“I see regulatory frameworks like the EU AI Act as an essential component to building trust in AI,” Wilson said. “The strict rules and punishing fines will deter careless developers and help customers feel more confident in trusting and using AI systems.”

Cardno added, “We know that AI is shaping the future, but companies will only be able to reap the rewards if they have the confidence to rethink existing processes and break away from entrenched structures.”

The EU AI Act primarily focuses on high-risk systems and foundational models. Wilson noted that many of its requirements align with existing best practices in data science, such as risk management, testing procedures, and comprehensive documentation.
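To make that overlap tangible, the sketch below shows one common documentation practice: a machine-readable model card that travels with each model release. The fields and the example system are hypothetical illustrations of the genre, not a schema prescribed by the Act:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ModelCard:
    """Minimal machine-readable model documentation, in the spirit of
    common model-card practice; fields are illustrative, not a legal schema."""
    name: str
    version: str
    intended_use: str
    risk_category: str  # e.g. how the provider classifies the system under the Act
    training_data_summary: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    last_reviewed: str = field(default_factory=lambda: date.today().isoformat())

# Hypothetical example: document a claims-triage model alongside its release.
card = ModelCard(
    name="claims-triage-model",
    version="1.4.0",
    intended_use="Prioritise incoming insurance claims for human review",
    risk_category="high-risk",
    training_data_summary="Anonymised 2019-2023 claims records; see data sheet",
    evaluation_metrics={"accuracy": 0.91, "false_negative_rate": 0.04},
    known_limitations=["Not validated for commercial policies"],
)

# Persisting the card with the model artefact keeps documentation current for
# every version, rather than something reconstructed at audit time.
print(json.dumps(asdict(card), indent=2))
```

Teams already keeping records of this kind should, as Wilson suggests, find the Act’s documentation obligations familiar rather than burdensome.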

The EU AI Act’s impact on UK businesses extends beyond those directly selling into EU markets.

Wilson pointed out that certain aspects of the Act may apply to Northern Ireland due to the Windsor Framework. Additionally, the UK government is developing its own AI regulations, with a recent whitepaper emphasising interoperability with EU and US regulations.

“While the EU Act isn’t perfect, and needs to be assessed in relation to other global regulations, having a clear framework and guidance on AI from one of the world’s major economies will help encourage those who remain on the fence to tap into the AI revolution,” Cardno explained.

While acknowledging that the new regulations may create some friction, particularly around registration and certification, Wilson emphasised that many of the Act’s obligations are already standard practice for responsible companies. However, he recognised that small companies and startups might face greater challenges.

“Small companies and start-ups will experience issues more strongly,” Wilson said. “The regulation acknowledges this and has included provisions for sandboxes to foster AI innovation for these smaller businesses.”

However, Wilson notes that these sandboxes will be established at the national level by individual EU member states, potentially limiting access for UK businesses.

As the AI landscape continues to evolve, the EU AI Act represents a significant step towards establishing a framework for responsible AI development and deployment.

“Having a clear framework and guidance on AI from one of the world’s major economies will help encourage those who remain on the fence to tap into the AI revolution, ensuring it has a safe, positive ongoing influence for all organisations operating across the EU, which can only be a promising step forwards for the industry,” concludes Cardno.

(Photo by Guillaume Périgois)

See also: UAE blocks US congressional meetings with G42 amid AI transfer concerns
