politics Archives - AI News

BCG: Analysing the geopolitics of generative AI (11 April 2025)

Generative AI is reshaping global competition and geopolitics, presenting challenges and opportunities for nations and businesses alike.

Senior figures from Boston Consulting Group (BCG) and its tech division, BCG X, discussed the intricate dynamics of the global AI race, the dominance of superpowers like the US and China, the role of emerging “middle powers,” and the implications for multinational corporations.

AI investments expose businesses to increasingly tense geopolitics

Sylvain Duranton, Global Leader at BCG X, noted the significant geopolitical risk companies face: “For large companies, close to half of them, 44%, have teams around the world, not just in one country where their headquarters are.”

Sylvain Duranton, Global Leader at BCG X

Many of these businesses operate across numerous countries, making them vulnerable to differing regulations and sovereignty issues. “They’ve built their AI teams and ecosystem far before there was such tension around the world.”

Duranton also pointed to the stark imbalance in the AI supply race, particularly in investment.

A comparison of tech companies’ market capitalisation shows the US dwarfing Europe by a factor of 20, and the Asia Pacific region by five. Investment figures paint a similar picture, showing a “completely disproportionate” imbalance relative to the sizes of the economies involved.

This AI race is fuelled by massive investments in compute power and frontier models, and by the emergence of lighter, open-weight models that are changing the competitive dynamic.

Benchmarking national AI capabilities

Nikolaus Lang, Global Leader at the BCG Henderson Institute – BCG’s think tank – detailed the extensive research undertaken to benchmark national GenAI capabilities objectively.

The team analysed the “upstream of GenAI,” focusing on large language model (LLM) development and its six key enablers: capital, computing power, intellectual property, talent, data, and energy.

Using hard data such as AI researcher numbers, patents, data centre capacity, and VC investment, the team built a comparative analysis. Unsurprisingly, it revealed the US and China as the clear AI frontrunners, each holding a lead that extends into geopolitics.
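
BCG has not published the exact formula behind its ranking, but a common way to build such a comparative analysis is a min-max composite index: normalise each enabler metric across countries, then average the results. The following Python sketch is a generic illustration of that approach rather than BCG’s method; a few figures echo numbers quoted in this article, while the rest are placeholders.

    # Generic min-max composite index for cross-country AI benchmarking.
    # Some figures echo those quoted in the article; the rest are placeholders.
    indicators = {
        "US":    {"ai_specialists": 500_000, "vc_funding_bn": 303, "compute_gw": 45},
        "China": {"ai_specialists": 300_000, "vc_funding_bn": 100, "compute_gw": 20},
        "EU":    {"ai_specialists": 275_000, "vc_funding_bn": 60,  "compute_gw": 8},
    }

    def composite_scores(data: dict) -> dict:
        # Normalise each metric to [0, 1] across countries, then weight equally.
        metrics = list(next(iter(data.values())))
        scores = {country: 0.0 for country in data}
        for m in metrics:
            values = [row[m] for row in data.values()]
            lo, hi = min(values), max(values)
            for country, row in data.items():
                scores[country] += (row[m] - lo) / (hi - lo) / len(metrics)
        return scores

    for country, score in sorted(composite_scores(indicators).items(), key=lambda kv: -kv[1]):
        print(f"{country}: {score:.2f}")  # US: 1.00, China: 0.20, EU: 0.00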

Nikolaus Lang, Global Leader at the BCG Henderson Institute

The US boasts the largest pool of AI specialists (around half a million), immense capital power ($303bn in VC funding, $212bn in tech R&D), and leading compute power (45 GW).

Lang highlighted America’s historical dominance, noting, “the US has been the largest producer of notable AI models with 67%” since 1950, a lead reflected in today’s LLM landscape. This strength is reinforced by “outsized capital power” and strategic restrictions on advanced AI chip access through frameworks like the US AI Diffusion Framework.   

China, the second AI superpower, shows particular strength in data—ranking highly in e-governance and mobile broadband subscriptions, alongside significant data centre capacity (20 GW) and capital power. 

Despite restricted access to the latest chips, Chinese LLMs are rapidly closing the gap with US models. Lang mentioned the emergence of models like DeepSeek as evidence of this trend, achieved with smaller teams, fewer GPU hours, and previous-generation chips.

China’s progress is also fuelled by heavy investment in AI academic institutions (hosting 45 of the world’s top 100), a leading position in AI patent applications, and significant government-backed VC funding. Lang predicts “governments will play an important role in funding AI work going forward.”

The middle powers: Europe, Middle East, and Asia

Beyond the superpowers, several “middle powers” are carving out niches.

  • EU: While trailing the US and China, the EU holds the third spot with significant data centre capacity (8 GW) and, when member states’ capabilities are combined, the world’s second-largest AI talent pool (275,000 specialists). Europe also leads in top AI publications. Lang stressed the need for bundled capacities, suggesting AI, defence, and renewables are key areas for future EU momentum.
  • Middle East (UAE & Saudi Arabia): These nations leverage strong capital power via sovereign wealth funds and competitively low electricity prices to attract talent and build compute power, aiming to become AI drivers “from scratch”. They show positive dynamics in attracting AI specialists and are climbing the ranks in AI publications.   
  • Asia (Japan & South Korea): Leveraging strong existing tech ecosystems in hardware and gaming, these countries invest heavily in R&D (around $207bn combined by top tech firms). Government support, particularly in Japan, fosters both supply and demand. Local LLMs and strategic investments by companies like Samsung and SoftBank demonstrate significant activity.   
  • Singapore: Singapore is boosting its AI ecosystem by focusing on talent upskilling programmes, supporting Southeast Asia’s first LLM, ensuring data centre capacity, and fostering adoption through initiatives like establishing AI centres of excellence.   

The geopolitics of generative AI: Strategy and sovereignty

The geopolitics of generative AI is being shaped by four clear dynamics: the US retains its lead, driven by an unrivalled tech ecosystem; China is rapidly closing the gap; middle powers face a strategic choice between building supply or accelerating adoption; and government funding is set to play a pivotal role, particularly as R&D costs climb and commoditisation sets in.

As geopolitical tensions mount, businesses are likely to diversify their GenAI supply chains to spread risk. The race ahead will be defined by how nations and companies navigate the intersection of innovation, policy, and resilience.

(Photo by Markus Krisetya)

See also: OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival

OpenAI and Google call for US government action to secure AI lead (14 March 2025)

OpenAI and Google are each urging the US government to take decisive action to secure the nation’s AI leadership.

“As America’s world-leading AI sector approaches AGI, with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI,” wrote OpenAI, in a letter to the Office of Science and Technology Policy.

In a separate letter, Google echoed this sentiment by stating, “While America currently leads the world in AI – and is home to the most capable and widely adopted AI models and tools – our lead is not assured.”    

A plan for the AI Action Plan

OpenAI highlighted AI’s potential to “scale human ingenuity,” driving productivity, prosperity, and freedom.  The company likened the current advancements in AI to historical leaps in innovation, such as the domestication of the horse, the invention of the printing press, and the advent of the computer.

We are at “the doorstep of the next leap in prosperity,” according to OpenAI CEO Sam Altman. The company stresses the importance of “freedom of intelligence,” advocating for open access to AGI while safeguarding against autocratic control and bureaucratic barriers.

OpenAI also outlined three scaling principles, with a toy numerical sketch after the list:

  1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.
  2. The cost to use a given level of AI capability falls by about 10x every 12 months.
  3. The amount of calendar time it takes to improve an AI model keeps decreasing.
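
Taken together, the first two principles amount to simple arithmetic: capability grows with the logarithm of resources, while the price of a fixed capability level decays roughly tenfold per year. The Python sketch below is a toy illustration of that arithmetic under invented constants (including the $60-per-million-tokens starting price), not figures published by OpenAI.

    import math

    def capability(resources: float) -> float:
        # Principle 1: capability scales roughly with the log of resources (arbitrary units).
        return math.log10(resources)

    def cost_of_capability(initial_cost: float, months: int) -> float:
        # Principle 2: the cost of a fixed capability level falls ~10x every 12 months.
        return initial_cost / (10 ** (months / 12))

    # A 10x increase in resources buys one fixed increment of capability:
    print(capability(1e26) - capability(1e25))  # ~1.0

    # A capability level priced at a hypothetical $60 per million tokens today:
    for months in (0, 12, 24, 36):
        print(f"month {months:2d}: ${cost_of_capability(60.0, months):.4f} per million tokens")
    # month  0: $60.0000 ... month 36: $0.0600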

Google also has a three-point plan for the US to focus on:

  1. Invest in AI: Google called for coordinated action to address the surging energy needs of AI infrastructure, balanced export controls, continued funding for R&D, and pro-innovation federal policy frameworks.
  2. Accelerate and modernise government AI adoption: Google urged the federal government to lead by example through AI adoption and deployment, including implementing multi-vendor, interoperable AI solutions and streamlining procurement processes.
  3. Promote pro-innovation approaches internationally: Google advocated for an active international economic policy to support AI innovation, championing market-driven technical standards, working with aligned countries to address national security risks, and combating restrictive foreign AI barriers.

AI policy recommendations for the US government

Both companies provided detailed policy recommendations to the US government.

OpenAI’s proposals include:

  • A regulatory strategy that ensures the freedom to innovate through voluntary partnership between the federal government and the private sector.    
  • An export control strategy that promotes the global adoption of American AI systems while protecting America’s AI lead.    
  • A copyright strategy that protects the rights of content creators while preserving American AI models’ ability to learn from copyrighted material.    
  • An infrastructure opportunity strategy to drive growth, including policies to support a thriving AI-ready workforce and ecosystems of labs, start-ups, and larger companies.    
  • An ambitious government adoption strategy to ensure the US government itself sets an example of using AI to benefit its citizens.    

Google’s recommendations include:

  • Advancing energy policies to power domestic data centres, including transmission and permitting reform.    
  • Adopting balanced export control policies that support market access while targeting pertinent risks.    
  • Accelerating AI R&D, streamlining access to computational resources, and incentivising public-private partnerships.    
  • Crafting a pro-innovation federal framework for AI, including federal legislation that prevents a patchwork of state laws, ensuring industry has access to data that enables fair learning, emphasising sector-specific and risk-based AI governance, and supporting workforce initiatives to develop AI skills.    

Both OpenAI and Google emphasise the need for swift and decisive action. OpenAI warned that America’s lead in AI is narrowing, while Google stressed that policy decisions will determine the outcome of the global AI competition.

“We are in a global AI competition, and policy decisions will determine the outcome,” Google explained. “A pro-innovation approach that protects national security and ensures that everyone benefits from AI is essential to realising AI’s transformative potential and ensuring that America’s lead endures.”

(Photo by Nils Huenerfuerst)

See also: Gemma 3: Google launches its latest open AI models

Eric Schmidt: AI misuse poses an ‘extreme risk’ (13 February 2025)

Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.

“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warned.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. He noted that while tech leaders are generally aware of AI’s societal implications, they may make decisions based on values that differ from those of public officials.

“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.  

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.

The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  

However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.  

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.

“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.

From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.

While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

Ursula von der Leyen: AI race ‘is far from over’ (11 February 2025)

Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris.

While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths to carve a leading role for itself.

“This is the third summit on AI safety in just over one year,” von der Leyen remarked. “In the same period, three new generations of ever more powerful AI models have been released. Some expect models that will approach human reasoning within a year’s time.”

The European Commission President set the tone of the event by contrasting the groundwork laid in previous summits with the urgency of this one.

“Past summits focused on laying the groundwork for AI safety. Together, we built a shared consensus that AI will be safe, that it will promote our values and benefit humanity. But this Summit is focused on action. And that is exactly what we need right now.”

As the world witnesses AI’s disruptive power, von der Leyen urged Europe to “formulate a vision of where we want AI to take us, as society and as humanity.” Growing adoption, “in the key sectors of our economy, and for the key challenges of our times,” provides a golden opportunity for the continent to lead, she argued.

The case for a European approach to the AI race 

Von der Leyen rejected notions that Europe has fallen behind its global competitors.

“Too often, I hear that Europe is late to the race – while the US and China have already got ahead. I disagree,” she stated. “The frontier is constantly moving. And global leadership is still up for grabs.”

Instead of replicating what other regions are doing, she called for doubling down on Europe’s unique strengths to define the continent’s distinct approach to AI.

“Too often, I have heard that we should replicate what others are doing and run after their strengths,” she said. “I think that instead, we should invest in what we can do best and build on our strengths here in Europe, which are our science and technology mastery that we have given to the world.”

Von der Leyen defined three pillars of the so-called “European brand of AI” that sets it apart: 1) focusing on high-complexity, industry-specific applications, 2) taking a cooperative, collaborative approach to innovation, and 3) embracing open-source principles.

“This summit shows there is a distinct European brand of AI,” she asserted. “It is already driving innovation and adoption. And it is picking up speed.”

Accelerating innovation: AI factories and gigafactories  

To maintain its competitive edge, Europe must supercharge its AI innovation, von der Leyen stressed.

A key component of this strategy lies in its computational infrastructure. Europe already boasts some of the world’s fastest supercomputers, which are now being leveraged through the creation of “AI factories.”

“In just a few months, we have set up a record of 12 AI factories,” von der Leyen revealed. “And we are investing €10 billion in them. This is not a promise—it is happening right now, and it is the largest public investment for AI in the world, which will unlock over ten times more private investment.”

Beyond these initial steps, von der Leyen unveiled an even more ambitious initiative. AI gigafactories, built on the scale of CERN’s Large Hadron Collider, will provide the infrastructure needed for training AI systems at unprecedented scales. They aim to foster collaboration between researchers, entrepreneurs, and industry leaders.

“We provide the infrastructure for large computational power,” von der Leyen explained. “Talents of the world are welcome. Industries will be able to collaborate and federate their data.”

The cooperative ethos underpinning AI gigafactories is part of a broader European push to balance competition with collaboration.

“AI needs competition but also collaboration,” she emphasised, highlighting that the initiative will serve as a “safe space” for these cooperative efforts.

Building trust with the AI Act

Crucially, von der Leyen reiterated Europe’s commitment to making AI safe and trustworthy. She pointed to the EU AI Act as the cornerstone of this strategy, framing it as a harmonised framework to replace fragmented national regulations across member states.

“The AI Act [will] provide one single set of safety rules across the European Union – 450 million people – instead of 27 different national regulations,” she said, before acknowledging businesses’ concerns about regulatory complexities.

“At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”

€200 billion to remain in the AI race

Financing such ambitious plans naturally requires significant resources. Von der Leyen praised the recently launched EU AI Champions Initiative, which has already pledged €150 billion from providers, investors, and industry.

During her speech at the summit, von der Leyen announced the Commission’s complementary InvestAI initiative, which will bring in an additional €50 billion – mobilising a total of €200 billion in public-private AI investments.

“We will have a focus on industrial and mission-critical applications,” she said. “It will be the largest public-private partnership in the world for the development of trustworthy AI.”

Ethical AI is a global responsibility

Von der Leyen closed her address by framing Europe’s AI ambitions within a broader, humanitarian perspective, arguing that ethical AI is a global responsibility.

“Cooperative AI can be attractive well beyond Europe, including for our partners in the Global South,” she proclaimed, extending a message of inclusivity.

Von der Leyen expressed full support for the AI Foundation launched at the summit, highlighting its mission to ensure widespread access to AI’s benefits.

“AI can be a gift to humanity. But we must make sure that benefits are widespread and accessible to all,” she remarked.

“We want AI to be a force for good. We want an AI where everyone collaborates and everyone benefits. That is our path – our European way.”

See also: AI Action Summit: Leaders call for unity and equitable development

DeepSeek ban? China data transfer boosts security concerns (7 February 2025)

US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices, and Microsoft and OpenAI have launched a probe into a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a starker, better-documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That begs the question of whether it’s truly “free” or if the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even potential economic manipulations.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

EU introduces draft regulatory guidance for AI models (15 November 2024)

The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.

The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:

  • Working Group 1: Transparency and copyright-related rules
  • Working Group 2: Risk identification and assessment for systemic risk
  • Working Group 3: Technical risk mitigation for systemic risk
  • Working Group 4: Governance risk mitigation for systemic risk

The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes into account international approaches, striving for proportionality to risks, and aims to be future-proof by contemplating rapid technological changes.

Key objectives outlined in the draft include:

  • Clarifying compliance methods for providers of general-purpose AI models
  • Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
  • Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
  • Continuously assessing and mitigating systemic risks associated with AI models

Recognising and mitigating systemic risks

A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.

As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
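
The draft expresses this hierarchy in prose, but the structure maps naturally onto nested records. The Python sketch below is purely illustrative: the shape follows the measure/sub-measure/KPI hierarchy the document names, while the field names and the example entry are invented rather than taken from the Code of Practice.

    from dataclasses import dataclass, field

    @dataclass
    class KPI:
        name: str
        target: str  # a threshold the provider commits to meeting

    @dataclass
    class SubMeasure:
        description: str
        kpis: list[KPI] = field(default_factory=list)

    @dataclass
    class Measure:
        objective: str
        sub_measures: list[SubMeasure] = field(default_factory=list)

    # Hypothetical example entry, invented for illustration:
    incident_reporting = Measure(
        objective="Identify and report serious incidents",
        sub_measures=[SubMeasure(
            description="Monitor deployed models throughout their lifecycle",
            kpis=[KPI(name="time-to-report", target="within a fixed number of days of detection")],
        )],
    )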

The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.

Taking a proactive stance to AI regulatory guidance

The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.

As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.

While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.

This draft is open for written feedback until 28 November 2024. 

See also: Anthropic urges AI regulation to avoid catastrophes

Anthropic urges AI regulation to avoid catastrophes (1 November 2024)

Anthropic has flagged the potential risks of AI systems and is calling for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.

As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.

Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offence-related tasks and expects future models to be even more effective.

Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.

In addressing these risks, Anthropic points to its Responsible Scaling Policy (RSP), released in September 2023, as a robust countermeasure. The RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.

The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says its commitment to maintaining and enhancing safety spans various team expansions, particularly in security, interpretability, and trust, ensuring readiness for the rigorous safety standards set by its RSP.
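
In outline, such a policy is a control loop: each periodic evaluation maps a model’s measured capability to a required set of safeguards, escalating as capability grows. The minimal Python sketch below illustrates the idea; the thresholds and safeguard descriptions are placeholders, not Anthropic’s published AI Safety Level criteria.

    def required_safeguards(capability_score: float) -> str:
        # Placeholder thresholds, not Anthropic's published criteria.
        if capability_score < 0.3:
            return "baseline security and deployment controls"
        if capability_score < 0.7:
            return "hardened security plus pre-release red-teaming"
        return "pause deployment pending stronger safeguards"

    # Regular re-assessment: as measured capability grows, safeguards escalate.
    for score in (0.1, 0.5, 0.9):
        print(f"score {score}: {required_safeguards(score)}")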

Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.

Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.

Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.

In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.

Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models. 

While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.

Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against internal and external threats.

By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.

(Image Credit: Anthropic)

See also: President Biden issues first National Security Memorandum on AI

Chinese firms use cloud loophole to access US AI tech (28 August 2024)

Chinese organisations are utilising cloud services from Amazon and its competitors to gain access to advanced US AI chips and capabilities that they cannot otherwise obtain, according to a Reuters report based on public tender documents.

In a comprehensive investigation, Reuters revealed how Chinese cloud access to US AI chips is facilitated through intermediaries. Over 50 tender documents posted in the past year revealed that at least 11 Chinese entities have sought access to restricted US technologies or cloud services. Four of these explicitly named Amazon Web Services (AWS) as a cloud service provider, though accessed through Chinese intermediaries rather than directly from AWS.

“AWS complies with all applicable US laws, including trade laws, regarding the provision of AWS services inside and outside of China,” an AWS spokesperson told Reuters.

The report highlights that while the US government has restricted the export of high-end AI chips to China, providing access to such chips or advanced AI models through the cloud is not a violation of US regulations. This loophole has raised concerns among US officials and lawmakers.

One example cited in the report involves Shenzhen University, which spent 200,000 yuan (£21,925) on an AWS account to access cloud servers powered by Nvidia A100 and H100 chips for an unspecified project. The university obtained this service via an intermediary, Yunda Technology Ltd Co. Neither Shenzhen University nor Yunda Technology responded to Reuters’ requests for comment.

The investigation also revealed that Zhejiang Lab, a research institute developing its own large language model called GeoGPT, stated in a tender document that it intended to spend 184,000 yuan to purchase AWS cloud computing services. The institute claimed that its AI model could not get enough computing power from homegrown Alibaba cloud services.

Michael McCaul, chair of the US House of Representatives Foreign Affairs Committee, told Reuters: “This loophole has been a concern of mine for years, and we are long overdue to address it.”

In response to these concerns, the US Commerce Department is tightening rules. A government spokeswoman told Reuters that they are “seeking additional resources to strengthen our existing controls that restrict PRC companies from accessing advanced AI chips through remote access to cloud computing capability.”

The Commerce Department has also proposed a rule that would require US cloud computing firms to verify large AI model users and notify authorities when they use US cloud computing services to train large AI models capable of “malicious cyber-enabled activity.”

The report also found that Chinese companies are seeking access to Microsoft’s cloud services. For example, Sichuan University stated in a tender filing that it was developing a generative AI platform and would purchase 40 million Microsoft Azure OpenAI tokens to help with project delivery.

Reuters’ report also indicated that Amazon has provided Chinese businesses with access to modern AI chips as well as advanced AI models such as Anthropic’s Claude, which they would not otherwise have had. This was demonstrated by public postings, tenders, and marketing materials evaluated by the news organisation.

Chu Ruisong, President of AWS Greater China, stated during a generative AI-themed conference in Shanghai in May that “Bedrock provides a selection of leading LLMs, including prominent closed-source models such as Anthropic’s Claude 3.”

Overall, the report emphasises the difficulty of regulating access to advanced computing resources in an increasingly interconnected global technology ecosystem. It highlights the intricate relationship between US export laws, cloud service providers, and Chinese enterprises looking to improve their AI capabilities.

As the US government works to close this gap, the scenario raises concerns about the efficacy of present export controls and the potential need for more comprehensive laws that cover cloud-based access to banned technologies.

The findings are likely to feed ongoing discussions about technology transfer, national security, and the global AI race. As policymakers and industry leaders analyse them, they may spark fresh debate about how to balance technological cooperation with national security concerns in an era of rapid AI growth.

See also: GlobalData: China is ahead of global rivals for AI ‘unicorns’

OpenAI warns California’s AI bill threatens US innovation (22 August 2024)

OpenAI has added its voice to the growing chorus of tech leaders and politicians opposing a controversial AI safety bill in California. The company argues that the legislation, SB 1047, would stifle innovation and that regulation should be handled at a federal level.

In a letter sent to California State Senator Scott Wiener’s office, OpenAI expressed concerns that the bill could have “broad and significant” implications for US competitiveness and national security. The company argued that SB 1047 would threaten California’s position as a global leader in AI, prompting talent to seek “greater opportunity elsewhere.” 

Introduced by Senator Wiener, the bill aims to enact “common sense safety standards” for companies developing large AI models exceeding specific size and cost thresholds. These standards would require companies to implement shut-down mechanisms, take “reasonable care” to prevent catastrophic outcomes, and submit compliance statements to the California attorney general. Failure to comply could result in lawsuits and civil penalties.

Lieutenant General John (Jack) Shanahan, who served in the US Air Force and was the inaugural director of the US Department of Defense’s Joint Artificial Intelligence Center (JAIC), believes the bill “thoughtfully navigates the serious risks that AI poses to both civil society and national security” and provides “pragmatic solutions”.

Hon. Andrew C. Weber – former Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs – echoed the national security concerns.

“The theft of a powerful AI system from a leading lab by our adversaries would impose considerable risks on us all,” said Weber. “Developers of the most advanced AI systems need to take significant cybersecurity precautions given the potential risks involved in their work. I’m glad to see that SB 1047 helps establish the necessary protective measures.”

SB 1047 has sparked fierce opposition from major tech companies, startups, and venture capitalists who argue that it overreaches for a nascent technology, potentially stifling innovation and driving businesses from the state. These concerns are echoed by OpenAI, with sources revealing that the company has paused plans to expand its San Francisco offices due to the uncertain regulatory landscape.

Senator Wiener defended the bill, stating that OpenAI’s letter fails to “criticise a single provision.” He dismissed concerns about talent exodus as “nonsensical,” stating that the law would apply to any company conducting business in California, regardless of their physical location. Wiener highlighted the bill’s “highly reasonable” requirement for large AI labs to test their models for catastrophic safety risks, a practice many have already committed to.

Critics, however, counter that mandating the submission of model details to the government will hinder innovation. They also fear that the threat of lawsuits will deter smaller, open-source developers from establishing startups.  In response to the backlash, Senator Wiener recently amended the bill to eliminate criminal liability for non-compliant companies, safeguard smaller developers, and remove the proposed “Frontier Model Division.”

OpenAI maintains that a clear federal framework, rather than state-level regulation, is essential for preserving public safety while maintaining  US competitiveness against rivals like China. The company highlighted the suitability of federal agencies, such as the White House Office of Science and Technology Policy and the Department of Commerce, to govern AI risks.

Senator Wiener acknowledged the ideal of congressional action but expressed scepticism about its likelihood. He drew parallels with California’s data privacy law, passed in the absence of federal action, suggesting that inaction from Congress shouldn’t preclude California from taking a leading role.

The California state assembly is set to vote on SB 1047 this month. If passed, the bill will land on the desk of Governor Gavin Newsom, whose stance on the legislation remains unclear. However, Newsom has publicly recognised the need to balance AI innovation with risk mitigation.

(Photo by Solen Feyissa)

See also: OpenAI delivers GPT-4o fine-tuning

Balancing innovation and trust: Experts assess the EU’s AI Act (31 July 2024)

As the EU’s AI Act prepares to come into force tomorrow, industry experts are weighing in on its potential impact, highlighting its role in building trust and encouraging responsible AI adoption.

Curtis Wilson, Staff Data Engineer at Synopsys’ Software Integrity Group, believes the new regulation could be a crucial step in addressing the AI industry’s most pressing challenge: building trust.

“The greatest problem facing AI developers is not regulation, but a lack of trust in AI,” Wilson stated. “For an AI system to reach its full potential, it needs to be trusted by the people who use it.”

This sentiment is echoed by Paul Cardno, Global Digital Automation & Innovation Senior Manager at 3M, who noted, “With nearly 80% of UK adults now believing AI needs to be heavily regulated, the introduction of the EU’s AI Act is something that businesses have been long-waiting for.”

Both experts emphasise the Act’s potential to foster confidence in AI technologies. Wilson explained that while his company has implemented internal measures to build trust, external regulation is equally important.

“I see regulatory frameworks like the EU AI Act as an essential component to building trust in AI,” Wilson said. “The strict rules and punishing fines will deter careless developers and help customers feel more confident in trusting and using AI systems.”

Cardno added, “We know that AI is shaping the future, but companies will only be able to reap the rewards if they have the confidence to rethink existing processes and break away from entrenched structures.”

The EU AI Act primarily focuses on high-risk systems and foundation models. Wilson noted that many of its requirements align with existing best practices in data science, such as risk management, testing procedures, and comprehensive documentation.
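
To make that overlap concrete, the short Python sketch below shows one way a team might structure such model documentation in practice. It is purely illustrative: the record format and every field name are hypothetical, and are not drawn from the Act’s actual requirements or annexes.

from dataclasses import dataclass, field

@dataclass
class ModelRiskRecord:
    """Illustrative documentation record for a deployed model.

    The fields loosely mirror common data science best practices
    (risk management, testing, documentation); they are hypothetical,
    not taken from the EU AI Act's text.
    """
    model_name: str
    intended_use: str
    risk_level: str                      # e.g. a tier such as "high-risk"
    known_limitations: list[str] = field(default_factory=list)
    test_results: dict[str, float] = field(default_factory=dict)

    def is_documented(self) -> bool:
        # A record is only useful if limitations and test evidence exist.
        return bool(self.known_limitations) and bool(self.test_results)

record = ModelRiskRecord(
    model_name="credit-scoring-v3",
    intended_use="Consumer credit eligibility screening",
    risk_level="high-risk",
    known_limitations=["Not validated for applicants under 21"],
    test_results={"accuracy": 0.91, "demographic_parity_gap": 0.04},
)
print(record.is_documented())  # True

The value of a structure like this is that an empty field surfaces a documentation gap before deployment rather than after, which is precisely the habit the Act’s obligations reward.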

For UK businesses, the impact of the EU AI Act extends beyond those directly selling to EU markets. 

Wilson pointed out that certain aspects of the Act may apply to Northern Ireland due to the Windsor Framework. Additionally, the UK government is developing its own AI regulations, with a recent whitepaper emphasising interoperability with EU and US regulations.

“While the EU Act isn’t perfect, and needs to be assessed in relation to other global regulations, having a clear framework and guidance on AI from one of the world’s major economies will help encourage those who remain on the fence to tap into the AI revolution,” Cardno explained.

While acknowledging that the new regulations may create some friction, particularly around registration and certification, Wilson emphasised that many of the Act’s obligations are already standard practice for responsible companies. However, he recognised that small companies and startups might face greater challenges.

“Small companies and start-ups will experience issues more strongly,” Wilson said. “The regulation acknowledges this and has included provisions for sandboxes to foster AI innovation for these smaller businesses.”

However, Wilson noted that these sandboxes will be established at the national level by individual EU member states, potentially limiting access for UK businesses.

As the AI landscape continues to evolve, the EU AI Act represents a significant step towards establishing a framework for responsible AI development and deployment.

Echoing his earlier point, Cardno concluded that clear guidance from one of the world’s major economies will ensure AI has “a safe, positive ongoing influence for all organisations operating across the EU, which can only be a promising step forwards for the industry.”

(Photo by Guillaume Périgois)

See also: UAE blocks US congressional meetings with G42 amid AI transfer concerns


The post Balancing innovation and trust: Experts assess the EU’s AI Act appeared first on AI News.

Senators probe OpenAI on safety and employment practices https://www.artificialintelligence-news.com/news/senators-probe-openai-safety-employment-practices/ Tue, 23 Jul 2024 14:12:24 +0000

Five prominent Senate Democrats have sent a letter to OpenAI CEO Sam Altman, seeking clarity on the company’s safety and employment practices.

The letter – signed by Senators Brian Schatz, Ben Ray Luján, Peter Welch, Mark R. Warner, and Angus S. King, Jr. – comes in response to recent reports questioning OpenAI’s commitment to its stated goals of safe and responsible AI development.

The senators emphasise the importance of AI safety for national economic competitiveness and geopolitical standing. They note OpenAI’s partnerships with the US government and national security agencies to develop cybersecurity tools, underscoring the critical nature of secure AI systems.

“National and economic security are among the most important responsibilities of the United States Government, and unsecure or otherwise vulnerable AI systems are not acceptable,” the letter states.

The lawmakers have requested detailed information on several key areas by 13 August 2024. These include:

  • OpenAI’s commitment to dedicating 20% of its computing resources to AI safety research.
  • The company’s stance on non-disparagement agreements for current and former employees.
  • Procedures for employees to raise cybersecurity and safety concerns.
  • Security protocols to prevent theft of AI models, research, or intellectual property.
  • OpenAI’s adherence to its own Supplier Code of Conduct regarding non-retaliation policies and whistleblower channels.
  • Plans for independent expert testing and assessment of OpenAI’s systems pre-release.
  • Commitment to making future foundation models available to US Government agencies for pre-deployment testing.
  • Post-release monitoring practices and learnings from deployed models.
  • Plans for public release of retrospective impact assessments on deployed models.
  • Documentation on meeting voluntary safety and security commitments to the Biden-Harris administration.

The senators’ inquiry touches on recent controversies surrounding OpenAI, including reports of internal disputes over safety practices and alleged cybersecurity breaches. They specifically ask whether OpenAI will “commit to removing any other provisions from employment agreements that could be used to penalise employees who publicly raise concerns about company practices.”

This congressional scrutiny comes at a time of increasing debate over AI regulation and safety measures. The letter references the voluntary commitments made by leading AI companies to the White House last year, framing them as “an important step towards building this trust” in AI safety and security.

Vice President Kamala Harris may be the next US president following the election later this year. At the AI Safety Summit in the UK last year, Harris said: “Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential… when people around the world cannot discern fact from fiction because of a flood of AI-enabled myths and disinformation.”

Chelsea Alves, a consultant with UNMiss, commented: “Kamala Harris’ approach to AI and big tech regulation is both timely and critical as she steps into the presidential race. Her policies could set new standards for how we navigate the complexities of modern technology and individual privacy.”

The response from OpenAI to these inquiries could have significant implications for the future of AI governance and the relationship between tech companies and government oversight bodies.

(Photo by Darren Halstead)

See also: OpenResearch reveals potential impacts of universal basic income


The post Senators probe OpenAI on safety and employment practices appeared first on AI News.

OpenResearch reveals potential impacts of universal basic income https://www.artificialintelligence-news.com/news/openresearch-reveals-potential-impacts-universal-basic-income/ Tue, 23 Jul 2024 11:45:52 +0000

A study conducted by OpenResearch has shed light on the transformative potential of universal basic income (UBI). The research aimed to “learn from participants’ experiences and better understand both the potential and the limitations of unconditional cash transfers.”

The study – which provided participants with an extra $1,000 per month – revealed significant impacts across various aspects of recipients’ lives, including health, spending habits, employment, personal agency, and housing mobility.

In healthcare, the analysis showed increased utilisation of medical services, particularly in dental and specialist care.

One participant noted, “I got myself braces…I feel like people underestimate the importance of having nice teeth because it affects more than just your own sense of self, it affects how people look at you.”

While no immediate measurable effects on physical health were observed, researchers suggest that increased medical care utilisation could lead to long-term health benefits.

The study also uncovered interesting spending patterns among UBI recipients.

On average, participants increased their overall monthly spending by $310, with significant allocations towards basic needs such as food, transportation, and rent. Notably, there was a 26% increase in financial support provided to others, highlighting the ripple effect of UBI on communities.

In terms of employment, the study revealed nuanced outcomes.

While there was a slight decrease in overall employment rates and work hours among recipients, the study found that UBI provided individuals with greater flexibility in making employment decisions aligned with their circumstances and goals.

One participant explained, “Because of that money and being able to build up my savings, I’m in a position for once to be picky…I don’t have to take a crappy job just because I need income right now.”

The research also uncovered significant improvements in personal agency and future planning. 

UBI recipients were 14% more likely to pursue education or job training and 5% more likely to have a budget compared to the control group. Black recipients in the third year of the programme were 26% more likely to report starting or helping to start a business.

Lastly, the study’s analysis revealed increased housing mobility among UBI recipients. Participants were 11% more likely to move neighbourhoods and 23% more likely to actively search for new housing compared to the control group.
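
A note on reading these figures: “23% more likely” is a relative comparison between the UBI group and the control group, not an absolute percentage-point change. The short Python sketch below uses invented, purely illustrative rates (the article does not report the underlying group-level figures) to show how such a relative difference is computed.

# Illustrative only: these rates are invented, not taken from the study.
control_rate = 0.20     # share of control group actively searching for housing
treatment_rate = 0.246  # share of UBI recipients doing the same

relative_increase = (treatment_rate - control_rate) / control_rate
absolute_increase = treatment_rate - control_rate

print(f"Relative: {relative_increase:.0%} more likely")    # Relative: 23% more likely
print(f"Absolute: {absolute_increase:.1%} points higher")  # Absolute: 4.6% points higher

As the arithmetic shows, a headline relative increase can correspond to a much smaller absolute shift, which is worth bearing in mind when comparing results across the study’s outcome measures.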

The study provides valuable insights into the potential impacts of UBI, offering policymakers and researchers a data-driven foundation for future decisions on social welfare programmes. A broader societal conversation about UBI may become necessary if worst-case scenarios around AI-induced job displacement come to fruition.

(Photo by Freddie Collins on Unsplash)

See also: AI could unleash £119 billion in UK productivity


The post OpenResearch reveals potential impacts of universal basic income appeared first on AI News.
