usa Archives - AI News
https://www.artificialintelligence-news.com/news/tag/usa/

BCG: Analysing the geopolitics of generative AI
https://www.artificialintelligence-news.com/news/bcg-analysing-the-geopolitics-of-generative-ai/
Fri, 11 Apr 2025

Generative AI is reshaping global competition and geopolitics, presenting challenges and opportunities for nations and businesses alike.

Senior figures from Boston Consulting Group (BCG) and its tech division, BCG X, discussed the intricate dynamics of the global AI race, the dominance of superpowers like the US and China, the role of emerging “middle powers,” and the implications for multinational corporations.

AI investments expose businesses to increasingly tense geopolitics

Sylvain Duranton, Global Leader at BCG X, noted the significant geopolitical risk companies face: “For large companies, close to half of them, 44%, have teams around the world, not just in one country where their headquarters are.”

Many of these businesses operate across numerous countries, making them vulnerable to differing regulations and sovereignty issues. “They’ve built their AI teams and ecosystem far before there was such tension around the world.”

Duranton also pointed to the stark imbalance in the AI supply race, particularly in investment.

Comparing the market capitalisation of tech companies, the US dwarfs Europe by a factor of 20 and the Asia Pacific region by five. Investment figures paint a similar picture, showing a “completely disproportionate” imbalance compared to the relative sizes of the economies.

This AI race is fuelled by massive investment in compute power and frontier models, while the emergence of lighter, open-weight models is changing the competitive dynamic.

Benchmarking national AI capabilities

Nikolaus Lang, Global Leader at the BCG Henderson Institute – BCG’s think tank – detailed the extensive research undertaken to benchmark national GenAI capabilities objectively.

The team analysed the “upstream of GenAI,” focusing on large language model (LLM) development and its six key enablers: capital, computing power, intellectual property, talent, data, and energy.

Using hard data like AI researcher numbers, patents, data centre capacity, and VC investment, they created a comparative analysis. Unsurprisingly, the analysis revealed the US and China as the clear AI frontrunners, both maintaining their leads in the geopolitics of AI.

The US boasts the largest pool of AI specialists (around half a million), immense capital power ($303bn in VC funding, $212bn in tech R&D), and leading compute power (45 GW).

Lang highlighted America’s historical dominance, noting that since 1950 “the US has been the largest producer of notable AI models with 67%”, a lead reflected in today’s LLM landscape. This strength is reinforced by “outsized capital power” and strategic restrictions on advanced AI chip access through frameworks like the US AI Diffusion Framework.

China, the second AI superpower, shows particular strength in data—ranking highly in e-governance and mobile broadband subscriptions, alongside significant data centre capacity (20 GW) and capital power. 

Despite restricted access to the latest chips, Chinese LLMs are rapidly closing the gap with US models. Lang cited the emergence of models like DeepSeek as evidence of this trend, achieved with smaller teams, fewer GPU hours, and previous-generation chips.

China’s progress is also fuelled by heavy investment in AI academic institutions (hosting 45 of the world’s top 100), a leading position in AI patent applications, and significant government-backed VC funding. Lang predicts “governments will play an important role in funding AI work going forward.”

The middle powers: Europe, Middle East, and Asia

Beyond the superpowers, several “middle powers” are carving out niches.

  • EU: While trailing the US and China, the EU holds the third spot with significant data centre capacity (8 GW) and, when member states’ capabilities are combined, the world’s second-largest AI talent pool (275,000 specialists). Europe also leads in top AI publications. Lang stressed the need for bundled capacities, suggesting AI, defence, and renewables are key areas for future EU momentum.
  • Middle East (UAE & Saudi Arabia): These nations leverage strong capital power via sovereign wealth funds and competitively low electricity prices to attract talent and build compute power, aiming to become AI drivers “from scratch”. They show positive dynamics in attracting AI specialists and are climbing the ranks in AI publications.   
  • Asia (Japan & South Korea): Leveraging strong existing tech ecosystems in hardware and gaming, these countries invest heavily in R&D (around $207bn combined by top tech firms). Government support, particularly in Japan, fosters both supply and demand. Local LLMs and strategic investments by companies like Samsung and SoftBank demonstrate significant activity.   
  • Singapore: Singapore is boosting its AI ecosystem by focusing on talent upskilling programmes, supporting Southeast Asia’s first LLM, ensuring data centre capacity, and fostering adoption through initiatives like establishing AI centres of excellence.   

The geopolitics of generative AI: Strategy and sovereignty

The geopolitics of generative AI is being shaped by four clear dynamics: the US retains its lead, driven by an unrivalled tech ecosystem; China is rapidly closing the gap; middle powers face a strategic choice between building supply or accelerating adoption; and government funding is set to play a pivotal role, particularly as R&D costs climb and commoditisation sets in.

As geopolitical tensions mount, businesses are likely to diversify their GenAI supply chains to spread risk. The race ahead will be defined by how nations and companies navigate the intersection of innovation, policy, and resilience.

See also: OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival

Hugging Face calls for open-source focus in the AI Action Plan
https://www.artificialintelligence-news.com/news/hugging-face-open-source-focus-ai-action-plan/
Thu, 20 Mar 2025

Hugging Face has called on the US government to prioritise open-source development in its forthcoming AI Action Plan.

In a statement to the Office of Science and Technology Policy (OSTP), Hugging Face emphasised that “thoughtful policy can support innovation while ensuring that AI development remains competitive, and aligned with American values.”

Hugging Face, which hosts over 1.5 million public models across various sectors and serves seven million users, proposes an AI Action Plan centred on three interconnected pillars:

  1. Hugging Face stresses the importance of strengthening open-source AI ecosystems. The company argues that technical innovation stems from diverse actors across institutions and that support for infrastructure – such as the National AI Research Resource (NAIRR) – and investment in open science and data allow these contributions to have an additive effect and accelerate robust innovation.
  2. The company prioritises efficient and reliable adoption of AI. Hugging Face believes that spreading the benefits of the technology by facilitating its adoption along the value chain requires actors across sectors of activity to shape its development. It states that more efficient, modular, and robust AI models require research and infrastructural investments to enable the broadest possible participation and innovation—enabling diffusion of technology across the US economy.
  3. Hugging Face also highlights the need to promote security and standards. The company suggests that decades of practices in open-source software cybersecurity, information security, and standards can inform safer AI technology. It advocates for promoting traceability, disclosure, and interoperability standards to foster a more resilient and robust technology ecosystem.

Open-source is key for AI advancement in the US (and beyond)

Hugging Face underlines that modern AI is built on decades of open research, with commercial giants relying heavily on open-source contributions. Recent breakthroughs – such as OLMO-2 and Olympic-Coder – demonstrate that open research remains a promising path to developing systems that match the performance of commercial models, and can often surpass them, especially in terms of efficiency and performance in specific domains.

“Perhaps most striking is the rapid compression of development timelines,” notes the company, “what once required over 100B parameter models just two years ago can now be accomplished with 2B parameter models, suggesting an accelerating path to parity.”
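
A back-of-the-envelope reading of that claim (the arithmetic below is illustrative, not Hugging Face’s):

```python
# Implied model-size compression if 100B-parameter capability now fits in 2B parameters
params_then, params_now = 100e9, 2e9
years = 2
compression = params_then / params_now    # 50x smaller overall
annual_rate = compression ** (1 / years)  # ~7.1x per year
print(f"{compression:.0f}x over {years} years, ~{annual_rate:.1f}x per year")
```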

This trend towards more accessible, efficient, and collaborative AI development indicates that open approaches to AI development have a critical role to play in enabling a successful AI strategy that maintains technical leadership and supports more widespread and secure adoption of the technology.

Hugging Face argues that open models, infrastructure, and scientific practices constitute the foundation of AI innovation, allowing a diverse ecosystem of researchers, companies, and developers to build upon shared knowledge.

The company’s platform hosts AI models and datasets from both small actors (e.g., startups, universities) and large organisations (e.g., Microsoft, Google, OpenAI, Meta), demonstrating how open approaches accelerate progress and democratise access to AI capabilities.

“The United States must lead in open-source AI and open science, which can enhance American competitiveness by fostering a robust ecosystem of innovation and ensuring a healthy balance of competition and shared innovation,” states Hugging Face.

Research has shown that open technical systems act as force multipliers for economic impact, with an estimated 2000x multiplier effect. This means that $4 billion invested in open systems could potentially generate $8 trillion in value for companies using them.
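
The quoted figures are a straightforward multiplication, shown here as a quick sanity check:

```python
# Sanity-check the cited open-source multiplier: value = investment x multiplier
investment = 4e9   # $4 billion invested in open systems
multiplier = 2000  # estimated economic multiplier from the cited research
print(f"${investment * multiplier:,.0f}")  # -> $8,000,000,000,000, i.e. $8 trillion
```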

These economic benefits extend to national economies as well. Without any open-source software contributions, the average country would lose 2.2% of its GDP. Open-source drove between €65 billion and €95 billion of European GDP in 2018 alone, a finding so significant that the European Commission cited it when establishing new rules to streamline the process for open-sourcing government software.

This demonstrates how open-source impact translates directly into policy action and economic advantage at the national level, underlining the importance of open-source as a public good.

Practical factors driving commercial adoption of open-source AI

Hugging Face identifies several practical factors driving the commercial adoption of open models:

  • Cost efficiency is a major driver, as developing AI models from scratch requires significant investment, so leveraging open foundations reduces R&D expenses.
  • Customisation is crucial, as organisations can adapt and deploy models specifically tailored to their use cases rather than relying on one-size-fits-all solutions.
  • Open models reduce vendor lock-in, giving companies greater control over their technology stack and independence from single providers.
  • Open models have caught up to and, in certain cases, surpassed the capabilities of closed, proprietary systems.

These factors are particularly valuable for startups and mid-sized companies, which can access cutting-edge technology without massive infrastructure investments. Banks, pharmaceutical companies, and other industries have been adapting open models to specific market needs—demonstrating how open-source foundations support a vibrant commercial ecosystem across the value chain.

Hugging Face’s policy recommendations to support open-source AI in the US

To support the development and adoption of open AI systems, Hugging Face offers several policy recommendations:

  • Enhance research infrastructure: Fully implement and expand the National AI Research Resource (NAIRR) pilot. Hugging Face’s active participation in the NAIRR pilot has demonstrated the value of providing researchers with access to computing resources, datasets, and collaborative tools.
  • Allocate public computing resources for open-source: The public should have ways to participate via public AI infrastructure. One way to do this would be to dedicate a portion of publicly-funded computing infrastructure to support open-source AI projects, reducing barriers to innovation for smaller research teams and companies that cannot afford proprietary systems.
  • Enable access to data for developing open systems: Create sustainable data ecosystems through targeted policies that address the decreasing data commons. Publishers are increasingly signing data licensing deals with proprietary AI model developers, meaning that quality data acquisition costs are now approaching or even surpassing computational expenses of training frontier models, threatening to lock out small open developers from access to quality data.  Support organisations that contribute to public data repositories and streamline compliance pathways that reduce legal barriers to responsible data sharing.
  • Develop open datasets: Invest in the creation, curation, and maintenance of robust, representative datasets that can support the next generation of AI research and applications. Expand initiatives like the IBM AI Alliance Trusted Data Catalog and support projects like IDI’s AI-driven Digitization of the public collections in the Boston Public Library.
  • Strengthen rights-respecting data access frameworks: Establish clear guidelines for data usage, including standardised protocols for anonymisation, consent management, and usage tracking.  Support public-private partnerships to create specialised data trusts for high-value domains like healthcare and climate science, ensuring that individuals and organisations maintain appropriate control over their data while enabling innovation.    
  • Invest in stakeholder-driven innovation: Create and support programmes that enable organisations across diverse sectors (healthcare, manufacturing, education) to develop customised AI systems for their specific needs, rather than relying exclusively on general-purpose systems from major providers. This enables broader participation in the AI ecosystem and ensures that the benefits of AI extend throughout the economy.
  • Strengthen centres of excellence: Expand NIST’s role as a convener for AI experts across academia, industry, and government to share lessons and develop best practices.  In particular, the AI Risk Management Framework has played a significant role in identifying stages of AI development and research questions that are critical to ensuring more robust and secure technology deployment for all. The tools developed at Hugging Face, from model documentation to evaluation libraries, are directly shaped by these questions.
  • Support high-quality data for performance and reliability evaluation: AI development depends heavily on data, both to train models and to reliably evaluate their progress, strengths, risks, and limitations. Fostering greater access to public data in a safe and secure way and ensuring that the evaluation data used to characterise models is sound and evidence-based will accelerate progress in both performance and reliability of the technology.

Prioritising efficient and reliable AI adoption

Hugging Face highlights that smaller companies and startups face significant barriers to AI adoption due to high costs and limited resources. According to IDC, global AI spending will reach $632 billion in 2028, but these costs remain prohibitive for many small organisations.

For organisations that adopt open-source AI tools, the financial returns are measurable: 51% of surveyed companies currently utilising open-source AI tools report positive ROI, compared to just 41% of those not using open-source.

However, energy scarcity presents a growing concern, with the International Energy Agency projecting that data centres’ electricity consumption could double from 2022 levels to 1,000 TWh by 2026, equivalent to Japan’s entire electricity demand. While training AI models is energy-intensive, inference, due to its scale and frequency, can ultimately exceed training energy consumption.
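
For perspective, a doubling over four years implies annual growth of roughly 19%. The calculation below is a rough extrapolation: the ~500 TWh 2022 baseline is inferred from the “double from 2022 levels to 1,000 TWh” framing rather than stated directly.

```python
# Implied annual growth rate if data-centre consumption doubles between 2022 and 2026
consumption_2022 = 500   # TWh, inferred baseline (half of the projected 1,000 TWh)
consumption_2026 = 1000  # TWh, IEA projection cited above
years = 2026 - 2022
cagr = (consumption_2026 / consumption_2022) ** (1 / years) - 1
print(f"~{cagr:.1%} per year")  # -> ~18.9% per year
```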

Ensuring broad AI accessibility requires both hardware optimisations and scalable software frameworks.  A range of organisations are developing models tailored to their specific needs, and US leadership in efficiency-focused AI development presents a strategic advantage. The DOE’s AI for Energy initiative further supports research into energy-efficient AI, facilitating wider adoption without excessive computational demands.

With its letter to the OSTP, Hugging Face advocates for an AI Action Plan centred on open-source principles. By taking decisive action, the US can secure its leadership, drive innovation, enhance security, and ensure the widespread benefits of AI are realised across society and the economy.

See also: UK minister in US to pitch Britain as global AI investment hub

UK minister in US to pitch Britain as global AI investment hub
https://www.artificialintelligence-news.com/news/uk-minister-in-us-pitch-britain-global-ai-investment-hub/
Thu, 20 Mar 2025

The UK aims to secure its position as a global AI leader by attracting additional investment, and Technology Secretary Peter Kyle is currently in the US to champion Britain’s credentials.

As the UK government prioritises AI within its “Plan for Change,” Kyle’s visit aims to strengthen the special relationship between the UK and the US that has been under particular strain in recent years.

Speaking at NVIDIA’s annual conference in San Jose on 20th March, Kyle outlined the government’s strategy to “rewire” the British economy around AI. This initiative seeks to distribute the benefits of AI-driven wealth creation beyond traditional hubs like Silicon Valley and London, empowering communities across the UK to embrace its opportunities.

Addressing an audience of business leaders, developers, and innovators, the Technology Secretary articulated his vision for leveraging AI and advanced technologies to tackle complex global challenges, positioning Britain as a beacon of innovation.

The UK is actively deploying AI to enhance public services and stimulate economic growth, a cornerstone of the government’s “Plan for Change.”

Kyle is now highlighting the significant potential of the UK’s AI sector, currently valued at over $92 billion and projected to exceed $1 trillion by 2035. This growth trajectory, according to the government, will position Britain as the second-leading AI nation in the democratic world—presenting a wealth of investment opportunities for US companies and financial institutions.

A central theme of Kyle’s message is the readiness of the UK to embrace AI investment, with a particular emphasis on transforming “the relics of economic eras past into the UK’s innovative AI Growth Zones.”

These “AI Growth Zones” are a key element of the government’s AI Opportunities Action Plan. They are strategically designated areas designed to rapidly attract large-scale AI investment through streamlined regulations and dedicated infrastructure.

AI Growth Zones, as the name suggests, are envisioned as vibrant hubs for AI development with a pipeline of new opportunities for companies to scale up and innovate. The Technology Secretary is actively encouraging investors to participate in this new form of partnership.

During his speech at the NVIDIA conference, Kyle is expected to detail how these Growth Zones – benefiting from access to substantial power connections and a planning system designed to expedite construction – will facilitate the development of a compute infrastructure on a scale that the UK “has never seen before.”

The government has already received numerous proposals from local leaders and industry stakeholders across the nation, demonstrating Britain’s eagerness to utilise AI to revitalise communities and drive economic growth throughout the country.

This initiative is expected to contribute to higher living standards across the UK, a key priority for the government over the next four years. The AI Growth Zones are intended to deliver the jobs, investment, and a thriving business environment necessary to improve the financial well-being of citizens and deliver on the “Plan for Change.”

At the NVIDIA conference, Kyle is expected to say: “In empty factories and abandoned mines, in derelict sites and unused power supplies, I see the places where we can begin to build a new economic model. A model completely rewired around the immense power of artificial intelligence.

“Where, faced with that power, the state is neither a blocker nor a shirker—but an agile, proactive partner. In Britain, we want to turn the relics of economic eras past into AI Growth Zones.”

As part of his visit to the US, Peter Kyle will also engage with prominent companies in the tech sector, including OpenAI, Anthropic, NVIDIA, and Vantage. His aim is to encourage more of these companies to establish a presence in the UK, positioning it as their “Silicon Valley home from home.”

Furthermore, the Technology Secretary is expected to state: “There is a real hunger for investment in Britain, and people who are optimistic about the future, and hopeful for the opportunities which AI will bring for them and their families. States owe it to their citizens to support it. Not through diktat or directive, but through partnership.”

The UK Prime Minister and the President of the US have placed AI at the forefront of the transatlantic relationship. During a visit to the White House last month, the Prime Minister confirmed that both nations are collaborating on a new economic deal with advanced technologies at its core.

Since unveiling its new AI strategy at the beginning of the year and assigning the technology a central role in delivering the government’s ‘Plan for Change,’ the UK has already witnessed significant investment from US companies seeking to establish AI bases in Britain.

Notable recent investments include a substantial £12 billion commitment from Vantage Data Centers to significantly expand Britain’s data infrastructure, which is projected to create approximately 11,500 jobs. Additionally, last month saw the UK Government formalise a partnership with Anthropic to enhance collaboration on leveraging AI to improve public services nationwide.

By strengthening these partnerships with leading US tech firms and investors, the UK’s AI sector is well-positioned for sustained growth as the government aims to continue to remove innovation barriers.

See also: OpenAI and Google call for US government action to secure AI lead

OpenAI and Google call for US government action to secure AI lead
https://www.artificialintelligence-news.com/news/openai-and-google-call-us-government-action-secure-ai-lead/
Fri, 14 Mar 2025

OpenAI and Google are each urging the US government to take decisive action to secure the nation’s AI leadership.

“As America’s world-leading AI sector approaches AGI, with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI,” wrote OpenAI, in a letter to the Office of Science and Technology Policy.

In a separate letter, Google echoed this sentiment by stating, “While America currently leads the world in AI – and is home to the most capable and widely adopted AI models and tools – our lead is not assured.”    

A plan for the AI Action Plan

OpenAI highlighted AI’s potential to “scale human ingenuity,” driving productivity, prosperity, and freedom.  The company likened the current advancements in AI to historical leaps in innovation, such as the domestication of the horse, the invention of the printing press, and the advent of the computer.

We are at “the doorstep of the next leap in prosperity,” according to OpenAI CEO Sam Altman. The company stresses the importance of “freedom of intelligence,” advocating for open access to AGI while safeguarding against autocratic control and bureaucratic barriers.

OpenAI also outlined three scaling principles (see the sketch after the list):

  1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.
  2. The cost to use a given level of AI capability falls by about 10x every 12 months.
  3. The amount of calendar time it takes to improve an AI model keeps decreasing.
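
Read together, the first two principles sketch a simple capability-and-cost curve. The toy calculation below is illustrative only (a framing of the stated principles, not OpenAI’s published maths): it treats capability as log-proportional to resources and applies the stated 10x-per-12-months cost decline.

```python
import math

# Principle 1 (illustrative): capability grows roughly with the log of resources
def capability(resources: float) -> float:
    return math.log10(resources)

# Principle 2: the cost of a fixed capability level falls ~10x every 12 months
def cost_of_capability(initial_cost: float, months: float) -> float:
    return initial_cost * 10 ** (-months / 12)

print(capability(1e25) - capability(1e24))  # 10x more resources -> +1 capability "unit"
print(cost_of_capability(100.0, 24))        # a $100 workload today -> ~$1 in 24 months
```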

Google also has a three-point plan for the US to focus on:

  1. Invest in AI: Google called for coordinated action to address the surging energy needs of AI infrastructure, balanced export controls, continued funding for R&D, and pro-innovation federal policy frameworks.
  2. Accelerate and modernise government AI adoption: Google urged the federal government to lead by example through AI adoption and deployment, including implementing multi-vendor, interoperable AI solutions and streamlining procurement processes.
  3. Promote pro-innovation approaches internationally: Google advocated for an active international economic policy to support AI innovation, championing market-driven technical standards, working with aligned countries to address national security risks, and combating restrictive foreign AI barriers.

AI policy recommendations for the US government

Both companies provided detailed policy recommendations to the US government.

OpenAI’s proposals include:

  • A regulatory strategy that ensures the freedom to innovate through voluntary partnership between the federal government and the private sector.    
  • An export control strategy that promotes the global adoption of American AI systems while protecting America’s AI lead.    
  • A copyright strategy that protects the rights of content creators while preserving American AI models’ ability to learn from copyrighted material.    
  • An infrastructure opportunity strategy to drive growth, including policies to support a thriving AI-ready workforce and ecosystems of labs, start-ups, and larger companies.    
  • An ambitious government adoption strategy to ensure the US government itself sets an example of using AI to benefit its citizens.    

Google’s recommendations include:

  • Advancing energy policies to power domestic data centres, including transmission and permitting reform.    
  • Adopting balanced export control policies that support market access while targeting pertinent risks.    
  • Accelerating AI R&D, streamlining access to computational resources, and incentivising public-private partnerships.    
  • Crafting a pro-innovation federal framework for AI, including federal legislation that prevents a patchwork of state laws, ensuring industry has access to data that enables fair learning, emphasising sector-specific and risk-based AI governance, and supporting workforce initiatives to develop AI skills.    

Both OpenAI and Google emphasise the need for swift and decisive action. OpenAI warned that America’s lead in AI is narrowing, while Google stressed that policy decisions will determine the outcome of the global AI competition.

“We are in a global AI competition, and policy decisions will determine the outcome,” Google explained. “A pro-innovation approach that protects national security and ensures that everyone benefits from AI is essential to realising AI’s transformative potential and ensuring that America’s lead endures.”

See also: Gemma 3: Google launches its latest open AI models

DeepSeek ban? China data transfer boosts security concerns
https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/
Fri, 07 Feb 2025

US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence: cybersecurity investigators identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a starker, better-documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That raises the question of whether it’s truly “free”, or whether the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even potential economic manipulations.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

ChatGPT Gov aims to modernise US government agencies
https://www.artificialintelligence-news.com/news/chatgpt-gov-aims-modernise-us-government-agencies/
Tue, 28 Jan 2025

OpenAI has launched ChatGPT Gov, a specially designed version of its AI chatbot tailored for use by US government agencies.

ChatGPT Gov aims to harness the potential of AI to enhance efficiency, productivity, and service delivery while safeguarding sensitive data and complying with stringent security requirements.

“We believe the US government’s adoption of artificial intelligence can boost efficiency and productivity and is crucial for maintaining and enhancing America’s global leadership in this technology,” explained OpenAI.

The company emphasised how its AI solutions present “enormous potential” for tackling complex challenges in the public sector, ranging from improving public health and infrastructure to bolstering national security.

By introducing ChatGPT Gov, OpenAI hopes to offer tools that “serve the national interest and the public good, aligned with democratic values,” while assisting policymakers in responsibly integrating AI to enhance services for the American people.

The role of ChatGPT Gov

Public sector organisations can deploy ChatGPT Gov within their own Microsoft Azure environments, either through Azure’s commercial cloud or the specialised Azure Government cloud.

This self-hosting capability ensures that agencies can meet strict security, privacy, and compliance standards, such as IL5, CJIS, ITAR, and FedRAMP High. 

OpenAI believes this infrastructure will not only facilitate compliance with cybersecurity frameworks, but also speed up internal authorisation processes for handling non-public sensitive data.
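
For a sense of what self-hosting looks like in practice, the sketch below shows how an agency might call an Azure-hosted OpenAI deployment. This is an assumption-laden illustration, not ChatGPT Gov’s documented interface: the endpoint URL and deployment name are hypothetical, and it uses the standard openai Python SDK’s AzureOpenAI client.

```python
import os
from openai import AzureOpenAI  # openai SDK v1.x; ChatGPT Gov's actual interface may differ

# Hypothetical agency-controlled Azure Government endpoint and deployment
client = AzureOpenAI(
    azure_endpoint="https://my-agency.openai.azure.us",  # hypothetical endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-gov",  # hypothetical deployment name configured by the agency
    messages=[{"role": "user", "content": "Summarise this procurement memo in plain language."}],
)
print(response.choices[0].message.content)
```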

The tailored version of ChatGPT incorporates many of the features found in the enterprise version, including:

  • The ability to save and share conversations within a secure government workspace.
  • Uploading text and image files for streamlined workflows.
  • Access to GPT-4o, OpenAI’s state-of-the-art model capable of advanced text interpretation, summarisation, coding, image analysis, and mathematics.
  • Customisable GPTs, which enable users to create and share specifically tailored models for their agency’s needs.
  • A built-in administrative console to help CIOs and IT departments manage users, groups, security protocols such as single sign-on (SSO), and more.

These features ensure that ChatGPT Gov is not merely a tool for innovation, but infrastructure that supports secure and efficient operations across US public-sector entities.

OpenAI says it’s actively working to achieve FedRAMP Moderate and High accreditations for its fully managed SaaS product, ChatGPT Enterprise, a step that would bolster trust in its AI offerings for government use.

Additionally, the company is exploring ways to expand ChatGPT Gov’s capabilities into Azure’s classified regions for even more secure environments.

“ChatGPT Gov reflects our commitment to helping US government agencies leverage OpenAI’s technology today,” the company said.

A better track record in government than most politicians

Since January 2024, ChatGPT has seen widespread adoption among US government agencies, with over 90,000 users across more than 3,500 federal, state, and local agencies having already sent over 18 million messages to support a variety of operational tasks.

Several notable agencies have highlighted how they are employing OpenAI’s AI tools for meaningful outcomes:

  • The Air Force Research Laboratory: The lab uses ChatGPT Enterprise for administrative purposes, including improving access to internal resources, basic coding assistance, and boosting AI education efforts.
  • Los Alamos National Laboratory: The laboratory leverages ChatGPT Enterprise for scientific research and innovation. This includes work within its Bioscience Division, which is evaluating ways GPT-4o can safely advance bioscientific research in laboratory settings.
  • State of Minnesota: Minnesota’s Enterprise Translations Office uses ChatGPT Team to provide faster, more accurate translation services to multilingual communities across the state. The integration has resulted in significant cost savings and reduced turnaround times.
  • Commonwealth of Pennsylvania: Employees in Pennsylvania’s pioneering AI pilot programme reported that ChatGPT Enterprise helped them reduce routine task times, such as analysing project requirements, by approximately 105 minutes per day on days they used the tool.

These early use cases demonstrate the transformative potential of AI applications across various levels of government.

Beyond delivering tangible improvements to government workflows, OpenAI seeks to foster public trust in artificial intelligence through collaboration and transparency. The company said it is committed to working closely with government agencies to align its tools with shared priorities and democratic values. 

“We look forward to collaborating with government agencies to enhance service delivery to the American people through AI,” OpenAI stated.

As other governments across the globe begin adopting similar technologies, America’s proactive approach may serve as a model for integrating AI into the public sector while safeguarding against risks.

Whether supporting administrative workflows, research initiatives, or language services, ChatGPT Gov stands as a testament to the growing role AI will play in shaping the future of effective governance.

See also: Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents

AI governance: Analysing emerging global regulations
https://www.artificialintelligence-news.com/news/ai-governance-analysing-emerging-global-regulations/
Thu, 19 Dec 2024

Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more.

AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.

“The boom of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys.

“This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”

Regions diverge in regulatory strategy

The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026.

Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”

Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations specific to certain AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021.

“In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said.

“Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”

The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level.

“There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted.

This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason.

“There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys.

“It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”

Balancing innovation and safety

Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions.

Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack.

“More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.

This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised.

AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators.

“Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.

Impact on related industries

One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution.

“From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys. 

However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny.

“AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added.

“At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”

Copyright battles and legal precedents

The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools.

High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission.

“These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys.

While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve?

“Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys.

“It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”

Just this week, the UK Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out.

Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.

The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms.

See also: Anthropic urges AI regulation to avoid catastrophes

UK establishes LASR to counter AI security threats
https://www.artificialintelligence-news.com/news/uk-establishes-lasr-counter-ai-security-threats/
Mon, 25 Nov 2024

The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.”

The laboratory – which will receive initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to assess AI’s impact on national security. The announcement comes as part of a broader strategy to strengthen the UK’s cyber defence capabilities.

Speaking at the NATO Cyber Defence Conference at Lancaster House, the Chancellor of the Duchy of Lancaster said: “NATO needs to continue to adapt to the world of AI, because as the tech evolves, the threat evolves.

“NATO has stayed relevant over the last seven decades by constantly adapting to new threats. It has navigated the worlds of nuclear proliferation and militant nationalism. The move from cold warfare to drone warfare.”

The Chancellor painted a stark picture of the current cyber security landscape, stating: “Cyber war is now a daily reality. One where our defences are constantly being tested. The extent of the threat must be matched by the strength of our resolve to combat it and to protect our citizens and systems.”

The new laboratory will operate under a ‘catalytic’ model, designed to attract additional investment and collaboration from industry partners.

Key stakeholders in the new lab include GCHQ, the National Cyber Security Centre, the MOD’s Defence Science and Technology Laboratory, and prestigious academic institutions such as the University of Oxford and Queen’s University Belfast.

In a direct warning about Russia’s activities, the Chancellor declared: “Be in no doubt: the United Kingdom and others in this room are watching Russia. We know exactly what they are doing, and we are countering their attacks both publicly and behind the scenes.

“We know from history that appeasing dictators engaged in aggression against their neighbours only encourages them. Britain learned long ago the importance of standing strong in the face of such actions.”

Reaffirming support for Ukraine, he added, “Putin is a man who wants destruction, not peace. He is trying to deter our support for Ukraine with his threats. He will not be successful.”

The new lab follows recent concerns about state actors using AI to bolster existing security threats.

“Last year, we saw the US for the first time publicly call out a state for using AI to aid its malicious cyber activity,” the Chancellor noted, referring to North Korea’s attempts to use AI for malware development and vulnerability scanning.

Stephen Doughty, Minister for Europe, North America and UK Overseas Territories, highlighted the dual nature of AI technology: “AI has enormous potential. To ensure it remains a force for good in the world, we need to understand its threats and its opportunities.”

Alongside LASR, the government announced a new £1 million incident response project to enhance collaborative cyber defence capabilities among allies. The laboratory will prioritise collaboration with Five Eyes countries and NATO allies, building on the UK’s historical strength in computing, dating back to Alan Turing’s groundbreaking work.

The initiative forms part of the government’s comprehensive approach to cybersecurity, which includes the upcoming Cyber Security and Resilience Bill and the recent classification of data centres as critical national infrastructure.

(Photo by Erik Mclean)

See also: Anthropic urges AI regulation to avoid catastrophes


President Biden issues first National Security Memorandum on AI
https://www.artificialintelligence-news.com/news/president-biden-issues-first-national-security-memorandum-ai/
Fri, 25 Oct 2024

President Biden has issued the US’ first-ever National Security Memorandum (NSM) on AI, addressing how the nation approaches the technology from a security perspective.

The memorandum, which builds upon Biden’s earlier executive order on AI, is founded on the premise that cutting-edge AI developments will substantially impact national security and foreign policy in the immediate future.

Security experts suggest the implications are already being felt. “AI already has implications for national security, as we know that more and more attackers are using AI to create higher volume and more complex attacks, especially in the social engineering and misinformation fronts,” says Melissa Ruzzi, Director of AI at AppOmni.

At its core, the NSM outlines three primary objectives: establishing US leadership in safe AI development, leveraging AI technologies for national security, and fostering international governance frameworks.

“Our competitors want to upend US AI leadership and have employed economic and technological espionage in efforts to steal US technology,” the memorandum states, elevating the protection of American AI innovations to a “top-tier intelligence priority.”

The document formally designates the AI Safety Institute as the primary governmental point of contact for the AI industry. The institute will be staffed with technical experts and will maintain close partnerships with national security agencies, including the intelligence community, the Department of Defense, and the Department of Energy.

“The actions listed in the memo are great starting points to get a good picture of the status quo and obtain enough information to make decisions based on data, instead of jumping to conclusions to make decisions based on vague assumptions,” Ruzzi explains.

However, Ruzzi cautions that “the data that needs to be collected on the actions is not trivial, and even with the data, assumptions and trade-offs will be necessary for final decision making. Making decisions after data gathering is where the big challenge will be.”

In a notable move to democratise AI research, the memorandum reinforces support for the National AI Research Resource pilot programme. This initiative aims to extend AI research capabilities beyond major tech firms to universities, civil society organisations, and small businesses.

The NSM introduces the Framework to Advance AI Governance and Risk Management in National Security (PDF), which establishes comprehensive guidelines for implementing AI in national security applications. These guidelines mandate rigorous risk assessment procedures and safeguards against privacy invasions, bias, discrimination, and human rights violations.

Security considerations feature prominently in the framework, with Ruzzi emphasising their importance: “Cybersecurity of AI is crucial – we know that if AI is misconfigured, it can pose risks similar to misconfigurations in SaaS applications that cause confidential data to be exposed.”
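
Ruzzi’s SaaS comparison is easy to picture concretely. As a purely illustrative sketch (the configuration keys, safe defaults, and audit policy below are hypothetical assumptions, not drawn from the memorandum or any real framework), a naive audit might flag the same classes of misconfiguration in an AI deployment that expose data in SaaS applications:

```python
# Hypothetical sketch: auditing an AI deployment's configuration for the
# kinds of misconfiguration Ruzzi compares to SaaS data-exposure risks.
# Every key and safe default below is an illustrative assumption.

RISKY_DEFAULTS = {
    "public_inference_endpoint": False,  # endpoint should not be world-reachable
    "auth_required": True,               # requests must be authenticated
    "log_prompts_with_pii": False,       # avoid retaining sensitive inputs
    "allow_unreviewed_plugins": False,   # third-party tools need review first
}

def audit_config(config: dict) -> list[str]:
    """Return findings wherever the config deviates from the safe defaults."""
    findings = []
    for key, safe_value in RISKY_DEFAULTS.items():
        actual = config.get(key, safe_value)
        if actual != safe_value:
            findings.append(f"{key}={actual!r} (expected {safe_value!r})")
    return findings

if __name__ == "__main__":
    deployment = {"public_inference_endpoint": True, "auth_required": False}
    for finding in audit_config(deployment):
        print("WARNING:", finding)
```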

On the international front, the memorandum builds upon recent diplomatic achievements, including the G7’s International Code of Conduct on AI and agreements reached at the Bletchley and Seoul AI Safety Summits. Notably, 56 nations have endorsed the US-led Political Declaration on the Military Use of AI and Autonomy.

The Biden administration has also secured a diplomatic victory with the passage of the first UN General Assembly Resolution on AI, which garnered unanimous support, including co-sponsorship from China.

The memorandum emphasises the critical role of semiconductor manufacturing in AI development, connecting to Biden’s earlier CHIPS Act. It directs actions to enhance chip supply chain security and diversity, ensuring American leadership in advanced computing infrastructure.

This latest initiative forms part of the Biden-Harris Administration’s broader strategy for responsible innovation in the AI sector, reinforcing America’s commitment to maintaining technological leadership while upholding democratic values and human rights.

(Photo by Nils Huenerfuerst)

See also: EU AI Act: Early prep could give businesses competitive edge


California Assembly passes controversial AI safety bill
https://www.artificialintelligence-news.com/news/california-assembly-passes-controversial-ai-safety-bill/
Thu, 29 Aug 2024

The California State Assembly has approved the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047).

The bill, which has sparked intense debate in Silicon Valley and beyond, aims to impose a series of safety measures on AI companies operating within California. These precautions must be implemented before training advanced foundation models.

Key requirements of the bill include:

  • Implementing mechanisms for swift and complete model shutdown (see the sketch after this list)
  • Safeguarding models against “unsafe post-training modifications”
  • Establishing testing procedures to assess the potential risks of models or their derivatives causing “critical harm”
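
SB 1047 does not prescribe how a shutdown mechanism should work. Purely as a sketch of what the first requirement could look like in practice (every class, name, and job ID below is a hypothetical stand-in, not anything the bill specifies), a minimal kill-switch might disable serving and halt training in a single auditable step:

```python
# Hypothetical sketch of a "full shutdown" control of the kind SB 1047
# asks developers to implement; the bill itself prescribes no design.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("kill_switch")

class ModelController:
    """Illustrative controller owning one model's serving and training state."""

    def __init__(self, model_name: str):
        self.model_name = model_name
        self.serving_enabled = True
        self.training_jobs = ["pretrain-run-1", "finetune-run-2"]  # placeholder IDs

    def full_shutdown(self, reason: str) -> None:
        """Disable inference and stop all training jobs, leaving an audit trail."""
        timestamp = datetime.now(timezone.utc).isoformat()
        self.serving_enabled = False  # stop answering inference requests
        halted, self.training_jobs = self.training_jobs, []
        log.info("%s: full shutdown of %s (%s); halted jobs: %s",
                 timestamp, self.model_name, reason, halted)

controller = ModelController("frontier-model-x")
controller.full_shutdown(reason="regulator-ordered safety stop")
```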

Senator Scott Wiener, the primary author of SB 1047, said: “We’ve worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted.”

The senator emphasised that the bill simply asks large AI laboratories to follow through on their existing commitments to test their extensive models for catastrophic safety risks.

However, the proposed legislation has faced opposition from various quarters, including AI companies OpenAI and Anthropic, politicians Zoe Lofgren and Nancy Pelosi, and California’s Chamber of Commerce. Critics argue that the bill places excessive focus on catastrophic harms and could disproportionately affect small, open-source AI developers.

In response to these concerns, several amendments were made to the original bill. These changes include:

  • Replacing potential criminal penalties with civil ones
  • Limiting the enforcement powers granted to California’s attorney general
  • Modifying requirements for joining the “Board of Frontier Models” created by the bill

The next step for SB 1047 is a vote in the State Senate, where it is expected to pass. Should this occur, the bill will then head to Governor Gavin Newsom, who will have until the end of September to decide whether to sign it into law.

As one of the first significant AI regulations in the US, the passage of SB 1047 could set a precedent for future legislation. The outcome of this bill may have far-reaching implications for the AI industry, potentially influencing the development and deployment of advanced AI models not only in California but across the nation and beyond.

(Photo by Josh Hild)

See also: Chinese firms use cloud loophole to access US AI tech


OpenAI warns California’s AI bill threatens US innovation
https://www.artificialintelligence-news.com/news/openai-warns-california-ai-bill-threatens-us-innovation/
Thu, 22 Aug 2024

OpenAI has added its voice to the growing chorus of tech leaders and politicians opposing a controversial AI safety bill in California. The company argues that the legislation, SB 1047, would stifle innovation and that regulation should be handled at a federal level.

In a letter sent to California State Senator Scott Wiener’s office, OpenAI expressed concerns that the bill could have “broad and significant” implications for US competitiveness and national security. The company argued that SB 1047 would threaten California’s position as a global leader in AI, prompting talent to seek “greater opportunity elsewhere.” 

Introduced by Senator Wiener, the bill aims to enact “common sense safety standards” for companies developing large AI models exceeding specific size and cost thresholds. These standards would require companies to implement shut-down mechanisms, take “reasonable care” to prevent catastrophic outcomes, and submit compliance statements to the California attorney general. Failure to comply could result in lawsuits and civil penalties.

Lieutenant General John (Jack) Shanahan, who served in the US Air Force and was the inaugural director of the US Department of Defense’s Joint Artificial Intelligence Center (JAIC), believes the bill “thoughtfully navigates the serious risks that AI poses to both civil society and national security” and provides “pragmatic solutions”.

Hon. Andrew C. Weber – former Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs – echoed the national security concerns.

“The theft of a powerful AI system from a leading lab by our adversaries would impose considerable risks on us all,” said Weber. “Developers of the most advanced AI systems need to take significant cybersecurity precautions given the potential risks involved in their work. I’m glad to see that SB 1047 helps establish the necessary protective measures.”

SB 1047 has sparked fierce opposition from major tech companies, startups, and venture capitalists who argue that it overreaches for a nascent technology, potentially stifling innovation and driving businesses from the state. These concerns are echoed by OpenAI, with sources revealing that the company has paused plans to expand its San Francisco offices due to the uncertain regulatory landscape.

Senator Wiener defended the bill, noting that OpenAI’s letter fails to “criticise a single provision.” He dismissed concerns about a talent exodus as “nonsensical,” arguing that the law would apply to any company conducting business in California, regardless of its physical location. Wiener highlighted the bill’s “highly reasonable” requirement for large AI labs to test their models for catastrophic safety risks, a practice many have already committed to.

Critics, however, counter that mandating the submission of model details to the government will hinder innovation. They also fear that the threat of lawsuits will deter smaller, open-source developers from establishing startups. In response to the backlash, Senator Wiener recently amended the bill to eliminate criminal liability for non-compliant companies, safeguard smaller developers, and remove the proposed “Frontier Model Division.”

OpenAI maintains that a clear federal framework, rather than state-level regulation, is essential for preserving public safety while maintaining US competitiveness against rivals like China. The company highlighted the suitability of federal agencies, such as the White House Office of Science and Technology Policy and the Department of Commerce, to govern AI risks.

Senator Wiener acknowledged the ideal of congressional action but expressed scepticism about its likelihood. He drew parallels with California’s data privacy law, passed in the absence of federal action, suggesting that inaction from Congress shouldn’t preclude California from taking a leading role.

The California state assembly is set to vote on SB 1047 this month. If passed, the bill will land on the desk of Governor Gavin Newsom, whose stance on the legislation remains unclear. However, Newsom has publicly recognised the need to balance AI innovation with risk mitigation.

(Photo by Solen Feyissa)

See also: OpenAI delivers GPT-4o fine-tuning


UAE blocks US congressional meetings with G42 amid AI transfer concerns
https://www.artificialintelligence-news.com/news/uae-blocks-us-congressional-meetings-g42-ai-transfer-concerns/
Wed, 31 Jul 2024

The United Arab Emirates (UAE) has reportedly “suddenly cancelled” a series of meetings between US congressional staffers and Emirati AI firm G42, after some US lawmakers raised concerns that the engagements could lead to the transfer of advanced American AI technology to China.

The information came from a congressional spokesperson who, as reported by Reuters, spoke on condition of anonymity in line with internal committee policy.

The order came directly from the UAE’s ambassador to the US, halting the meetings between staffers from the House Select Committee on China, G42, and various Emirati government officials. The move adds to already high tensions surrounding scrutiny of G42 amid its $1.5 billion agreement with Microsoft; some members of Congress are worried that sensitive technology could end up in the hands of a UAE firm that reportedly has Chinese ties.

The committee’s spokesperson expressed increased concerns regarding the G42-Microsoft deal due to the UAE’s unwillingness to engage in talks. “Expect Congress to become more involved in overseeing these negotiations,” the spokesperson said.

The cancelled meetings point to a growing diplomatic rift, driven by the increased attention of China hawks in Congress. These lawmakers’ efforts to scrutinise the G42-Microsoft deal closely have proved especially contentious, with members of Congress focused on ensuring that sensitive AI developments and products resulting from the agreement are not diverted by the Emiratis to China.

The State Department declined to comment, while G42 directed the media to the Emirati government. A UAE embassy spokesperson said the situation resulted from a “miscommunication,” as the embassy was notified of the staff delegation only the day before its planned arrival. The embassy emphasised its regular engagement with committee members and staffers in recent months, asserting that the committee has been kept informed about joint UAE-US efforts to strengthen control over critical advanced technologies.

The congressional staffers had planned these meetings as part of a regional visit from July 16-19. Their agenda included discussions on the transfer of sophisticated chips from companies like Nvidia to the UAE and Saudi Arabia, as well as US-China tech competition.

Ambassador Yousef Al Otaiba cited a July 11 letter from committee chairman John Moolenaar to US National Security Advisor Jake Sullivan as the reason for the cancellations. This letter, co-signed by House Foreign Affairs chair Michael McCaul, requested a White House intelligence briefing on Microsoft’s investment in G42 before the deal could progress to its second phase, which would involve transferring export-restricted semiconductor chips from Nvidia and sophisticated AI model weights.

The Biden administration has taken a positive view of the G42-Microsoft deal, stating that G42’s severance of ties with China’s Huawei was a major factor in its favour. However, last year the administration also imposed sweeping curbs on AI chip exports, adopting a licensing regime more restrictive than that of the previous Trump administration. That policy, aimed chiefly at restricting exports to China, also requires licenses for shipments to the UAE and some other Middle Eastern countries.

The congressional delegation did, however, meet Saudi officials during the regional visit. The officials sought to allay US companies’ concerns about the Chinese government’s activities in Saudi Arabia, with the goal of securing permission to import advanced American chips.

The episode illustrates how tightly technological innovation, international political relationships, and national security concerns have become intertwined.

See also: UAE unveils new AI model to rival big tech giants

