legislation Archives - AI News
https://www.artificialintelligence-news.com/news/tag/legislation/

Eric Schmidt: AI misuse poses an ‘extreme risk’ (Thu, 13 Feb 2025)
https://www.artificialintelligence-news.com/news/eric-schmidt-ai-misuse-poses-extreme-risk/

Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid advances in AI could be exploited to create weapons, including those for biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.

“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warned.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. While he noted that tech leaders are generally aware of AI’s societal implications, he cautioned that they may make decisions based on values that differ from those of public officials.

“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.  

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.

The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  

However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. His view was echoed by US Vice-President JD Vance, who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.

“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.

From deepfakes to autonomous weapons, AI poses a bevy of risks if left without safeguards against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.

While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

DeepSeek ban? China data transfer boosts security concerns (Fri, 07 Feb 2025)
https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/

US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices, and Microsoft and OpenAI have launched a probe into a breach of the latter’s systems by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.
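
The researchers’ specific tooling isn’t public, but the general detection approach is conceptually simple: capture the app’s outbound web traffic and check each request against a watchlist of domains tied to sanctioned entities. Below is a minimal sketch of that idea; the hostnames, log format, and helper names are hypothetical, not the investigators’ actual endpoints or code.

```python
# Illustrative sketch only: flag outbound requests whose destination is on a
# watchlist of sanctioned domains. All hostnames here are hypothetical.
from urllib.parse import urlparse

WATCHLIST = {"sanctioned-telecom.example.cn"}  # hypothetical watchlist entry

def flag_suspect_requests(request_urls):
    """Return URLs whose host matches a watchlist domain or a subdomain."""
    suspects = []
    for url in request_urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in WATCHLIST):
            suspects.append(url)
    return suspects

captured = [
    "https://api.chat.example.com/v1/completions",
    "https://auth.sanctioned-telecom.example.cn/token",  # would be flagged
]
print(flag_suspect_requests(captured))
```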

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading legislation that would prohibit DeepSeek from being installed on any government-issued device.

Several federal agencies, among them NASA and the US Navy, have already preemptively banned DeepSeek, and the state of Texas has introduced similar restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence: cybersecurity investigators have identified the app’s unauthorised data transmissions. While some liken DeepSeek to the TikTok controversy, security experts argue that it represents a starker and better-documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That raises the question of whether it’s truly “free” or whether the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even enable economic manipulation.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

NEPC: AI sprint risks environmental catastrophe (Fri, 07 Feb 2025)
https://www.artificialintelligence-news.com/news/nepc-ai-sprint-risks-environmental-catastrophe/

The UK government is being urged to mandate stricter reporting for data centres to mitigate the environmental risks associated with the AI sprint.

A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction.

The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, the Chartered Institute of IT.

While stressing that data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks  

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the UK’s tech strategy, the AI Opportunities Action Plan is intended to “boost economic growth, provide jobs for the future and improve people’s everyday lives.”  

Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.  

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion.  

With plans already in place to reform the UK’s planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers’ ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI  

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the UK as a leader in resource-efficient AI:  

  1. Expand environmental reporting mandates
  2. Communicate the sector’s environmental impacts
  3. Set sustainability requirements for data centres
  4. Reconsider data collection, storage, and management practices
  5. Lead by example with government investment

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres’ energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource use data necessary for policymaking.  
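
For a sense of what such reporting could standardise: the industry already expresses these quantities as efficiency ratios, such as power, water, and carbon usage effectiveness (PUE, WUE, and CUE). The sketch below shows how the standard metrics are computed; the annual figures are illustrative, not data from the NEPC report.

```python
# Standard data-centre efficiency metrics. Input figures are illustrative.

def pue(total_facility_energy_kwh, it_equipment_energy_kwh):
    """Power Usage Effectiveness: 1.0 is ideal (all power reaches IT load)."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

def wue(annual_water_litres, it_equipment_energy_kwh):
    """Water Usage Effectiveness, in litres per kWh of IT energy."""
    return annual_water_litres / it_equipment_energy_kwh

def cue(total_co2e_kg, it_equipment_energy_kwh):
    """Carbon Usage Effectiveness, in kgCO2e per kWh of IT energy."""
    return total_co2e_kg / it_equipment_energy_kwh

# Illustrative annual figures for a single facility
print(f"PUE: {pue(52_000_000, 40_000_000):.2f}")          # 1.30
print(f"WUE: {wue(68_000_000, 40_000_000):.2f} L/kWh")    # 1.70
print(f"CUE: {cue(9_200_000, 40_000_000):.2f} kgCO2e/kWh") # 0.23
```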

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels.  

Smarter, greener data centres  

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero drinking water use for cooling, and the exclusive use of 100% carbon-free energy certificates.  

Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.  

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.  

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action:  

“In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.  

“This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment.”  

Rodden added that reliable data on these impacts is critical. “To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the UK.”

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. “Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI’s computing power and storage are as sustainable as possible.  

“That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential.”  

Childs emphasised the importance of a coordinated approach from the start of projects. “As the UK government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step.”  

For Alex Bardell, Chair of BCS’ Green IT Specialist Group, the focus is on optimising AI processes. “Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the UK may fall behind in the AI arena; this may not necessarily be true.  

“It is crucial to reevaluate our approach to developing sustainable AI in the future.”  

Time for transparency around AI environmental risks

Public awareness of AI’s environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six UK residents are aware of the significant environmental costs associated with AI systems.  

“AI providers must be transparent about these effects,” said Professor Sarvapali Ramchurn, CEO of Responsible AI UK and a Fellow of the IET. “If we cannot measure it, we cannot manage it, nor ensure benefits for all. This report’s recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved.”  

As the UK pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI governance: Analysing emerging global regulations (Thu, 19 Dec 2024)
https://www.artificialintelligence-news.com/news/ai-governance-analysing-emerging-global-regulations/

Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more.

AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.

“The boom of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys.

“This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”

Regions diverge in regulatory strategy

The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026.

Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”

Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been introducing regulations for specific AI technologies in phases. According to Šveistys, China began regulating AI models as early as 2021.

“In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said.

“Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”

The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level.

“There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted.

This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason.

“There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys.

“It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”

Balancing innovation and safety

Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions.

Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack.

“More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.

This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised.

AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators.

“Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.

Impact on related industries

One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution.

“From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys. 

However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny.

“AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added.

“At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”

Copyright battles and legal precedents

The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools.

High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission.

“These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys.

While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve?

“Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys.

“It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”

Just this week, the UK Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out.

Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.

The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms.

(Photo by Nathan Bingle)

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Understanding AI’s impact on the workforce (Fri, 08 Nov 2024)
https://www.artificialintelligence-news.com/news/understanding-ai-impact-on-the-workforce/

The Tony Blair Institute (TBI) has examined AI’s impact on the workforce. The report outlines AI’s potential to reshape work environments, boost productivity, and create opportunities—while warning of potential challenges ahead.

“Technology has a long history of profoundly reshaping the world of work,” the report begins.

From the agricultural revolution to the digital age, each wave of innovation has redefined labour markets. Today, AI presents a seismic shift, advancing rapidly and prompting policymakers to prepare for change.

Economic opportunities

The TBI report estimates that AI, when fully adopted by UK firms, could significantly increase productivity. It suggests that AI could save “almost a quarter of private-sector workforce time,” equivalent to the annual output of 6 million workers.
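
The two figures are mutually consistent: a little under a quarter of the working time of a private-sector workforce in the mid-20 millions is roughly six million full-time equivalents. A quick sanity check, with the workforce baseline treated as an assumption since the report’s exact input isn’t quoted here:

```python
# Rough sanity check of the TBI headline claim. The baseline is an
# assumption (UK private-sector employment is roughly 26 million);
# the report's exact inputs are not quoted in the article.
private_sector_workers = 26_000_000
share_of_time_saved = 0.23  # "almost a quarter"

worker_equivalents = private_sector_workers * share_of_time_saved
print(f"≈ {worker_equivalents / 1e6:.1f} million worker-equivalents")  # ≈ 6.0
```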

Most of these time savings are expected to stem from AI-enabled software performing cognitive tasks such as data analysis and routine administrative operations.

The report identifies sectors reliant on routine cognitive tasks, such as banking and finance, as those with significant exposure to AI. However, sectors like skilled trades or construction – which involve complex manual tasks – are likely to see less direct impact.

While AI can result in initial job losses, it also has the potential to create new demand by fostering economic growth and new industries. 

The report expects that these job losses can be offset by new job creation. Technology has historically spurred new employment opportunities, as innovation leads to the development of new products and services.

Shaping future generations

AI’s potential extends into education, where it could assist both teachers and students.

The report suggests that AI could help “raise educational attainment by around six percent” on average. By personalising and supporting learning, AI has the potential to equalise access to opportunities and improve the quality of the workforce over time.

Health and wellbeing

Beyond education, AI offers potential benefits in healthcare, supporting a healthier workforce and reducing welfare costs.

The report highlights AI’s role in speeding medical research, enabling preventive healthcare, and helping those with disabilities re-enter the workforce.

Workplace transformation

The report acknowledges potential workplace challenges, such as increased monitoring and stress from AI tools. It stresses the importance of managing these technologies thoughtfully to “deliver a more engaging, inclusive and safe working environment.”

To mitigate potential disruption, the TBI outlines recommendations. These include upgrading labour-market infrastructure and utilising AI for job matching.

The report suggests creating an “Early Awareness and Opportunity System” to help workers understand the impact of AI on their jobs and provide advice on career paths.

Preparing for an AI-powered future

In light of the uncertainties surrounding AI’s impact on the workforce, the TBI urges policy changes to maximise benefits. Recommendations include incentivising AI adoption across industries, developing AI-pathfinder programmes, and creating challenge prizes to address public-sector labour shortages.

The report concludes that while AI presents risks, the potential gains are too significant to ignore.

Policymakers are encouraged to adopt a “pro-innovation” stance while being attuned to the risks, fostering an economy that is dynamic and resilient.

(Photo by Mimi Thian)

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Anthropic urges AI regulation to avoid catastrophes (Fri, 01 Nov 2024)
https://www.artificialintelligence-news.com/news/anthropic-urges-ai-regulation-avoid-catastrophes/

Anthropic has flagged the potential risks of AI systems and is calling for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.

As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.

Anthropic warns that the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offence-related tasks and expects future models to be even more effective.

Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.

In addressing these risks, Anthropic points to its Responsible Scaling Policy (RSP), released in September 2023, as a robust countermeasure. The RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.

The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says its commitment to maintaining and enhancing safety spans various team expansions, particularly in security, interpretability, and trust, ensuring readiness for the rigorous safety standards set by its RSP.
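
In outline, the RSP ties deployment to capability thresholds: the framework defines AI Safety Levels (ASLs), and a model at a given level may only be deployed once the safeguards matching that level are in place. A minimal sketch of that gating logic follows; the safeguard descriptions are paraphrased illustrations, not the policy’s actual text.

```python
# Illustrative sketch of the RSP's capability-gated deployment idea.
# Safeguard lists are paraphrased examples, not Anthropic's policy text.
REQUIRED_SAFEGUARDS = {
    "ASL-2": {"security best practices", "pre-release misuse evaluations"},
    "ASL-3": {"security best practices", "pre-release misuse evaluations",
              "hardened security against sophisticated attackers",
              "deployment gates for CBRN and cyber capabilities"},
}

def cleared_to_deploy(measured_level: str, implemented: set[str]) -> bool:
    """Deploy only if every safeguard required at this level is in place."""
    return REQUIRED_SAFEGUARDS[measured_level] <= implemented  # subset check

print(cleared_to_deploy("ASL-3", {"security best practices"}))  # False
```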

Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.

Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.

Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.

In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.

Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models. 

While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.

Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully-designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against threats internally and externally.

By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.

(Image Credit: Anthropic)

See also: President Biden issues first National Security Memorandum on AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

EU AI Act: Early prep could give businesses competitive edge (Tue, 22 Oct 2024)
https://www.artificialintelligence-news.com/news/eu-ai-act-early-prep-could-give-businesses-competitive-edge/

The EU AI Act is set to fully take effect in August 2026, but some provisions are coming into force even earlier.

The legislation establishes a first-of-its-kind regulatory framework for AI systems, employing a risk-based approach that categorises AI applications based on their potential impact on safety, human rights, and societal wellbeing.

“Some systems are banned entirely, while systems deemed ‘high-risk’ are subject to stricter requirements and assessments before deployment,” explains the DPO Centre, a data protection consultancy.
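
The Act’s structure is easiest to see as a mapping from system type to obligation tier. The four tiers in the sketch below are the legislation’s own categories; the example systems and the simplified mapping are illustrations, not legal guidance.

```python
# Illustrative sketch of the EU AI Act's risk-based structure.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements and assessment before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Simplified example classifications (not legal guidance)
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value}")
```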

Similar to GDPR, the Act’s extra-territorial reach means it applies to any organisation marketing, deploying, or using AI systems within the EU, regardless of where the system is developed. Businesses will be classified primarily as either ‘Providers’ or ‘Deployers,’ with additional categories for ‘Distributors,’ ‘Importers,’ ‘Product Manufacturers,’ and ‘Authorised Representatives.’

For organisations developing or deploying AI systems, particularly those classified as high-risk, compliance preparation promises to be complex. However, experts suggest viewing this as an opportunity rather than a burden.

“By embracing compliance as a catalyst for more transparent AI usage, businesses can turn regulatory demands into a competitive advantage,” notes the DPO Centre.

Key preparation strategies include comprehensive staff training, establishing robust corporate governance, and implementing strong cybersecurity measures. The legislation’s requirements often overlap with existing GDPR frameworks, particularly regarding transparency and accountability.

Organisations must also adhere to ethical AI principles and maintain clear documentation of their systems’ functionality, limitations, and intended use. The EU is currently developing specific codes of practice and templates to assist with compliance obligations.

For businesses uncertain about their obligations, experts recommend seeking professional guidance early. Tools like the EU AI Act Compliance Checker can help organisations verify their systems’ alignment with regulatory requirements.

Rather than treating compliance as merely a regulatory burden, forward-thinking organisations can use the EU AI Act as an opportunity to demonstrate their commitment to responsible AI development and build greater trust with their customers.

See also: AI governance gap: 95% of firms haven’t implemented frameworks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

SolarWinds: IT professionals want stronger AI regulation (Tue, 17 Sep 2024)
https://www.artificialintelligence-news.com/news/solarwinds-it-professionals-stronger-ai-regulation/

A new survey from SolarWinds has unveiled a resounding call for increased government oversight of AI, with 88% of IT professionals advocating for stronger regulation.

The study, which polled nearly 700 IT experts, highlights security as the paramount concern. An overwhelming 72% of respondents emphasised the critical need for measures to secure infrastructure. Privacy follows closely behind, with 64% of IT professionals urging for more robust rules to protect sensitive information.

Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, commented: “It is understandable that IT leaders are approaching AI with caution. As technology rapidly evolves, it naturally presents challenges typical of any emerging innovation.

“Security and privacy remain at the forefront, with ongoing scrutiny by regulatory bodies. However, it is incumbent upon organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts. This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”

The survey’s findings come at a pivotal moment, coinciding with the implementation of the EU’s AI Act. In the UK, the new Labour government recently proposed its own AI legislation during the latest King’s speech, signalling a growing recognition of the need for regulatory frameworks. In the US, the California State Assembly passed a controversial AI safety bill last month.

Beyond security and privacy, the survey reveals a broader spectrum of concerns amongst IT professionals. A majority (55%) believe government intervention is crucial to stem the tide of AI-generated misinformation. Additionally, half of the respondents support regulations aimed at ensuring transparency and ethical practices in AI development.

Challenges extend beyond AI regulation

However, the challenges facing AI adoption extend beyond regulatory concerns. The survey uncovers a troubling lack of trust in data quality—a cornerstone of successful AI implementation.

Only 38% of respondents consider themselves ‘very trusting’ of the data quality and training used in AI systems. This scepticism is not unfounded, as 40% of IT leaders who have encountered issues with AI attribute these problems to algorithmic errors stemming from insufficient or biased data.

Consequently, data quality emerges as the second most significant barrier to AI adoption (16%), trailing only behind security and privacy risks. This finding underscores the critical importance of robust, unbiased datasets in driving AI success.

“High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes,” adds Johnson. “Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies.”

The survey also sheds light on widespread concerns about database readiness. Less than half (43%) of IT professionals express confidence in their company’s ability to meet the increasing data demands of AI. This lack of preparedness is further exacerbated by the perception that organisations are not moving swiftly enough to implement AI, with 46% of respondents citing ongoing data quality challenges as a contributing factor.

As AI continues to reshape the technological landscape, the findings of this SolarWinds survey serve as a clarion call for both stronger regulation and improved data practices. The message from IT professionals is clear: while AI holds immense promise, its successful integration hinges on addressing critical concerns around security, privacy, and data quality.

(Photo by Kelly Sikkema)

See also: Whitepaper dispels fears of AI-induced job losses

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

California Assembly passes controversial AI safety bill (Thu, 29 Aug 2024)
https://www.artificialintelligence-news.com/news/california-assembly-passes-controversial-ai-safety-bill/

The California State Assembly has approved the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047).

The bill, which has sparked intense debate in Silicon Valley and beyond, aims to impose a series of safety measures on AI companies operating within California. These precautions must be implemented before training advanced foundation models.

Key requirements of the bill include:

  • Implementing mechanisms for swift and complete model shutdown
  • Safeguarding models against “unsafe post-training modifications”
  • Establishing testing procedures to assess the potential risks of models or their derivatives causing “critical harm”

Senator Scott Wiener, the primary author of SB 1047, said: “We’ve worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted.”

The senator emphasised that the bill simply asks large AI laboratories to follow through on their existing commitments to test their extensive models for catastrophic safety risks.

However, the proposed legislation has faced opposition from various quarters, including AI companies OpenAI and Anthropic, politicians Zoe Lofgren and Nancy Pelosi, and California’s Chamber of Commerce. Critics argue that the bill places excessive focus on catastrophic harms and could disproportionately affect small, open-source AI developers.

In response to these concerns, several amendments were made to the original bill. These changes include:

  • Replacing potential criminal penalties with civil ones
  • Limiting the enforcement powers granted to California’s attorney general
  • Modifying requirements for joining the “Board of Frontier Models” created by the bill

The next step for SB 1047 is a vote in the State Senate, where it is expected to pass. Should this occur, the bill will then be presented to Governor Gavin Newsom, who will have until the end of September to make a decision on its enactment.

As one of the first significant AI regulations in the US, the passage of SB 1047 could set a precedent for future legislation. The outcome of this bill may have far-reaching implications for the AI industry, potentially influencing the development and deployment of advanced AI models not only in California but across the nation and beyond.

(Photo by Josh Hild)

See also: Chinese firms use cloud loophole to access US AI tech

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI warns California’s AI bill threatens US innovation (Thu, 22 Aug 2024)
https://www.artificialintelligence-news.com/news/openai-warns-california-ai-bill-threatens-us-innovation/

OpenAI has added its voice to the growing chorus of tech leaders and politicians opposing a controversial AI safety bill in California. The company argues that the legislation, SB 1047, would stifle innovation and that regulation should be handled at a federal level.

In a letter sent to California State Senator Scott Wiener’s office, OpenAI expressed concerns that the bill could have “broad and significant” implications for US competitiveness and national security. The company argued that SB 1047 would threaten California’s position as a global leader in AI, prompting talent to seek “greater opportunity elsewhere.” 

Introduced by Senator Wiener, the bill aims to enact “common sense safety standards” for companies developing large AI models exceeding specific size and cost thresholds. These standards would require companies to implement shut-down mechanisms, take “reasonable care” to prevent catastrophic outcomes, and submit compliance statements to the California attorney general. Failure to comply could result in lawsuits and civil penalties.
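
For context on the size and cost thresholds: the widely reported figures are 10^26 FLOPS of training compute and a training cost above $100 million. The sketch below shows the scoping check under those figures, which should be treated as assumptions here since the bill text and its amendments are not quoted in the article.

```python
# Illustrative sketch of SB 1047's scoping logic. Threshold values are the
# widely reported figures and should be treated as assumptions.
COMPUTE_THRESHOLD_FLOPS = 1e26
COST_THRESHOLD_USD = 100_000_000

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """True if a model would plausibly fall within the bill's scope."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd > COST_THRESHOLD_USD)

print(is_covered_model(5e25, 40_000_000))    # False: under both thresholds
print(is_covered_model(2e26, 150_000_000))   # True: exceeds both
```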

Lieutenant General John (Jack) Shanahan, who served in the US Air Force and was the inaugural director of the US Department of Defense’s Joint Artificial Intelligence Center (JAIC), believes the bill “thoughtfully navigates the serious risks that AI poses to both civil society and national security” and provides “pragmatic solutions”.

Hon. Andrew C. Weber – former Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs – echoed the national security concerns.

“The theft of a powerful AI system from a leading lab by our adversaries would impose considerable risks on us all,” said Weber. “Developers of the most advanced AI systems need to take significant cybersecurity precautions given the potential risks involved in their work. I’m glad to see that SB 1047 helps establish the necessary protective measures.”

SB 1047 has sparked fierce opposition from major tech companies, startups, and venture capitalists who argue that it overreaches for a nascent technology, potentially stifling innovation and driving businesses from the state. These concerns are echoed by OpenAI, with sources revealing that the company has paused plans to expand its San Francisco offices due to the uncertain regulatory landscape.

Senator Wiener defended the bill, stating that OpenAI’s letter fails to “criticise a single provision.” He dismissed concerns about talent exodus as “nonsensical,” stating that the law would apply to any company conducting business in California, regardless of their physical location. Wiener highlighted the bill’s “highly reasonable” requirement for large AI labs to test their models for catastrophic safety risks, a practice many have already committed to.

Critics, however, counter that mandating the submission of model details to the government will hinder innovation. They also fear that the threat of lawsuits will deter smaller, open-source developers from establishing startups.  In response to the backlash, Senator Wiener recently amended the bill to eliminate criminal liability for non-compliant companies, safeguard smaller developers, and remove the proposed “Frontier Model Division.”

OpenAI maintains that a clear federal framework, rather than state-level regulation, is essential for preserving public safety while maintaining US competitiveness against rivals like China. The company highlighted the suitability of federal agencies, such as the White House Office of Science and Technology Policy and the Department of Commerce, to govern AI risks.

Senator Wiener acknowledged the ideal of congressional action but expressed scepticism about its likelihood. He drew parallels with California’s data privacy law, passed in the absence of federal action, suggesting that inaction from Congress shouldn’t preclude California from taking a leading role.

The California state assembly is set to vote on SB 1047 this month. If passed, the bill will land on the desk of Governor Gavin Newsom, whose stance on the legislation remains unclear. However, Newsom has publicly recognised the need to balance AI innovation with risk mitigation.

(Photo by Solen Feyissa)

See also: OpenAI delivers GPT-4o fine-tuning

Balancing innovation and trust: Experts assess the EU’s AI Act
https://www.artificialintelligence-news.com/news/balancing-innovation-trust-experts-assess-eu-ai-act/
Wed, 31 Jul 2024 15:48:45 +0000

As the EU’s AI Act prepares to come into force tomorrow, industry experts are weighing in on its potential impact, highlighting its role in building trust and encouraging responsible AI adoption.

Curtis Wilson, Staff Data Engineer at Synopsys’ Software Integrity Group, believes the new regulation could be a crucial step in addressing the AI industry’s most pressing challenge: building trust.

“The greatest problem facing AI developers is not regulation, but a lack of trust in AI,” Wilson stated. “For an AI system to reach its full potential, it needs to be trusted by the people who use it.”

This sentiment is echoed by Paul Cardno, Global Digital Automation & Innovation Senior Manager at 3M, who noted, “With nearly 80% of UK adults now believing AI needs to be heavily regulated, the introduction of the EU’s AI Act is something that businesses have long been waiting for.”

Both experts emphasise the Act’s potential to foster confidence in AI technologies. Wilson explained that while his company has implemented internal measures to build trust, external regulation is equally important.

“I see regulatory frameworks like the EU AI Act as an essential component to building trust in AI,” Wilson said. “The strict rules and punishing fines will deter careless developers and help customers feel more confident in trusting and using AI systems.”

Cardno added, “We know that AI is shaping the future, but companies will only be able to reap the rewards if they have the confidence to rethink existing processes and break away from entrenched structures.”

The EU AI Act primarily focuses on high-risk systems and foundation models. Wilson noted that many of its requirements align with existing best practices in data science, such as risk management, testing procedures, and comprehensive documentation.

For UK businesses, the EU AI Act’s impact extends beyond companies selling directly into EU markets.

Wilson pointed out that certain aspects of the Act may apply to Northern Ireland due to the Windsor Framework. Additionally, the UK government is developing its own AI regulations, with a recent whitepaper emphasising interoperability with EU and US regulations.

“While the EU Act isn’t perfect, and needs to be assessed in relation to other global regulations, having a clear framework and guidance on AI from one of the world’s major economies will help encourage those who remain on the fence to tap into the AI revolution,” Cardno explained.

While acknowledging that the new regulations may create some friction, particularly around registration and certification, Wilson emphasised that many of the Act’s obligations are already standard practice for responsible companies. However, he recognised that small companies and startups might face greater challenges.

“Small companies and start-ups will experience issues more strongly,” Wilson said. “The regulation acknowledges this and has included provisions for sandboxes to foster AI innovation for these smaller businesses.”

Wilson noted, however, that these sandboxes will be established at the national level by individual EU member states, potentially limiting access for UK businesses.

As the AI landscape continues to evolve, the EU AI Act represents a significant step towards establishing a framework for responsible AI development and deployment.

Such a framework, Cardno concluded, will ensure AI has “a safe, positive ongoing influence for all organisations operating across the EU, which can only be a promising step forwards for the industry.”

(Photo by Guillaume Périgois)

See also: UAE blocks US congressional meetings with G42 amid AI transfer concerns

Global semiconductor shortage: How the US plans to close the talent gap
https://www.artificialintelligence-news.com/news/global-semiconductor-shortage-how-us-plans-close-talent-gap/
Tue, 02 Jul 2024 10:01:35 +0000

The semiconductor industry, a cornerstone of modern technology and economic prosperity, has been dealing with a serious labour shortage for some time. The skills shortage appears to be worsening: according to Deloitte, more than one million additional skilled workers will be required by 2030 to meet demand. This pervasive issue extends beyond the US, affecting key players worldwide and threatening to impede the sector’s growth and innovation.

As countries strive to expand their semiconductor capabilities to meet escalating global demand, particularly since the pandemic, the shortage of skilled workers has emerged as a critical bottleneck, undermining efforts to maintain and advance technological leadership in this vital industry. With over two million direct employees worldwide in 2021 and more than one million extra skilled professionals required by 2030, Deloitte expects the industry to need more than 100,000 new hires every year.
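That annual figure follows from simple arithmetic. Here is a minimal back-of-envelope sketch in Python, assuming the additional workers are hired evenly across the nine years from 2021 to 2030 and ignoring attrition, which would push the number higher still:

    # Back-of-envelope check on Deloitte's semiconductor workforce figures.
    # Assumptions: one million additional workers hired evenly between
    # 2021 and 2030; attrition and ramp-up effects are ignored.
    additional_workers_needed = 1_000_000   # extra skilled professionals by 2030
    hiring_window_years = 2030 - 2021       # nine-year hiring window

    hires_per_year = additional_workers_needed / hiring_window_years
    print(f"Implied annual hires: {hires_per_year:,.0f}")  # -> 111,111

At roughly 111,000 hires a year, the implied pace sits comfortably above Deloitte’s 100,000-a-year figure, and any attrition within the existing two-million-strong workforce would raise it further.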

For background, fewer than 100,000 graduate students enrol in electrical engineering and computer science in the US each year, according to Deloitte’s data. Countries and regions such as Taiwan, South Korea, China, Japan, and Europe are also struggling to find enough qualified workers for their rapidly expanding semiconductor sectors. Taiwan, for instance, had a shortfall of over 30,000 semiconductor workers in late 2021, and South Korea is projected to face a similar gap over the next decade.

China’s shortfall is even more severe, with estimates suggesting a need for over 300,000 additional workers, even before the latest surge in chip demand and the supply chain problems are factored in. The shortage is attributed to several factors, chief among them that many nations have seen their semiconductor manufacturing expertise erode over the years as production moved offshore.

The US, for example, accounts for only about 12% of global chip production, with most of the advanced manufacturing know-how now residing in Asia. A lack of awareness of semiconductor careers among potential recruits also contributes to the talent gap, making it difficult to attract new workers to the field. On top of that, competition for semiconductor talent shows every sign of getting tighter.

CHIPS Act and workforce development

In response to this growing issue, the US has introduced measures under the CHIPS and Science Act aimed at boosting the domestic semiconductor industry and addressing the labour shortage. The Act allocates substantial funding to developing the semiconductor workforce, focusing particularly on technician roles and jobs that do not require a bachelor’s degree. This is significant because about 60% of new semiconductor positions fall into these categories, according to McKinsey.

The CHIPS Act, passed in 2022, promotes various initiatives to build a robust talent pipeline. According to a recent Bloomberg report, the US government is now intensifying these efforts with new initiatives under the Act, marking a significant expansion of the educational and training programmes aimed at developing a skilled workforce tailored to the industry.

“The program, described as a workforce partner alliance, will use some of the $5 billion in federal funding set aside for a new National Semiconductor Technology Center. The NSTC plans to award grants to as many as 10 workforce development projects with budgets of $500,000 to $2 million,” Bloomberg noted.

The NSTC will also launch additional application processes in the coming months, and officials will determine the total level of spending once all proposals have been considered. All of the funding comes from the 2022 CHIPS and Science Act, the landmark law that set aside $39 billion in grants to boost US chipmaking, plus $11 billion for semiconductor research and development, including the NSTC.

Labour shortage: A long-term problem

Even with all these efforts, the semiconductor industry is likely to keep facing labour shortages over the long term. McKinsey’s report highlights that, even with substantial investments in education and training, the sector will struggle to find enough skilled workers to meet its needs.

This is compounded by issues such as a lack of career advancement opportunities, workplace inflexibility, and insufficient support, which various analyses suggest drive many employees to leave the industry. Moreover, competition for semiconductor talent is intensifying globally: companies like Taiwan’s TSMC are recruiting experienced semiconductor workers from the US, India, Canada, Japan, and Europe.

This global competition underscores the urgent need for collaborative initiatives to attract and retain skilled workers. Ultimately, the labour shortage in the semiconductor industry is a complex challenge that will require multifaceted solutions.

(Photo by Vishnu Mohanan)

See also: US clamps down on China-bound investments
