policy Archives - AI News

Tony Blair Institute AI copyright report sparks backlash

Wed, 02 Apr 2025

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators—noting its use in various fields from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance but ultimately saw society adapt and human ingenuity prevail.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.
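
One concrete illustration of those implementation challenges: the closest thing publishers have today to a machine-readable opt-out is a site’s robots.txt file, which some AI crawlers honour. The sketch below, using Python’s standard urllib.robotparser, shows how a crawler could consult it before collecting a page for training. The crawler name is hypothetical, and nothing currently obliges a developer to run such a check—which is precisely the enforcement problem.

    from urllib.parse import urlsplit
    from urllib.robotparser import RobotFileParser

    def may_collect_for_training(page_url: str, user_agent: str = "ExampleAIBot") -> bool:
        """Treat a robots.txt disallow rule for our agent as a training opt-out."""
        parts = urlsplit(page_url)
        parser = RobotFileParser()
        parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        parser.read()  # fetches and parses the site's robots.txt
        return parser.can_fetch(user_agent, page_url)

    # Only ingest the page if the publisher has not opted out.
    if may_collect_for_training("https://example.com/article"):
        ...  # fetch the page and add it to the training corpus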

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces.

Furthermore, the Tony Blair Institute explores the challenges of governing an opt-out policy: the practical problems of implementing opt-outs, how to make them useful and accessible, and how to tackle the diffusion problem. It also addresses AI summaries and the identity problems they present, defensive tools as a partial solution, and ways of solving licensing problems.

The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from the Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents

Hugging Face calls for open-source focus in the AI Action Plan

Thu, 20 Mar 2025

Hugging Face has called on the US government to prioritise open-source development in its forthcoming AI Action Plan.

In a statement to the Office of Science and Technology Policy (OSTP), Hugging Face emphasised that “thoughtful policy can support innovation while ensuring that AI development remains competitive, and aligned with American values.”

Hugging Face, which hosts over 1.5 million public models across various sectors and serves seven million users, proposes an AI Action Plan centred on three interconnected pillars:

  1. Hugging Face stresses the importance of strengthening open-source AI ecosystems. The company argues that technical innovation stems from diverse actors across institutions, and that support for infrastructure – such as the National AI Research Resource (NAIRR) – and investment in open science and data allow these contributions to have an additive effect and accelerate robust innovation.
  2. The company prioritises efficient and reliable adoption of AI. Hugging Face believes that spreading the benefits of the technology by facilitating its adoption along the value chain requires actors across sectors to shape its development. It states that more efficient, modular, and robust AI models require research and infrastructural investments to enable the broadest possible participation and innovation—enabling diffusion of the technology across the US economy.
  3. Hugging Face also highlights the need to promote security and standards. The company suggests that decades of practice in open-source software cybersecurity, information security, and standards can inform safer AI technology. It advocates for promoting traceability, disclosure, and interoperability standards to foster a more resilient and robust technology ecosystem.

Open-source is key for AI advancement in the US (and beyond)

Hugging Face underlines that modern AI is built on decades of open research, with commercial giants relying heavily on open-source contributions. Recent breakthroughs – such as OLMO-2 and Olympic-Coder – demonstrate that open research remains a promising path to developing systems that match the performance of commercial models, and can often surpass them, especially in terms of efficiency and performance in specific domains.

“Perhaps most striking is the rapid compression of development timelines,” notes the company. “What once required over 100B parameter models just two years ago can now be accomplished with 2B parameter models, suggesting an accelerating path to parity.”

This trend towards more accessible, efficient, and collaborative AI development indicates that open approaches to AI development have a critical role to play in enabling a successful AI strategy that maintains technical leadership and supports more widespread and secure adoption of the technology.

Hugging Face argues that open models, infrastructure, and scientific practices constitute the foundation of AI innovation, allowing a diverse ecosystem of researchers, companies, and developers to build upon shared knowledge.

The company’s platform hosts AI models and datasets from both small actors (e.g., startups, universities) and large organisations (e.g., Microsoft, Google, OpenAI, Meta), demonstrating how open approaches accelerate progress and democratise access to AI capabilities.

“The United States must lead in open-source AI and open science, which can enhance American competitiveness by fostering a robust ecosystem of innovation and ensuring a healthy balance of competition and shared innovation,” states Hugging Face.

Research has shown that open technical systems act as force multipliers for economic impact, with an estimated 2000x multiplier effect. This means that $4 billion invested in open systems could potentially generate $8 trillion in value for companies using them.
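
As a quick back-of-the-envelope check of that claim (assuming the multiplier is applied as a straight factor):

    investment = 4e9    # $4 billion invested in open systems
    multiplier = 2000   # estimated force-multiplier effect
    print(f"${investment * multiplier / 1e12:.0f} trillion")  # -> $8 trillion, matching the cited figure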

These economic benefits extend to national economies as well. Without any open-source software contributions, the average country would lose 2.2% of its GDP. Open-source drove between €65 billion and €95 billion of European GDP in 2018 alone, a finding so significant that the European Commission cited it when establishing new rules to streamline the process for open-sourcing government software.

This demonstrates how open-source impact translates directly into policy action and economic advantage at the national level, underlining the importance of open-source as a public good.

Practical factors driving commercial adoption of open-source AI

Hugging Face identifies several practical factors driving the commercial adoption of open models:

  • Cost efficiency is a major driver, as developing AI models from scratch requires significant investment, so leveraging open foundations reduces R&D expenses.
  • Customisation is crucial, as organisations can adapt and deploy models specifically tailored to their use cases rather than relying on one-size-fits-all solutions.
  • Open models reduce vendor lock-in, giving companies greater control over their technology stack and independence from single providers.
  • Open models have caught up to and, in certain cases, surpassed the capabilities of closed, proprietary systems.

These factors are particularly valuable for startups and mid-sized companies, which can access cutting-edge technology without massive infrastructure investments. Banks, pharmaceutical companies, and other industries have been adapting open models to specific market needs—demonstrating how open-source foundations support a vibrant commercial ecosystem across the value chain.

Hugging Face’s policy recommendations to support open-source AI in the US

To support the development and adoption of open AI systems, Hugging Face offers several policy recommendations:

  • Enhance research infrastructure: Fully implement and expand the National AI Research Resource (NAIRR) pilot. Hugging Face’s active participation in the NAIRR pilot has demonstrated the value of providing researchers with access to computing resources, datasets, and collaborative tools.
  • Allocate public computing resources for open-source: The public should have ways to participate via public AI infrastructure. One way to do this would be to dedicate a portion of publicly-funded computing infrastructure to support open-source AI projects, reducing barriers to innovation for smaller research teams and companies that cannot afford proprietary systems.
  • Enable access to data for developing open systems: Create sustainable data ecosystems through targeted policies that address the decreasing data commons. Publishers are increasingly signing data licensing deals with proprietary AI model developers, meaning that quality data acquisition costs are now approaching or even surpassing computational expenses of training frontier models, threatening to lock out small open developers from access to quality data.  Support organisations that contribute to public data repositories and streamline compliance pathways that reduce legal barriers to responsible data sharing.
  • Develop open datasets: Invest in the creation, curation, and maintenance of robust, representative datasets that can support the next generation of AI research and applications. Expand initiatives like the IBM AI Alliance Trusted Data Catalog and support projects like IDI’s AI-driven Digitization of the public collections in the Boston Public Library.
  • Strengthen rights-respecting data access frameworks: Establish clear guidelines for data usage, including standardised protocols for anonymisation, consent management, and usage tracking.  Support public-private partnerships to create specialised data trusts for high-value domains like healthcare and climate science, ensuring that individuals and organisations maintain appropriate control over their data while enabling innovation.    
  • Invest in stakeholder-driven innovation: Create and support programmes that enable organisations across diverse sectors (healthcare, manufacturing, education) to develop customised AI systems for their specific needs, rather than relying exclusively on general-purpose systems from major providers. This enables broader participation in the AI ecosystem and ensures that the benefits of AI extend throughout the economy.
  • Strengthen centres of excellence: Expand NIST’s role as a convener for AI experts across academia, industry, and government to share lessons and develop best practices.  In particular, the AI Risk Management Framework has played a significant role in identifying stages of AI development and research questions that are critical to ensuring more robust and secure technology deployment for all. The tools developed at Hugging Face, from model documentation to evaluation libraries, are directly shaped by these questions.
  • Support high-quality data for performance and reliability evaluation: AI development depends heavily on data, both to train models and to reliably evaluate their progress, strengths, risks, and limitations. Fostering greater access to public data in a safe and secure way and ensuring that the evaluation data used to characterise models is sound and evidence-based will accelerate progress in both performance and reliability of the technology.

Prioritising efficient and reliable AI adoption

Hugging Face highlights that smaller companies and startups face significant barriers to AI adoption due to high costs and limited resources. According to IDC, global AI spending will reach $632 billion in 2028, but these costs remain prohibitive for many small organisations.

Adopting open-source AI tools also brings financial returns: 51% of surveyed companies currently utilising open-source AI tools report positive ROI, compared to just 41% of those not using open-source.

However, energy scarcity presents a growing concern, with the International Energy Agency projecting that data centres’ electricity consumption could double from 2022 levels to 1,000 TWh by 2026, equivalent to Japan’s entire electricity demand. While training AI models is energy-intensive, inference, due to its scale and frequency, can ultimately exceed training energy consumption.
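
Taken at face value, those figures imply a baseline and growth rate along the following lines (a rough derivation assuming smooth year-on-year growth, not an IEA calculation):

    consumption_2026 = 1000.0                # TWh, projected
    consumption_2022 = consumption_2026 / 2  # "double from 2022 levels" -> ~500 TWh
    years = 2026 - 2022
    cagr = (consumption_2026 / consumption_2022) ** (1 / years) - 1
    print(f"~{consumption_2022:.0f} TWh in 2022, ~{cagr:.0%} growth per year")  # ~19% annually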

Ensuring broad AI accessibility requires both hardware optimisations and scalable software frameworks.  A range of organisations are developing models tailored to their specific needs, and US leadership in efficiency-focused AI development presents a strategic advantage. The DOE’s AI for Energy initiative further supports research into energy-efficient AI, facilitating wider adoption without excessive computational demands.

With its letter to the OSTP, Hugging Face advocates for an AI Action Plan centred on open-source principles. By taking decisive action, the US can secure its leadership, drive innovation, enhance security, and ensure the widespread benefits of AI are realised across society and the economy.

See also: UK minister in US to pitch Britain as global AI investment hub

UK must act to secure its semiconductor industry leadership

Mon, 17 Feb 2025

The UK semiconductor industry is at a critical juncture, with techUK urging the government to act to maintain its global competitiveness.

Laura Foster, Associate Director of Technology and Innovation at techUK, said: “The UK has a unique opportunity to lead in the global semiconductor landscape, but success will require bold action and sustained commitment.

“By accelerating the implementation of the National Semiconductor Strategy, we can unlock investment, foster innovation, and strengthen our position in this critical industry.”

Semiconductors are the backbone of modern technology, powering everything from consumer electronics to AI data centres. With the global semiconductor market projected to reach $1 trillion by 2030, the UK must act to secure its historic leadership in this lucrative and strategically vital industry.

“We must act at pace to secure the UK’s semiconductor future and as such our technological and economic resilience,” explains Foster.

UK semiconductor industry strengths and challenges

The UK has long been a leader in semiconductor design and intellectual property (IP), with Cambridge in particular serving as a global hub for innovation.

Companies like Arm, which designs chips used in 99% of the world’s smartphones, exemplify the UK’s strengths in this area. However, a techUK report warns that these strengths are under threat due to insufficient investment, skills shortages, and a lack of tailored support for the sector.

“The UK is not starting from zero,” the report states. “We have globally competitive capabilities in design and IP, but we must double down on these strengths to compete internationally.”

The UK’s semiconductor industry contributed £12 billion in turnover in 2021, with 90% of companies expecting growth in the coming years. However, the sector faces significant challenges, including high costs, limited access to private capital, and a reliance on international talent.

The report highlights that only 5% of funding for UK semiconductor startups originates domestically, with many companies struggling to find qualified investors.

A fundamental need for strategic investment and innovation

The report makes 27 recommendations across six key areas, including design and IP, R&D, manufacturing, skills, and global partnerships.

Some of the key proposals include:

  • Turn current strengths into leadership: The UK must leverage its existing capabilities in design, IP, and compound semiconductors. This includes supporting regional clusters like Cambridge and South Wales, which have proven track records of innovation.
  • Establish a National Semiconductor Centre: This would act as a central hub for the industry, providing support for businesses, coordinating R&D efforts, and fostering collaboration between academia and industry.
  • Expand R&D tax credits: The report calls for the inclusion of capital expenditure in R&D tax credits to incentivise investment in new facilities and equipment.
  • Create a Design Competence Centre: This would provide shared facilities for chip designers, reducing the financial risk of innovation and supporting the development of advanced designs.
  • Nurture skills: The UK must address the skills shortage in the semiconductor sector by upskilling workers, attracting international talent, and promoting STEM education.
  • Capitalise on global partnerships: The UK must strengthen its position in the global semiconductor supply chain by forming strategic partnerships with allied countries. This includes collaborating on R&D, securing access to critical materials, and navigating export controls.

Urgent action is required to secure the UK semiconductor industry

The report warns that the UK risks falling behind other nations if it does not act quickly. Countries like the US, China, and the EU have already announced significant investments in their domestic semiconductor industries.

The European Chips Act, for example, has committed €43 billion to support semiconductor infrastructure, skills, and startups.

“Governments across the world are acting quickly to attract semiconductor companies while also building domestic capability,” the report states. “The UK must use its existing resources tactically, playing to its globally recognised strengths within the semiconductor value chain.”

The UK’s semiconductor industry has the potential to be a global leader, but this will require sustained investment, strategic planning, and collaboration between government, industry, and academia.

“The UK Government should look to its semiconductor ambitions as an essential part of delivering the wider Industrial Strategy and securing not just the fastest growth in the G7, but also secure and resilient economic growth,” the report concludes.

(Photo by Rocco Dipoppa)

See also: AI in 2025: Purpose-driven models, human integration, and more

NEPC: AI sprint risks environmental catastrophe

Fri, 07 Feb 2025

The government is urged to mandate stricter reporting for data centres to mitigate environmental risks associated with the AI sprint.

A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction.

The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, the Chartered Institute of IT.

While stressing that data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks  

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the UK’s tech strategy, the AI Opportunities Action Plan is intended to “boost economic growth, provide jobs for the future and improve people’s everyday lives.”  

Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.  

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion.  

With plans already in place to reform the UK’s planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers’ ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI  

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the UK as a leader in resource-efficient AI:  

  1. Expand environmental reporting mandates
  2. Communicate the sector’s environmental impacts
  3. Set sustainability requirements for data centres
  4. Reconsider data collection, storage, and management practices
  5. Lead by example with government investment

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres’ energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource use data necessary for policymaking.  
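
To make that concrete, a per-site disclosure under such a mandate might capture fields like those sketched below. This is a hypothetical shape—the field names are illustrative, not drawn from the NEPC report:

    from dataclasses import dataclass

    @dataclass
    class DataCentreDisclosure:
        site_id: str
        period: str                    # reporting window, e.g. "2025-Q1"
        energy_used_mwh: float         # total electricity consumed
        carbon_free_energy_pct: float  # share matched by carbon-free sources
        water_consumed_m3: float       # total water consumption
        potable_water_pct: float       # share drawn from drinking-water sources
        emissions_tco2e: float         # operational carbon emissions
        ewaste_recycled_pct: float     # share of decommissioned hardware recycled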

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels.  

Smarter, greener data centres  

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero drinking water use for cooling, and the exclusive use of 100% carbon-free energy certificates.  

Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.  

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.  

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action:  

“In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.  

“This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment.”  

Rodden added that reliable data on these impacts is critical. “To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the UK.”

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. “Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI’s computing power and storage are as sustainable as possible.  

“That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential.”  

Childs emphasised the importance of a coordinated approach from the start of projects. “As the UK government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step.”  

For Alex Bardell, Chair of BCS’ Green IT Specialist Group, the focus is on optimising AI processes. “Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the UK may fall behind in the AI arena; this may not necessarily be true.  

“It is crucial to reevaluate our approach to developing sustainable AI in the future.”  

Time for transparency around AI environmental risks

Public awareness of AI’s environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six UK residents are aware of the significant environmental costs associated with AI systems.  

“AI providers must be transparent about these effects,” said Professor Sarvapali Ramchurn, CEO of Responsible AI UK and a Fellow of the IET. “If we cannot measure it, we cannot manage it, nor ensure benefits for all. This report’s recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved.”  

As the UK pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts

AI hallucinations gone wrong as Alaska uses fake stats in policy

Tue, 05 Nov 2024

The combination of artificial intelligence and policymaking can occasionally have unforeseen repercussions, as seen recently in Alaska.

In an unusual turn of events, Alaska legislators reportedly used AI-generated citations that were inaccurate to justify a proposed policy banning cellphones in schools. As reported by The Alaska Beacon, Alaska’s Department of Education and Early Development (DEED) presented a policy draft containing references to academic studies that simply did not exist.

The situation arose when Alaska’s Education Commissioner, Deena Bishop, used generative AI to draft the cellphone policy. The document produced by the AI included supposed scholarly references that were neither verified nor accurate, yet the document did not disclose the use of AI in its preparation. Some of the AI-generated content reached the Alaska State Board of Education and Early Development before it could be reviewed, potentially influencing board discussions.

Commissioner Bishop later claimed that AI was used only to “create citations” for an initial draft and asserted that she corrected the errors before the meeting by sending updated citations to board members. However, AI “hallucinations”—fabricated information generated when AI attempts to create plausible yet unverified content—were still present in the final document that was voted on by the board.

The final resolution, published on DEED’s website, directs the department to establish a model policy for cellphone restrictions in schools. Unfortunately, the document included six citations, four of which seemed to be from respected scientific journals. However, the references were entirely made up, with URLs that led to unrelated content. The incident shows the risks of using AI-generated data without proper human verification, especially when making policy decisions.
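
Even a crude automated first pass can flag some fabricated references. The sketch below only checks that each cited URL resolves at all; in Alaska’s case some URLs resolved but led to unrelated content, so a check like this complements, rather than replaces, human verification:

    import urllib.request

    def url_resolves(url: str, timeout: float = 10.0) -> bool:
        """Return True if the URL answers with a non-error HTTP status."""
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.status < 400
        except Exception:
            return False

    citations = ["https://example.com/study-1"]  # URLs extracted from a draft
    dead = [url for url in citations if not url_resolves(url)]
    if dead:
        print("Citations needing human review:", dead)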

Alaska’s case is not an isolated one. AI hallucinations are increasingly common in a variety of professional sectors. For example, some legal professionals have faced consequences for using AI-generated, fictitious case citations in court. Similarly, academic papers created using AI have included distorted data and fake sources, presenting serious credibility concerns. When left unchecked, generative AI algorithms, which are meant to produce content based on patterns rather than factual accuracy, can easily produce misleading citations.

The reliance on AI-generated data in policymaking, particularly in education, carries significant risks. When policies are developed based on fabricated information, they may misallocate resources and potentially harm students. For instance, a policy restricting cellphone use based on fabricated data may divert attention from more effective, evidence-based interventions that could genuinely benefit students.

Furthermore, using unverified AI data can erode public trust in both the policymaking process and AI technology itself. Such incidents underscore the importance of fact-checking, transparency, and caution when using AI in sensitive decision-making areas, especially in education, where impact on students can be profound.

Alaska officials attempted to downplay the situation, referring to the fabricated citations as “placeholders” intended for later correction. However, the document with the “placeholders” was still presented to the board and used as the basis for a vote, underscoring the need for rigorous oversight when using AI.

(Photo by Hartono Creative Studio)

See also: Anthropic urges AI regulation to avoid catastrophes

Anthropic urges AI regulation to avoid catastrophes

Fri, 01 Nov 2024

Anthropic has flagged the potential risks of AI systems and calls for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.

As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.

Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offence-related tasks and expects future models to be even more effective.

Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.

In addressing these risks, Anthropic points to its Responsible Scaling Policy (RSP), released in September 2023, as a robust countermeasure. The RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.

The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says its commitment to maintaining and enhancing safety spans various team expansions, particularly in security, interpretability, and trust, ensuring readiness for the rigorous safety standards set by its RSP.

Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.

Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.

Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.

In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.

Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models. 

While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.

Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against threats internally and externally.

By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.

(Image Credit: Anthropic)

See also: President Biden issues first National Security Memorandum on AI

X now permits AI-generated adult content

Mon, 03 Jun 2024

Social media network X has updated its rules to formally permit users to share consensually-produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.

While X’s violent content rules have similar guidelines, the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats or content inciting or glorifying violence remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

MIT publishes white papers to guide AI governance

Mon, 11 Dec 2023

A committee of MIT leaders and scholars has published a series of white papers aiming to shape the future of AI governance in the US. The comprehensive framework outlined in these papers seeks to extend existing regulatory and liability approaches to effectively oversee AI while fostering its benefits and mitigating potential harm.

Titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” the main policy paper proposes leveraging current US government entities to regulate AI tools within their respective domains.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, emphasises the pragmatic approach of initially focusing on areas where human activity is already regulated and gradually expanding to address emerging risks associated with AI.

The framework underscores the importance of defining the purpose of AI tools, aligning regulations with specific applications and holding AI providers accountable for the intended use of their technologies.

Asu Ozdaglar, deputy dean of academics in the MIT Schwarzman College of Computing, believes having AI providers articulate the purpose and intent of their tools is crucial for determining liability in case of misuse.

Addressing the complexity of AI systems existing at multiple levels, the brief acknowledges the challenges of governing both general and specific AI tools. The proposal advocates for a self-regulatory organisation (SRO) structure to supplement existing agencies, offering responsive and flexible oversight tailored to the rapidly evolving AI landscape.

Furthermore, the policy papers call for advancements in auditing AI tools—exploring various pathways such as government-initiated, user-driven, or legal liability proceedings.

The consideration of a government-approved SRO – akin to the Financial Industry Regulatory Authority (FINRA) – is proposed to enhance domain-specific knowledge and facilitate practical engagement with the dynamic AI industry.

MIT’s involvement in AI governance stems from its recognised expertise in AI research, positioning the institution as a key contributor to addressing the challenges posed by evolving AI technologies. The release of these white papers signals MIT’s commitment to promoting responsible AI development and usage.

You can find MIT’s series of AI policy briefs here.

(Photo by Aaron Burden on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

NIST announces AI consortium to shape US policies

Fri, 03 Nov 2023

In a bid to address the challenges associated with the development and deployment of AI, the National Institute of Standards and Technology (NIST) has formed a new consortium. 

This development was announced in a document published to the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.

The document states, “This notice is the initial step for NIST in collaborating with non-profit organisations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”

The primary objective of this collaboration is to create and implement specific policies and measurements that ensure a human-centred approach to AI safety and governance within the United States.

Collaborators within the consortium will be tasked with a range of functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.

NIST’s initiative comes in response to a recent executive order issued by US President Joseph Biden, which outlined six new standards for AI safety and security.

While European and Asian countries have been proactive in instituting policies governing AI systems concerning user and citizen privacy, security, and potential unintended consequences, the US has lagged.

President Biden’s executive order and the establishment of the Safety Institute Consortium mark significant strides in the right direction, yet there remains a lack of clarity regarding the timeline for the implementation of laws governing AI development and deployment in the US.

Many experts have questioned whether current laws, designed for conventional businesses and technology, are adequate when applied to the rapidly evolving AI sector.

The formation of the AI consortium signifies a crucial step towards shaping the future of AI policies in the US. It reflects a collaborative effort between government bodies, non-profit organisations, universities, and technology companies to ensure responsible and ethical AI practices within the nation.

(Photo by Muhammad Rizki on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet
https://www.artificialintelligence-news.com/news/uk-deputy-pm-ai-most-extensive-industrial-revolution-yet/
Mon, 14 Aug 2023

Britain’s Deputy Prime Minister Oliver Dowden has shared his view that AI will be the most “extensive” industrial revolution yet.

Dowden highlighted AI’s dual role, emphasising its capacity to augment productivity and streamline mundane tasks. However, he also put the spotlight on the looming threats it poses to democracies worldwide.

In an interview with The Times, Dowden said: “This is a total revolution that is coming. It’s going to totally transform almost all elements of life over the coming years, and indeed, even months, in some cases.

“It is much faster than other revolutions that we’ve seen and much more extensive, whether that’s the invention of the internal combustion engine or the industrial revolution.”

Already making inroads into governmental processes, AI has been adopted for processing asylum claim applications within the UK’s Home Office. The potential for AI-driven automation also extends to reducing paperwork burdens in ministerial decision-making, ultimately enabling swifter and more efficient governance.

Sridhar Iyengar, Managing Director for Zoho Europe, commented:

“As AI continues to develop at a rapid pace, collaboration between government, business, and industry experts is needed to increase education and introduce regulations or guidelines which can guide its ethical use.

Only then can businesses confidently use AI in the right way and understand how to avoid any negative impact.”

While AI can expedite information analysis and facilitate decision-making, Dowden emphasised that the crucial task of making policy choices remains squarely within the human domain. He stressed that the objective is to utilise AI for tasks that it excels at – such as data collation – to facilitate informed decision-making by human leaders.

Discussing the broader economic implications of the AI revolution, Dowden likened the impending shift to the advent of the automobile. He recognised the potential for significant workforce upheaval and asserted that the government’s responsibility lies in aiding citizens’ transition as AI reshapes industries.

Sheila Flavell CBE, COO of FDM Group, explained:

“In order to truly maximise the potential of AI, the UK must prioritise a workforce of technically skilled staff capable of leading the development and deployment of AI to work alongside staff and make their day-to-day roles easier.

People such as graduates, ex-forces and returners are well-placed to play a central role in this workforce through education courses and training in AI, supporting businesses with this rapidly-evolving technology.”

Dowden acknowledged the inherent risks posed by AI’s exponential growth. He warned of the potential for AI to be exploited by malicious actors – from terrorists using it to gain knowledge of dangerous materials to those conducting large-scale hacking operations.

Referring to a recent breach that exposed the personal details of thousands of officers and staff from the Police Service of Northern Ireland, Dowden said the incident was an “industrial scale breach of data” that was made possible by AI.

Andy Ward, VP of International for Absolute Software, said:

“We are in the midst of an AI revolution and, for all the business benefits that AI brings, we must also be wary of the potential cybersecurity concerns that come with any new technology.

AI can be used to positive effect when bolstering cyber defences, playing a role in threat detection through data and pattern analysis to identify certain attacks, but we have to acknowledge that malicious actors also have access to AI to increase the sophistication of their threats.”

While urging a measured response to potential AI-driven threats, Dowden emphasised the importance of addressing risks and vulnerabilities proactively. He stressed the need to strike a balance between harnessing AI’s immense potential for societal progress and ensuring that safeguards are in place to counter its misuse.

Earlier this year, the UK announced that it will host a global summit to address AI risks.

(Image Credit: UK Government under CC BY 2.0 license)

See also: Google report highlights AI’s impact on the UK economy

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

GitHub CEO: The EU ‘will define how the world regulates AI’
https://www.artificialintelligence-news.com/news/github-ceo-eu-will-define-how-world-regulates-ai/
Mon, 06 Feb 2023

GitHub CEO Thomas Dohmke addressed the EU Open Source Policy Summit in Brussels and gave his views on the bloc’s upcoming AI Act.

“The AI Act will define how the world regulates AI and we need to get it right, for developers and the open-source community,” said Dohmke.

Dohmke was born and grew up in Germany but now lives in the US. As such, he is all too aware of the widespread belief that the EU cannot lead when it comes to tech innovation.

“As a European, I love seeing how open-source AI innovations are beginning to break the narrative that only the US and China can lead on tech innovation.”

“I’ll be honest, as a European living in the United States, this is a pervasive – and often true – narrative. But this can change. And it’s already beginning to, thanks to open-source developers.”

AI will revolutionise just about every aspect of our lives. Regulation is vital to minimise the risks associated with AI while allowing the benefits to flourish.

“Together, OSS (Open Source Software) developers will use AI to help make our lives better. I have no doubt that OSS developers will help build AI innovations that empower those with disabilities, help us solve climate change, and save lives.”

A risk of overregulation is that it drives innovation elsewhere. Startups are more likely to establish themselves in countries like the US and China, where regulations are less strict. Europe would then find itself falling behind and having less influence on the global stage when it comes to AI.

“The AI Act is so crucial. This policy could well set the precedent for how the world regulates AI. It is foundationally important. Important for European technological leadership, and the future of the European economy itself. The AI Act must be fair and balanced for the open-source community.

“Policymakers should help us get there. The AI Act can foster democratised innovation and solidify Europe’s leadership in open, values-based artificial intelligence. That is why I believe that open-source developers should be exempt from the AI Act.”

Expanding on his belief that open-source developers should be exempt, Dohmke explained that the compliance burden should fall on those shipping products.

“OSS developers are often volunteers. Many are working two jobs. They are scientists, doctors, academics, professors, and university students alike. They don’t usually stand to profit from their contributions—and they certainly don’t have big budgets and compliance departments!”

EU lawmakers are hoping to agree on draft AI rules next month with the aim of winning the acceptance of member states by the end of the year.

“Open-source is forming the foundation of AI innovation in Europe. The US and China don’t have to win it all. Let’s break that narrative apart!

“Let’s give the open-source community the daylight and the clarity to grow their ideas and build them for the rest of the world! And by doing so, let’s give Europe the chance to be a leader in this new age of AI.”

GitHub’s policy paper on the AI Act can be found here.

(Image Credit: Collision Conf under CC BY 2.0 license)

Relevant: US and EU agree to collaborate on improving lives with AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Democrats renew push for ‘algorithmic accountability’
https://www.artificialintelligence-news.com/news/democrats-renew-push-for-algorithmic-accountability/
Fri, 04 Feb 2022

Democrats have reintroduced their Algorithmic Accountability Act that seeks to hold tech firms accountable for bias in their algorithms.

The bill is an updated version of one first introduced by Senator Ron Wyden (D-OR) in 2019 but never passed the House or Senate. The updated bill was introduced this week by Wyden alongside Senator Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY).

Concern about bias in algorithms is increasing as they are used for ever more critical decisions. Bias would lead to inequalities being automated, with some people given more opportunities than others.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalised communities,” said Booker.

A human can always be held accountable for a decision to, say, reject a mortgage or loan application. There is currently little to no accountability for algorithmic decisions.

Representative Yvette Clarke explained:

“When algorithms determine who goes to college, who gets healthcare, who gets a home, and even who goes to prison, algorithmic discrimination must be treated as the highly significant issue that it is.

These large and impactful decisions, which have become increasingly void of human input, are forming the foundation of our American society that generations to come will build upon. And yet, they are subject to a wide range of flaws from programming bias to faulty datasets that can reinforce broader societal discrimination, particularly against women and people of colour.

It is long past time Congress act to hold companies and software developers accountable for their discrimination by automation.

With our renewed Algorithmic Accountability Act, large companies will no longer be able to turn a blind eye towards the deleterious impact of their automated systems, intended or not. We must ensure that our 21st Century technologies become tools of empowerment, rather than marginalisation and seclusion.”

The bill would force audits of AI systems, with findings reported to the Federal Trade Commission. A public database would be created so decisions can be reviewed, giving consumers confidence.
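
The bill does not prescribe particular audit tooling, but one metric an audit of this kind might report is the gap in automated approval rates across demographic groups. The sketch below is illustrative only – the data, group labels, and choice of metric are assumptions, not requirements of the Act:

```python
# Minimal sketch of one bias metric an algorithmic audit might report:
# the gap in approval rates across demographic groups (the
# "demographic-parity" gap). All data here is illustrative.
from collections import defaultdict

decisions = [  # (group, approved) outcomes from a hypothetical decision system
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {group: approvals[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print("Approval rates by group:", rates)
print(f"Demographic-parity gap: {gap:.2f}")  # an auditor might flag gaps above a threshold
```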

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” commented Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

In our predictions for the AI industry in 2022, we predicted an increased focus on Explainable AI (XAI): artificial intelligence whose results can be understood by humans, which is seen as a partial solution to algorithmic bias.
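
As a minimal illustration of what XAI looks like in practice, one common technique scores how strongly each input feature drives a model's predictions, so a reviewer can see whether a decision leans on a legitimate factor or a proxy for a protected attribute. The sketch below uses scikit-learn's permutation importance on synthetic data; the feature names and model choice are illustrative assumptions:

```python
# Sketch: explaining which features drive a classifier's decisions
# using permutation importance, one common XAI technique.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # synthetic application data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome depends on features 0 and 1 only

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Illustrative feature names for a lending-style decision.
for name, score in zip(["income", "credit_history", "postcode"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
# A high importance on a proxy feature such as 'postcode' could signal
# indirect discrimination worth investigating.
```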

“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Senator Tammy Baldwin (D-Wis), who is co-sponsoring the bill.

“It is long past time for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”

Joining Baldwin in co-sponsoring the Algorithmic Accountability Act are Senators Brian Schatz (D-Hawaii), Mazie Hirono (D-Hawaii), Ben Ray Luján (D-NM), Bob Casey (D-Pa), and Martin Heinrich (D-NM).

A copy of the full bill is available here (PDF).

(Photo by Darren Halstead on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
