report Archives - AI News (artificialintelligence-news.com)

Are AI chatbots really changing the world of work? (AI News, Fri, 02 May 2025)

We’ve heard endless predictions about how AI chatbots will transform work, but the data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.
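The before/after, user/non-user comparison described here is, in spirit, a difference-in-differences design. A minimal sketch with invented numbers (the study itself used Danish registry data, not these figures):

```python
# Hypothetical difference-in-differences sketch of the comparison described
# above: wages for chatbot adopters vs non-adopters, before vs after ChatGPT.
# All numbers are invented for illustration only.

wages = {
    # (group, period): average hourly wage
    ("adopters",     "before"): 250.0,
    ("adopters",     "after"):  255.0,
    ("non_adopters", "before"): 240.0,
    ("non_adopters", "after"):  245.0,
}

def did(w):
    """Wage change for adopters minus wage change for non-adopters."""
    adopter_change = w[("adopters", "after")] - w[("adopters", "before")]
    control_change = w[("non_adopters", "after")] - w[("non_adopters", "before")]
    return adopter_change - control_change

print(f"DiD estimate: {did(wages):+.1f} kr/hour")  # ~0 here, echoing the "precise zeros"
```

The estimate is the adopters' wage change net of the non-adopters' change; the key identifying assumption is that the two groups would have trended in parallel absent the tool.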

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.
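Putting those two figures together shows why the wage effect is so small. A quick back-of-envelope check using only the 2.8% and 3–7% figures quoted above:

```python
# Back-of-envelope check of the pass-through figures quoted above
# (2.8% average time saved; 3-7% of that showing up in earnings).
time_saved = 0.028           # share of total work hours saved
pass_through = (0.03, 0.07)  # fraction of saved time reflected in pay

for p in pass_through:
    implied = time_saved * p
    print(f"pass-through {p:.0%} -> implied earnings effect {implied:.2%}")
# Both bounds land well under the study's 1% upper bound on average effects.
```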

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just speeding up old work tasks; they seem to be creating new ones too. Around 17% of people using them reported new workloads, mostly brand-new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tony Blair Institute AI copyright report sparks backlash (AI News, Wed, 02 Apr 2025)

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators—noting its use in various fields from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance, but ultimately led to societal adaptation and human ingenuity prevailing.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.
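For context on what an opt-out can mean in practice: one concrete mechanism already in use is a robots.txt rule aimed at an AI crawler's user-agent. A sketch using Python's standard-library parser; the crawler name below is illustrative, not a real token:

```python
# Sketch only: expressing a text-and-data-mining opt-out as a robots.txt rule.
# "ExampleAIBot" is an illustrative user-agent; real AI crawlers publish
# their own tokens, and rights holders would target those instead.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False: opted out
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True: no rule applies
```

The enforcement challenges the TBI flags are visible even in this toy: the rule only binds crawlers that choose to honour it, and only covers rights holders who know to set it.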

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces.

Furthermore, the Tony Blair Institute explores the challenges of governing an opt-out policy: implementation problems, making opt-outs useful and accessible, and tackling the diffusion problem. It also addresses AI summaries and the identity problems they present, along with defensive tools as a partial solution and approaches to solving licensing problems.

The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents


Study claims OpenAI trains AI models on copyrighted data (AI News, Wed, 02 Apr 2025)

A new study from the AI Disclosures Project has raised questions about the data OpenAI uses to train its large language models (LLMs). The research indicates the GPT-4o model from OpenAI demonstrates a “strong recognition” of paywalled and copyrighted data from O’Reilly Media books.

The AI Disclosures Project, led by technologist Tim O’Reilly and economist Ilan Strauss, aims to address the potentially harmful societal impacts of AI’s commercialisation by advocating for improved corporate and technological transparency. The project’s working paper highlights the lack of disclosure in AI, drawing parallels with financial disclosure standards and their role in fostering robust securities markets.

The study used a legally obtained dataset of 34 copyrighted O’Reilly Media books to investigate whether LLMs from OpenAI were trained on copyrighted data without consent. The researchers applied the DE-COP membership inference attack method to determine whether the models could differentiate between human-authored O’Reilly texts and paraphrased LLM versions.
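In outline, DE-COP asks a model to pick the verbatim passage out of a line-up of paraphrases; if passages from suspected training books are identified well above chance, that is evidence of membership. A toy sketch of the scoring step, with invented per-passage scores standing in for real model queries, and AUROC computed from its rank definition:

```python
# Toy sketch of the evaluation described above. A real run would query an
# actual LLM; here each score is a stand-in for the model's probability of
# picking the verbatim passage out of a line-up of paraphrases.

def auroc(member_scores, nonmember_scores):
    """Probability that a random suspected-member passage outscores a random
    non-member passage (ties count half). 0.5 ~ chance; higher means the
    model 'recognises' member passages."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Invented scores: passages suspected to be in training data tend to be
# identified more reliably than held-out ones.
in_training = [0.9, 0.8, 0.6, 0.7, 0.95]
held_out    = [0.5, 0.4, 0.65, 0.45, 0.55]

print(f"AUROC = {auroc(in_training, held_out):.2f}")  # → AUROC = 0.96
```

On this scale, the study's reported 82% for GPT-4o on paywalled O'Reilly content sits well above the ~50% chance level that GPT-4o Mini showed.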

Key findings from the report include:

  • GPT-4o shows “strong recognition” of paywalled O’Reilly book content, with an AUROC score of 82%. In contrast, OpenAI’s earlier model, GPT-3.5 Turbo, does not show the same level of recognition (AUROC score just above 50%)
  • GPT-4o exhibits stronger recognition of non-public O’Reilly book content compared to publicly accessible samples (82% vs 64% AUROC scores respectively)
  • GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples than non-public ones (64% vs 54% AUROC scores)
  • GPT-4o Mini, a smaller model, showed no knowledge of public or non-public O’Reilly Media content when tested (AUROC approximately 50%)

The researchers suggest that access violations may have occurred via the LibGen database, as all of the O’Reilly books tested were found there. They also acknowledge that newer LLMs have an improved ability to distinguish between human-authored and machine-generated language, which does not reduce the method’s ability to classify data.

The study highlights the potential for “temporal bias” in the results, due to language changes over time. To account for this, the researchers tested two models (GPT-4o and GPT-4o Mini) trained on data from the same period.

The report notes that while the evidence is specific to OpenAI and O’Reilly Media books, it likely reflects a systemic issue around the use of copyrighted data. It argues that uncompensated training data usage could lead to a decline in the internet’s content quality and diversity, as revenue streams for professional content creation diminish.

The AI Disclosures Project emphasises the need for stronger accountability in AI companies’ model pre-training processes. They suggest that liability provisions that incentivise improved corporate transparency in disclosing data provenance may be an important step towards facilitating commercial markets for training data licensing and remuneration.

The EU AI Act’s disclosure requirements could help trigger a positive disclosure-standards cycle if properly specified and enforced. Ensuring that IP holders know when their work has been used in model training is seen as a crucial step towards establishing AI markets for content creator data.

Despite evidence that AI companies may be obtaining data illegally for model training, a market is emerging in which AI model developers pay for content through licensing deals. Companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.

The report concludes by stating that using 34 proprietary O’Reilly Media books, the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data.

(Image by Sergei Tokmakov)

See also: Anthropic provides insights into the ‘AI biology’ of Claude


NEPC: AI sprint risks environmental catastrophe (AI News, Fri, 07 Feb 2025)

The government is urged to mandate stricter reporting for data centres to mitigate environmental risks associated with the AI sprint.

A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction.

The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, the Chartered Institute of IT.

While stressing that data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks  

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the UK’s tech strategy, the AI Opportunities Action Plan is intended to “boost economic growth, provide jobs for the future and improve people’s everyday lives.”  

Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.  

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion.  

With plans already in place to reform the UK’s planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers’ ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI  

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the UK as a leader in resource-efficient AI:  

  1. Expand environmental reporting mandates
  2. Communicate the sector’s environmental impacts
  3. Set sustainability requirements for data centres
  4. Reconsider data collection, storage, and management practices
  5. Lead by example with government investment

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres’ energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource use data necessary for policymaking.  
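As a purely illustrative sketch, such a reporting mandate amounts to a structured record per data centre covering those metrics; the field names and figures below are assumptions, not a proposed standard:

```python
# Illustrative sketch only: a minimal record for the metrics the NEPC report
# says mandatory reporting should cover. Field names and example values are
# assumptions, not a proposed reporting standard.
from dataclasses import dataclass

@dataclass
class DataCentreReport:
    site: str
    energy_use_mwh: float             # total annual energy consumption
    carbon_free_energy_share: float   # 0.0-1.0, share from carbon-free sources
    water_use_megalitres: float       # annual water consumption (incl. cooling)
    carbon_emissions_tco2e: float     # annual emissions, tonnes CO2-equivalent
    ewaste_recycled_share: float      # 0.0-1.0, share of e-waste recycled

example = DataCentreReport(
    site="example-campus-1",
    energy_use_mwh=120_000,
    carbon_free_energy_share=0.85,
    water_use_megalitres=310,
    carbon_emissions_tco2e=9_400,
    ewaste_recycled_share=0.92,
)
print(example.site, f"{example.carbon_free_energy_share:.0%} carbon-free")
```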

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels.  

Smarter, greener data centres  

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero drinking water use for cooling, and the exclusive use of 100% carbon-free energy certificates.  

Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.  

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.  

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action:  

“In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.  

“This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment.”  

Rodden added that reliable data on these impacts is critical. “To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the UK.”

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. “Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI’s computing power and storage are as sustainable as possible.  

“That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential.”  

Childs emphasised the importance of a coordinated approach from the start of projects. “As the UK government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step.”  

For Alex Bardell, Chair of BCS’ Green IT Specialist Group, the focus is on optimising AI processes. “Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the UK may fall behind in the AI arena; this may not necessarily be true.  

“It is crucial to reevaluate our approach to developing sustainable AI in the future.”  

Time for transparency around AI environmental risks

Public awareness of AI’s environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six UK residents are aware of the significant environmental costs associated with AI systems.  

“AI providers must be transparent about these effects,” said Professor Sarvapali Ramchurn, CEO of Responsible AI UK and a Fellow of the IET. “If we cannot measure it, we cannot manage it, nor ensure benefits for all. This report’s recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved.”  

As the UK pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts


World Economic Forum unveils blueprint for equitable AI (AI News, Tue, 21 Jan 2025)

The World Economic Forum (WEF) has released a blueprint outlining how AI can drive inclusivity in global economic growth and societal progress. However, it also highlights the challenges in ensuring its benefits are equitably distributed across all nations and peoples.

Developed in partnership with KPMG, the blueprint offers nine strategic objectives to support government leaders, organisations, and key stakeholders through every phase of the AI lifecycle – from innovation to deployment – at local, national, and international levels. These strategies aim to bridge disparities in AI access, infrastructure, advanced computing, and skill development to promote sustainable, long-term growth.

Cathy Li, Head of AI, Data, and the Metaverse at the WEF, said: “Leveraging AI for economic growth and societal progress is a shared goal, yet countries and regions have very different starting points.

“This blueprint serves as a compass, guiding decision-makers toward impact-oriented collaboration and practical solutions that can unlock AI’s full potential.”

Call for regional collaboration and local empowerment

Central to the ‘Blueprint for Intelligent Economies’ is the belief that successful AI adoption must reflect the specific needs of local communities—with strong leadership and collaboration among governments, businesses, entrepreneurs, civil society organisations, and end users.

Solly Malatsi, South Africa’s Minister of Communications and Digital Technologies, commented: “The significant potential of AI remains largely untapped in many regions worldwide. Establishing an inclusive and competitive AI ecosystem will become a crucial priority for all nations.

“Collaboration among multiple stakeholders at the national, regional, and global levels will be essential in fostering growth and prosperity through AI for everyone.”

By tailoring approaches to reflect geographic and cultural nuances, the WEF report suggests nations can create AI systems that address local challenges while also providing a robust bedrock for innovation, investment, and ethical governance. Case studies from nations at varying stages of AI maturity are used throughout the report to illustrate practical, scalable solutions.

For example, cross-border cooperation on shared AI frameworks and pooled resources (such as energy or centralised databanks) is highlighted as a way to overcome resource constraints. Public-private subsidies to make AI-ready devices more affordable present another equitable way forward. These mechanisms aim to lower barriers for local businesses and innovators, enabling them to adopt AI tools and scale their operations.  

Hatem Dowidar, Chief Executive Officer of E&, said: “All nations have a unique opportunity to advance their economic and societal progress through AI. This requires a collaborative approach of intentional leadership from governments supported by active engagement with all stakeholders at all stages of the AI journey.

“Regional and global collaborations remain fundamental pathways to address shared challenges and opportunities, ensure equitable access to key AI capabilities, and responsibly maximise its transformative potential for a lasting value for all.”  

Priority focus areas

While the blueprint features nine strategic objectives, three have been singled out as priority focus areas for national AI strategies:  

  1. Building sustainable AI infrastructure 

Resilient, scalable, and environmentally sustainable AI infrastructure is essential for innovation. However, achieving this vision will require substantial investment, energy, and cross-sector collaboration. Nations must coordinate efforts to ensure that intelligent economies grow in both an equitable and eco-friendly manner.  

  2. Curating diverse and high-quality datasets

AI’s potential hinges on the quality of the data it can access. This strategic objective addresses barriers such as data accessibility, imbalance, and ownership. By ensuring that datasets are inclusive, diverse, and reflective of local languages and cultures, developers can create equitable AI models that avoid bias and meet the needs of all communities.  

  3. Establishing robust ethical and safety guardrails

Governance frameworks are critical for reducing risks like misuse, bias, and ethical breaches. By setting high standards at the outset, nations can cultivate trust in AI systems, laying the groundwork for responsible deployment and innovation. These safeguards are especially vital for promoting human-centred AI that benefits all of society.  

The overall framework outlined in the report has three layers:

  1. Foundation layer: Focuses on sustainable energy, diverse data curation, responsible AI infrastructure, and efficient investment mechanisms.  
  2. Growth layer: Embeds AI into workflows, processes, and devices to accelerate sectoral adoption and boost innovation.  
  3. People layer: Prioritises workforce skills, empowerment, and ethical considerations, ensuring that AI shapes society in a beneficial and inclusive way.

A blueprint for global AI adoption  

The Forum is also championing a multi-stakeholder approach to global AI adoption, blending public and private collaboration. Policymakers are being encouraged to implement supportive legislation and incentives to spark innovation and broaden AI’s reach. Examples include lifelong learning programmes to prepare workers for the AI-powered future and financial policies that enable greater technology access in underserved regions.  

The WEF’s latest initiative reflects growing global recognition that AI will be a cornerstone of the future economy. However, it remains clear that the benefits of this transformative technology will need to be shared equitably to drive societal progress and ensure no one is left behind.  

The Blueprint for Intelligent Economies provides a roadmap for nations to harness AI while addressing the structural barriers that could otherwise deepen existing inequalities. By fostering inclusivity, adopting robust governance, and placing communities at the heart of decision-making, the WEF aims to guide governments, businesses, and innovators toward a sustainable and intelligent future.  

See also: UK Government signs off sweeping AI action plan 

The post World Economic Forum unveils blueprint for equitable AI  appeared first on AI News.

Salesforce: UK set to lead agentic AI revolution https://www.artificialintelligence-news.com/news/salesforce-uk-set-lead-agentic-ai-revolution/ Mon, 02 Dec 2024 13:24:31 +0000

Salesforce has unveiled the findings of its UK AI Readiness Index, signalling the nation is in a position to spearhead the next wave of AI innovation, also known as agentic AI.

The report places the UK ahead of its G7 counterparts in terms of AI adoption but also underscores areas ripe for improvement, such as support for SMEs, fostering cross-sector partnerships, and investing in talent development.

Zahra Bahrololoumi CBE, UKI CEO at Salesforce, commented: “Agentic AI is revolutionising enterprise software by enabling humans and agents to collaborate seamlessly and drive customer success.

“The UK AI Readiness Index positively highlights that the UK has both the vision and infrastructure to be a powerhouse globally in AI, and lead the current third wave of agentic AI.”

UK AI adoption sets the stage for agentic revolution

The Index details how both the public and private sectors in the UK have embraced AI’s transformative potential. With a readiness score of 65.5, surpassing the G7 average of 61.2, the UK is establishing itself as a hub for large-scale AI projects, driven by a robust innovation culture and pragmatic regulatory approaches.

The government has played its part in maintaining a stable and secure environment for tech investment. Initiatives such as the AI Safety Summit at Bletchley Park and risk-oriented AI legislation showcase Britain’s leadership on critical AI issues like transparency and privacy.

Business readiness is equally impressive, with UK industries scoring 52, well above the G7 average of 47.8. SMEs in the UK are increasingly prioritising AI adoption, further bolstering the nation’s stance in the international AI arena.

Adam Evans, EVP & GM of Salesforce AI Platform, is optimistic about the evolution of agentic AI. Evans foresees that, by 2025, these agents will become business-aware—expertly navigating industry-specific challenges to execute meaningful tasks and decisions.

Investments fuelling AI growth

Salesforce is committing $4 billion to the UK’s AI ecosystem over the next five years. Since establishing its UK AI Centre in London, Salesforce says it has engaged over 3,000 stakeholders in AI training and workshops.

Key investment focuses include creating a regulatory bridge between the EU’s rules-based approach and the more relaxed US approach, and ensuring SMEs have the resources to integrate AI. A strong emphasis also lies on enhancing digital skills and centralising training to support the AI workforce of the future.

Feryal Clark, Minister for AI and Digital Government, said: “These findings are further proof the UK is in prime position to take advantage of AI, and highlight our strength in spurring innovation, investment, and collaboration across the public and private sector.

“There is a global race for AI and we’ll be setting out plans for how the UK can use the technology to ramp-up adoption across the economy, kickstart growth, and build an AI sector which can scale and compete on the global stage.”

Antony Walker, Deputy CEO at techUK, added: “To build this progress, government and industry must collaborate to foster innovation, support SMEs, invest in skills, and ensure flexible regulation, cementing the UK’s leadership in the global AI economy.”

Agentic AI boosting UK business productivity 

Capita, Secret Escapes, Heathrow, and Bionic are among the organisations that have adopted Salesforce’s Agentforce to boost their productivity.

Adolfo Hernandez, CEO of Capita, said: “We want to transform Capita’s recruitment process into a fast, seamless and autonomous experience that benefits candidates, our people, and our clients.

“With autonomous agents providing 24/7 support, our goal is to enable candidates to complete the entire recruitment journey within days as opposed to what has historically taken weeks.”

Secret Escapes, a curator of luxury travel deals, finds autonomous agents crucial for personalising services to its 60 million European members.

Kate Donaghy, Head of Business Technology at Secret Escapes, added: “Agentforce uses our unified data to automate routine tasks like processing cancellations, updating booking information, or even answering common travel questions about luggage, flight information, and much more—freeing up our customer service agents to handle more complex and last-minute travel needs to better serve our members.”

The UK’s AI readiness is testament to the synergy between government, business, and academia. To maintain its leadership, the UK must sustain its focus on collaboration, skills development, and innovation. 

(Photo by Matthew Wiebe)

See also: Generative AI use soars among Brits, but is it sustainable?

The post Salesforce: UK set to lead agentic AI revolution appeared first on AI News.

Generative AI: Disparities between C-suite and practitioners https://www.artificialintelligence-news.com/news/generative-ai-disparities-c-suite-and-practitioners/ Tue, 19 Nov 2024 12:31:35 +0000

A report by Publicis Sapient sheds light on the disparities between the C-suite and practitioners, dubbed the “V-suite,” in their perceptions and adoption of generative AI.

The report reveals a stark contrast in how the C-suite and V-suite view the potential of generative AI. While the C-suite focuses on visible use cases such as customer experience, service, and sales, the V-suite sees opportunities across various functional areas, including operations, HR, and finance.

Risk perception

The divide extends to risk perception as well. Fifty-one percent of C-level respondents expressed more concern about the risks and ethics of generative AI than about other emerging technologies. In contrast, only 23 percent of the V-suite shared these worries.

Simon James, Managing Director of Data & AI at Publicis Sapient, said: “It’s likely the C-suite is more worried about abstract, big-picture dangers – such as Hollywood-style scenarios of a rapidly-evolving superintelligence – than the V-suite.”

The report also highlights the uncertainty surrounding generative AI maturity. Organisations can be at various stages of maturity simultaneously, with many struggling to define what success looks like. More than two-thirds of respondents lack a way to measure the success of their generative AI projects.

Navigating the generative AI landscape

Despite the C-suite’s focus on high-visibility use cases, generative AI is quietly transforming back-office functions. More than half of the V-suite respondents ranked generative AI as extremely important in areas like finance and operations over the next three years, compared to a smaller percentage of the C-suite.

To harness the full potential of generative AI, the report recommends a portfolio approach to innovation projects. Leaders should focus on delivering projects, controlling shadow IT, avoiding duplication, empowering domain experts, connecting business units with the CIO’s office, and engaging the risk office early and often.

Daniel Liebermann, Managing Director at Publicis Sapient, commented: “It’s as hard for leaders to learn how individuals within their organisation are using ChatGPT or Microsoft Copilot as it is to understand how they’re using the internet.”

The path forward

The report concludes with five steps to maximise innovation: adopting a portfolio approach, improving communication between the CIO’s office and the risk office, seeking out innovators within the organisation, using generative AI to manage information, and empowering team members through company culture and upskilling.

As generative AI continues to evolve, organisations must bridge the gap between the C-suite and V-suite to unlock its full potential. The future of business transformation lies in harnessing the power of a decentralised, bottom-up approach to innovation.

See also: EU introduces draft regulatory guidance for AI models

The post Generative AI: Disparities between C-suite and practitioners appeared first on AI News.

Understanding AI’s impact on the workforce https://www.artificialintelligence-news.com/news/understanding-ai-impact-on-the-workforce/ Fri, 08 Nov 2024 10:11:03 +0000

The Tony Blair Institute (TBI) has examined AI’s impact on the workforce. The report outlines AI’s potential to reshape work environments, boost productivity, and create opportunities—while warning of potential challenges ahead.

“Technology has a long history of profoundly reshaping the world of work,” the report begins.

From the agricultural revolution to the digital age, each wave of innovation has redefined labour markets. Today, AI presents a seismic shift, advancing rapidly and prompting policymakers to prepare for change.

Economic opportunities

The TBI report estimates that AI, when fully adopted by UK firms, could significantly increase productivity. It suggests that AI could save “almost a quarter of private-sector workforce time,” equivalent to the annual output of 6 million workers.

Most of these time savings are expected to stem from AI-enabled software performing cognitive tasks such as data analysis and routine administrative operations.

The report identifies sectors reliant on routine cognitive tasks, such as banking and finance, as those with significant exposure to AI. However, sectors like skilled trades or construction – which involve complex manual tasks – are likely to see less direct impact.

While AI can result in initial job losses, it also has the potential to create new demand by fostering economic growth and new industries. 

The report expects these job losses to be offset by new job creation. Historically, technology has spurred new employment opportunities, as innovation leads to the development of new products and services.

Shaping future generations

AI’s potential extends into education, where it could assist both teachers and students.

The report suggests that AI could help “raise educational attainment by around six percent” on average. By personalising and supporting learning, AI has the potential to equalise access to opportunities and improve the quality of the workforce over time.

Health and wellbeing

Beyond education, AI offers potential benefits in healthcare, supporting a healthier workforce and reducing welfare costs.

The report highlights AI’s role in speeding medical research, enabling preventive healthcare, and helping those with disabilities re-enter the workforce.

Workplace transformation

The report acknowledges potential workplace challenges, such as increased monitoring and stress from AI tools. It stresses the importance of managing these technologies thoughtfully to “deliver a more engaging, inclusive and safe working environment.”

To mitigate potential disruption, the TBI outlines recommendations. These include upgrading labour-market infrastructure and utilising AI for job matching.

The report suggests creating an “Early Awareness and Opportunity System” to help workers understand the impact of AI on their jobs and provide advice on career paths.

Preparing for an AI-powered future

In light of the uncertainties surrounding AI’s impact on the workforce, the TBI urges policy changes to maximise benefits. Recommendations include incentivising AI adoption across industries, developing AI-pathfinder programmes, and creating challenge prizes to address public-sector labour shortages.

The report concludes that while AI presents risks, the potential gains are too significant to ignore.

Policymakers are encouraged to adopt a “pro-innovation” stance while being attuned to the risks, fostering an economy that is dynamic and resilient.

(Photo by Mimi Thian)

See also: Anthropic urges AI regulation to avoid catastrophes

The post Understanding AI’s impact on the workforce appeared first on AI News.

Western drivers remain sceptical of in-vehicle AI https://www.artificialintelligence-news.com/news/western-drivers-remain-sceptical-in-vehicle-ai/ Tue, 05 Nov 2024 12:58:15 +0000

A global study has unveiled a stark contrast in attitudes towards embracing in-vehicle AI between Eastern and Western markets, with European drivers particularly reluctant.

The research – conducted by MHP – surveyed 4,700 car drivers across China, the US, Germany, the UK, Italy, Sweden, and Poland, revealing significant geographical disparities in AI acceptance and understanding.

According to the study, while AI is becoming integral to modern vehicles, European consumers remain hesitant about its implementation and value proposition.

Regional disparities

The study found that 48 percent of Chinese respondents view in-car AI predominantly as an opportunity, while merely 23 percent of European respondents share this optimistic outlook. In Europe, 39 percent believe AI’s opportunities and risks are broadly balanced, while 24 percent take a negative stance, suggesting the risks outweigh potential benefits.

Understanding of AI technology also varies significantly by region. While over 80 percent of Chinese respondents claim to understand AI’s use in cars, this figure drops to just 54 percent among European drivers, highlighting a notable knowledge gap.

Marcus Willand, Partner at MHP and one of the study’s authors, notes: “The figures show that the prospect of greater safety and comfort due to AI can motivate purchasing decisions. However, the European respondents in particular are often hesitant and price-sensitive.”

The willingness to pay for AI features shows an equally stark divide. Just 23 percent of European drivers expressed willingness to pay for AI functions, compared to 39 percent of Chinese drivers. The study suggests that most users now expect AI features to be standard rather than optional extras.

Graphs showing which features the public believes can be significantly improved by in-vehicle AI.

Dr Nils Schaupensteiner, Associated Partner at MHP and study co-author, said: “Automotive companies need to create innovations with clear added value and develop both direct and indirect monetisation of their AI offerings, for example through data-based business models and improved services.”

In-vehicle AI opportunities

Despite these challenges, traditional automotive manufacturers maintain a trust advantage over tech giants. The study reveals that 64 percent of customers trust established car manufacturers with AI implementation, compared to 50 percent for technology firms like Apple, Google, and Microsoft.

Graph highlighting public trust in various stakeholders regarding in-vehicle AI.

The research identified several key areas where AI could provide significant value across the automotive industry’s value chain, including pattern recognition for quality management, enhanced data management capabilities, AI-driven decision-making systems, and improved customer service through AI-powered communication tools.

“It is worth OEMs and suppliers considering the opportunities offered by the new technology along their entire value chain,” explains Augustin Friedel, Senior Manager and study co-author. “However, the possible uses are diverse and implementation is quite complex.”

The study reveals that while up to 79 percent of respondents express interest in AI-powered features such as driver assistance systems, intelligent route planning, and predictive maintenance, manufacturers face significant challenges in monetising these capabilities, particularly in the European market.

Graph showing public interest in various in-vehicle AI features.

See also: MIT breakthrough could transform robot training

The post Western drivers remain sceptical of in-vehicle AI appeared first on AI News.

AI sector study: Record growth masks serious challenges https://www.artificialintelligence-news.com/news/ai-sector-study-record-growth-masks-serious-challenges/ Thu, 24 Oct 2024 14:31:34 +0000

A comprehensive AI sector study – conducted by the Department for Science, Innovation and Technology (DSIT) in collaboration with Perspective Economics, Ipsos, and glass.ai – provides a detailed overview of the industry’s current state and its future prospects.

In this article, we delve deeper into the key findings and implications—drawing on additional sources to enhance our understanding.

Thriving industry with significant growth

The study highlights the remarkable growth of the UK’s AI sector. The UK is now home to over 3,170 active AI companies, which have generated £10.6 billion in AI-related revenues and employ more than 50,000 people in AI-related roles. This significant contribution to GVA (Gross Value Added) underscores the sector’s transformative potential in driving the UK’s economic growth.

Mark Boost, CEO of Civo, said: “In a space that’s been dominated by US companies for too long, it’s promising to see the government now stepping up to help support the UK AI sector on the global stage.”

The study shows that AI activity is dispersed across various regions of the UK, with notable concentrations in London, the South East, and Scotland. This regional dispersion indicates a broad scope for the development of AI technology applications across different sectors and regions.

Investment and funding

Investment in the AI sector has been a key driver of growth. The sector has secured £18.8 billion in private investment since 2016, with investments in 2022 spanning 52 unique industry sectors, up from 35 sectors in 2016.

The government’s commitment to supporting AI is evident through significant investments. In 2022, the UK government unveiled a National AI Strategy and Action Plan—committing over £1.3 billion in support for the sector, complementing the £2.8 billion already invested.

However, as Boost cautions, “Major players like AWS are locking AI startups into their ecosystems with offerings like $500k cloud credits, ensuring that emerging companies start their journey reliant on their infrastructure. This not only hinders competition and promotes vendor lock-in but also risks stifling innovation across the broader UK AI ecosystem.”

Addressing bottlenecks

Despite the growth and investment, several bottlenecks must be addressed to fully harness the potential of AI:

  • Infrastructure: The UK’s digital technology infrastructure is less advanced than that of many other countries. This bottleneck includes inadequate data centre infrastructure and a reliance on external supplies of powerful GPU chips. Boost emphasises this concern, stating: “It would be dangerous for the government to ignore the immense compute power that AI relies on. We need to consider where this power is coming from and the impact it’s having on both the already over-concentrated cloud market and the environment.”
  • Commercial awareness: Many SMEs lack familiarity with digital technology. Almost a third (31%) of SMEs have yet to adopt the cloud, and nearly half (47%) do not currently use AI tools or applications.
  • Skills shortage: Two-fifths of businesses struggle to find staff with good digital skills, even in traditional digital roles like data analytics or IT. There is also a rising need for workers with new AI-specific skills, such as prompt engineering, which will require retraining and upskilling opportunities.

To address these bottlenecks, the government has implemented several initiatives:

  • Private sector investment: Microsoft has announced a £2.5 billion investment in AI skills, security, and data centre infrastructure, aiming to procure more than 20,000 of the most advanced GPUs by 2026.
  • Government support: The government has invested £1.5 billion in computing capacity and committed to building three new supercomputers by 2025. This support aims to enhance the UK’s infrastructure to stay competitive in the AI market.
  • Public sector integration: The UK Government Digital Service (GDS) is working to improve efficiency by using predictive algorithms to forecast future pension scheme behaviour. HMRC uses AI to help identify call centre priorities, demonstrating how AI solutions can address complex public sector challenges.

Future prospects and challenges

The future of the UK AI sector is both promising and challenging. While significant economic gains are predicted, including boosting GDP by £550 billion by 2035, delays in AI roll-out could cost the UK £150 billion over the same period. Ensuring a balanced approach between innovation and regulation will be crucial.

Boost emphasises the importance of data sovereignty and privacy: “Businesses have grown increasingly wary of how their data is collected, stored, and used by the likes of ChatGPT. The government has a real opportunity to enable the UK AI sector to offer viable alternatives.

“The forthcoming AI Action Plan will be another opportunity to identify how AI can drive economic growth and better support the UK tech sector.”

The study also flags two ongoing challenges:

  • AI Safety Summit: The AI Safety Summit at Bletchley Park highlighted the need for responsible AI development. The “Bletchley Declaration on AI Safety” emphasises the importance of ensuring AI tools are transparent, fair, and free from bias to maintain public trust and realise AI’s benefits in public services.
  • Cybersecurity challenges: As AI systems handle sensitive or personal information, ensuring their security is paramount. This involves protecting against cyber threats, securing algorithms from manipulation, safeguarding data centres and hardware, and ensuring supply chain security.

The AI sector study underscores a thriving industry with significant growth potential. However, it also highlights several bottlenecks that must be addressed – infrastructure gaps, lack of commercial awareness, and skills shortages – to fully harness the sector’s potential.

(Photo by John Noonan)

See also: EU AI Act: Early prep could give businesses competitive edge

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI sector study: Record growth masks serious challenges appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-sector-study-record-growth-masks-serious-challenges/feed/ 0
AI governance gap: 95% of firms haven’t implemented frameworks https://www.artificialintelligence-news.com/news/ai-governance-gap-95-of-firms-havent-frameworks/ https://www.artificialintelligence-news.com/news/ai-governance-gap-95-of-firms-havent-frameworks/#respond Thu, 17 Oct 2024 16:38:58 +0000 https://www.artificialintelligence-news.com/?p=16318 Robust governance is essential to mitigate AI risks and maintain responsible systems, but the majority of firms are yet to implement a framework. Commissioned by Prove AI and conducted by Zogby Analytics, the report polled over 600 CEOs, CIOs, and CTOs from large companies across the US, UK, and Germany. The findings show that 96% […]

The post AI governance gap: 95% of firms haven’t implemented frameworks appeared first on AI News.

]]>
Robust governance is essential to mitigate AI risks and maintain responsible systems, but the majority of firms have yet to implement a framework.

Commissioned by Prove AI and conducted by Zogby Analytics, the report polled over 600 CEOs, CIOs, and CTOs from large companies across the US, UK, and Germany. The findings show that 96% of organisations are already utilising AI to support business operations, with the same percentage planning to increase their AI budgets in the coming year.

The primary motivations for AI investment include increasing productivity (82%), improving operational efficiency (73%), enhancing decision-making (65%), and achieving cost savings (60%). The most common AI use cases reported were customer service and support, predictive analytics, and marketing and ad optimisation.

Despite the surge in AI investments, business leaders are acutely aware of the additional risk exposure that AI brings to their organisations. Data integrity and security emerged as the biggest deterrents to implementing new AI solutions.

Executives also reported encountering various AI performance issues, including:

  • Data quality issues (e.g., inconsistencies or inaccuracies): 41%
  • Bias detection and mitigation challenges in AI algorithms, leading to unfair or discriminatory outcomes: 37%
  • Difficulty in quantifying and measuring the return on investment (ROI) of AI initiatives: 28%

While 95% of respondents expressed confidence in their organisation’s current AI risk management practices, the report revealed a significant gap in AI governance implementation.

Only 5% of executives reported that their organisation has implemented any AI governance framework. However, 82% stated that implementing AI governance solutions is a somewhat or extremely pressing priority, with 85% planning to implement such solutions by summer 2025.

The report also found that 82% of participants support an AI governance executive order to provide stronger oversight. Additionally, 65% expressed concern about IP infringement and data security.

Mrinal Manohar, CEO of Prove AI, commented: “Executives are making themselves clear: AI’s long-term efficacy, including providing a meaningful return on the massive investments organisations are currently making, is contingent on their ability to develop and refine comprehensive AI governance strategies.

“The wave of AI-focused legislation going into effect around the world is only increasing the urgency; for the current wave of innovation to continue responsibly, we need to implement clearer guardrails to manage and monitor the data informing AI systems.”

As global regulations like the EU AI Act loom on the horizon, the report underscores the importance of de-risking AI and the work that still needs to be done. Implementing and optimising dedicated AI governance strategies has emerged as a top priority for businesses looking to harness the power of AI while mitigating associated risks.

The findings of this report serve as a wake-up call for organisations to prioritise AI governance as they continue to invest in and deploy AI technologies. Responsible implementation and robust governance frameworks will be key to unlocking the full potential of AI while maintaining trust and compliance.

(Photo by Rob Thompson)

See also: Scoring AI models: Endor Labs unveils evaluation tool

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI governance gap: 95% of firms haven’t implemented frameworks appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-governance-gap-95-of-firms-havent-frameworks/feed/ 0
Many organisations unprepared for AI cybersecurity threats https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/ https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/#respond Thu, 10 Oct 2024 09:59:12 +0000 https://www.artificialintelligence-news.com/?p=16268 While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges. Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats. 84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which […]

The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News.

]]>
While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges.

Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats.

84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which were already significant threats. In response, 81% of organisations have enacted AI usage policies for employees. Confidence in these measures runs high, with 77% of leaders expressing familiarity with best practices for AI security.

Gap between AI cybersecurity policy and threats preparedness

More than half (51%) of security leaders view AI-driven attacks as the most severe threat to their organisations. Alarmingly, 35% of respondents feel ill-prepared to address these attacks compared to other cyber threats.

Organisations are deploying several key strategies to meet these emerging challenges:

  • Data encryption: Utilised by 51% of IT leaders, encryption serves as a crucial defence against unauthorised access and is vital against AI-fuelled attacks.
  • Employee training and awareness: With 45% of organisations prioritising enhanced training programmes, there is a focused effort to equip employees to recognise and counter AI-driven phishing and smishing intrusions.
  • Advanced threat detection systems: 41% of organisations are investing in these systems, underscoring the need for improved detection and response to sophisticated AI threats.

The advent of AI-driven cyber threats undeniably presents new challenges. Nevertheless, fundamental cybersecurity practices – such as data encryption, employee education, and advanced threat detection – continue to be essential. Organisations must ensure these essential measures are consistently re-evaluated and adjusted to counter emerging threats.

In addition to these core practices, advanced security frameworks like zero trust and Privileged Access Management (PAM) solutions can bolster an organisation’s resilience.

Zero trust demands continuous verification of all users, devices, and applications, reducing the risk of unauthorised access and minimising potential damage during an attack. PAM offers targeted security for an organisation’s most sensitive accounts, crucial for defending against complex AI-driven threats that aim at high-level credentials.
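The continuous-verification idea behind zero trust can be sketched in a few lines: every request is checked on its own merits, with a short-lived signature, rather than being trusted because it originates from inside the network. This is a minimal illustration, not Keeper Security’s implementation; the shared-secret scheme, names, and time window are hypothetical.

```python
import hmac
import hashlib
import time

SECRET = b"demo-shared-secret"  # hypothetical key, for illustration only

def sign_request(user: str, resource: str, timestamp: int) -> str:
    """Issue a signature bound to one user, one resource, one moment in time."""
    msg = f"{user}|{resource}|{timestamp}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_request(user: str, resource: str, timestamp: int,
                   signature: str, max_age: int = 300) -> bool:
    """Zero-trust style check: verify every request, trust none implicitly."""
    if time.time() - timestamp > max_age:  # stale credentials are rejected
        return False
    expected = sign_request(user, resource, timestamp)
    # Constant-time comparison guards against timing attacks
    return hmac.compare_digest(expected, signature)

now = int(time.time())
sig = sign_request("alice", "/payroll", now)
print(verify_request("alice", "/payroll", now, sig))  # valid: same user/resource
print(verify_request("alice", "/admin", now, sig))    # rejected: different resource
```

Because the signature is scoped to a single resource and expires quickly, a stolen credential cannot be replayed against a different system — the same property PAM solutions enforce for high-level accounts.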

Darren Guccione, CEO and Co-Founder of Keeper Security, commented: “AI-driven attacks are a formidable challenge, but by reinforcing our cybersecurity fundamentals and adopting advanced security measures, we can build resilient defences against these evolving threats.”

Proactivity is also key: organisations should regularly review security policies, perform routine audits, and foster a culture of cybersecurity awareness.

While organisations are advancing, cybersecurity requires perpetual vigilance. Merging traditional practices with modern approaches like zero trust and PAM will empower organisations to maintain an edge over developing AI-powered threats.

(Photo by Growtika)

See also: King’s Business School: How AI is transforming problem-solving

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/many-organisations-unprepared-ai-cybersecurity-threats/feed/ 0