society Archives - AI News

Google AMIE: AI doctor learns to ‘see’ medical images

Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer).

Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for.

We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words.

Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.”

Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was "whether LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information."

Google teaches AMIE to look and reason

Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out.

It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down.

“This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains.

Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge.
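
To make those stages concrete, here is a minimal sketch of what a state-aware diagnostic loop could look like in code. To be clear, this is our own illustration: the class, the states, and the gap-checking rule are invented for the example and are not Google's published implementation.

```python
from enum import Enum, auto

class Phase(Enum):
    HISTORY = auto()     # gathering the patient's history
    DIAGNOSIS = auto()   # proposing and refining a differential
    FOLLOW_UP = auto()   # management and follow-up

class ToyDiagnosticAgent:
    """Illustrative state-aware dialogue loop -- not Google's AMIE."""

    def __init__(self):
        self.phase = Phase.HISTORY
        self.evidence = {}  # e.g. {"complaint": "...", "rash_photo": b"..."}

    def knowledge_gaps(self):
        # Toy rule: a skin complaint without a photo is a gap worth closing
        # before moving from history-taking to diagnosis.
        if "skin" in self.evidence.get("complaint", "") and "rash_photo" not in self.evidence:
            return ["a photo of the rash"]
        return []

    def next_action(self):
        gaps = self.knowledge_gaps()
        if gaps:
            return "Could you share " + " and ".join(gaps) + "?"
        if self.phase is Phase.HISTORY:
            self.phase = Phase.DIAGNOSIS
            return "Here is my working differential diagnosis..."
        self.phase = Phase.FOLLOW_UP
        return "Let's discuss management and arrange a follow-up."

agent = ToyDiagnosticAgent()
agent.evidence["complaint"] = "itchy skin rash on the forearm"
print(agent.next_action())  # asks for the rash photo before diagnosing
```

The point of the pattern is simply that the request for visual evidence is driven by the agent's own assessment of what it still lacks, not by a fixed script.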

To get this right without endless trial-and-error on real people, Google built a detailed simulation lab.

Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then, they let AMIE ‘chat’ with simulated patients within this setup and automatically check how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’).

The virtual OSCE: Google puts AMIE through its paces

The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE).

Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app.

Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations.

The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information.

Surprising results from the simulated clinic

Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead.

The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details.

Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention.

Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions.

And, on a critical safety note, the study found no statistically significant difference between how often AMIE hallucinated findings from the images and how often the human physicians did.

Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash.

Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans.

While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.”
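
For readers unfamiliar with the metric mentioned above: Top-3 accuracy simply asks whether the correct diagnosis appears anywhere in the model's three highest-ranked candidates. A tiny worked example, using made-up cases rather than data from the study:

```python
def top_k_accuracy(cases, k=3):
    """Fraction of cases whose true diagnosis appears in the top-k differential."""
    hits = sum(case["truth"] in case["differential"][:k] for case in cases)
    return hits / len(cases)

# Invented illustration data -- not from Google's evaluation.
cases = [
    {"truth": "psoriasis", "differential": ["eczema", "psoriasis", "tinea", "lichen planus"]},
    {"truth": "angina",    "differential": ["GERD", "costochondritis", "panic attack"]},
]
print(top_k_accuracy(cases))  # 0.5 -- the first case hits the top 3, the second misses
```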

Important reality checks

Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. 

Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation.

So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent.

The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today.

Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation.

(Photo by Alexander Sinn)

See also: Are AI chatbots really changing the world of work?

Are AI chatbots really changing the world of work?

We’ve heard endless predictions about how AI chatbots will transform work, but the data paints a much calmer picture—at least for now.

Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far.

Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark.

The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption.   

Everyone’s using AI chatbots for work, but where are the benefits?

What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy.

Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools.   

When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training.

So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action.

Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing.

“Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect bigger than just 1%.
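
The comparison the researchers describe is, in spirit, a difference-in-differences design: take the change in outcomes for chatbot users and subtract the change for non-users over the same window. Here is a stylised version on invented numbers — the real study links survey responses to Danish administrative records and controls for far more:

```python
import statistics

# (chatbot_user, period, hourly_wage) -- invented toy records.
records = [
    (True,  "pre",  31.0), (True,  "post", 31.4),
    (False, "pre",  30.0), (False, "post", 30.3),
]

def mean_wage(user, period):
    return statistics.mean(w for u, p, w in records if u == user and p == period)

# Users' wage change minus non-users' wage change over the same window.
did = (mean_wage(True, "post") - mean_wage(True, "pre")) \
    - (mean_wage(False, "post") - mean_wage(False, "pre"))
print(f"difference-in-differences estimate: {did:+.2f}/hour")  # ~ +0.10, near zero
```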

This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech.

Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less.

Productivity gains: More of a gentle nudge than a shove

Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and the gains that do materialise aren’t really making their way into wages.

Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time.

However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs.

Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller.

The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report bigger benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential.

And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat.
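
Putting the paper's two headline figures together shows why nothing surfaces in the pay data. A back-of-the-envelope calculation (ours, not the researchers'):

```python
time_saved = 0.028           # 2.8% of work hours saved, the study's average
passthrough = (0.03, 0.07)   # 3-7% of saved time showing up as earnings

low, high = (time_saved * p for p in passthrough)
print(f"implied wage effect: {low:.2%} to {high:.2%}")
# -> 0.08% to 0.20% -- far below anything detectable in wage statistics
```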

Making new work, not less work

One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks.

This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework.   

What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards.

What’s the verdict on the work impact of AI chatbots?

The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve.

But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul.

“Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude.   

It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light.

The revolution might still be coming, but it seems to be taking its time.   

See also: Claude Integrations: Anthropic adds AI to your favourite work tools

AI in education: Balancing promises and pitfalls

The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges.

There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated world, they need to be ready.

“To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” President Trump declared.

So, what does AI actually look like in the classroom?

One of the biggest hopes for AI in education is making learning more personal. Imagine software that can figure out how individual students are doing, then adjust the pace and materials just for them. This could mean finally moving away from the old one-size-fits-all approach towards learning environments that adapt and offer help exactly where it’s needed.

The US executive order hints at this, wanting to improve results through things like “AI-based high-quality instructional resources” and “high-impact tutoring.”

And what about teachers? AI could be a huge help here too, potentially taking over tedious admin tasks like grading, freeing them up to actually teach. Plus, AI software might offer fresh ways to present information.

Getting kids familiar with AI early on could also take away some of the mystery around the technology. It might spark their “curiosity and creativity” and give them the foundation they need to become “active and responsible participants in the workforce of the future.”

The focus stretches to lifelong learning and getting people ready for the job market. On top of that, AI tools like text-to-speech or translation features can make learning much more accessible for students with disabilities, opening up educational environments for everyone.

Not all smooth sailing: The challenges ahead for AI in education

While the potential is huge, we need to be realistic about the significant hurdles and potential downsides.

First off, AI runs on student data – lots of it. That means we absolutely need strong rules and security to make sure this data is collected ethically, used correctly, and kept safe from breaches. Privacy is paramount here.

Then there’s the bias problem. If the data used to train AI reflects existing unfairness in society (and let’s be honest, it often does), the AI could end up repeating or even worsening those inequalities. Think biased assessments or unfair resource allocation. Careful testing and constant checks are crucial to catch and fix this.

We also can’t ignore the digital divide. If some students don’t have reliable internet, the right devices, or the necessary tech infrastructure at home or school, AI could widen the gap between the haves and have-nots. It’s vital that everyone gets fair access.

There’s also a risk that leaning too heavily on AI education tools might stop students from developing essential skills like critical thinking. We need to teach them how to use AI as a helpful tool, not a crutch they can’t function without.

Maybe the biggest piece of the puzzle, though, is making sure our teachers are ready. As the executive order rightly points out, “We must also invest in our educators and equip them with the tools and knowledge.”

This isn’t just about knowing which buttons to push; teachers need to understand how AI fits into teaching effectively and ethically. That requires solid professional development and ongoing support.

A recent GMB Union poll found that while about a fifth of UK schools are now using AI, staff often aren’t getting the training they need.

Finding the right path forward

It’s going to take everyone – governments, schools, tech companies, and teachers – pulling together to ensure AI plays a positive role in education.

We absolutely need clear policies and standards covering ethics, privacy, bias, and making sure AI is accessible to all students. We also need to keep investing in research to figure out the best ways to use AI in education and to build tools that are fair and effective.

And critically, we need a long-term commitment to teacher education to get educators comfortable and skilled with these changes. Part of this is building broad AI literacy, making sure all students get a basic understanding of this technology and how it impacts society.

AI could be a positive force in education – making it more personalised, efficient, and focused on the skills students actually need. But turning that potential into reality means carefully navigating those tricky ethical, practical, and teaching challenges head-on.

See also: How does AI judge? Anthropic studies the values of Claude

Meta will train AI models using EU user data

Meta has confirmed plans to utilise content shared by its adult users in the European Union (EU) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.
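
Meta hasn't published how these exclusions are enforced, but the stated policy maps naturally onto a filtering step in a data pipeline. A hypothetical sketch; every field name here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    is_public: bool
    author_age: int
    is_private_message: bool
    opted_out: bool

def eligible_for_training(post: Post) -> bool:
    """Apply the stated rules: public content from adults who haven't
    objected, and never private messages. (Hypothetical field names.)"""
    return (post.is_public
            and post.author_age >= 18
            and not post.is_private_message
            and not post.opted_out)

posts = [
    Post("Public recipe post", True, 34, False, False),
    Post("DM to a friend", False, 34, True, False),
    Post("Public post by a minor", True, 16, False, False),
    Post("Public post, user objected", True, 40, False, True),
]
corpus = [p for p in posts if eligible_for_training(p)]
print(len(corpus))  # 1 -- only the consenting adult's public post remains
```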

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Nina Schick, author: Generative AI’s impact on business, politics and society

Nina Schick is a leading speaker and expert on generative AI, renowned for her groundbreaking work at the intersection of technology, society and geopolitics.

As one of the first authors to publish a book on generative AI, she has emerged as a sought-after speaker helping global leaders, businesses, and institutions understand and adapt to this transformative moment.

We spoke to Nina to explore the future of AI-driven innovation, its ethical and political dimensions, and how organisations can lead in this rapidly evolving landscape.

In your view, how will generative AI redefine the foundational structures of business and economic productivity in the coming decade?

I believe generative AI is absolutely going to transform the entire economy as we know it. This moment feels quite similar to around 1993, when we were first being told to prepare for the Internet. Back then, some thirty years ago, we didn’t fully grasp, in our naivety, how profoundly the Internet would go on to reshape business and the broader global economy.

Now, we are witnessing something even more significant. You can think of generative AI as a kind of new combustion engine, but for all forms of human creative and intelligent activity. It’s a fundamental enabler. Every industry, every facet of productivity, will be impacted and ultimately transformed by generative AI. We’re already beginning to see those use cases emerge, and this is only the beginning.

As AI and data continue to evolve as forces shaping society, how do you see them redefining the political agenda and global power dynamics?

When you reflect on just how profound AI is in its capacity to reshape the entire framework of society, it becomes clear that this AI revolution is going to emerge as one of the most important political questions of our generation. Over the past 30 years, we’ve already seen how the information revolution — driven by the Internet, smartphones, and cloud computing — has become a defining geopolitical force.

Now, we’re layering the AI revolution on top of that, along with the data that fuels it, and the impact is nothing short of seismic. This will evolve into one of the most pressing and influential issues society must address over the coming decades. So, to answer the question directly — AI won’t just influence politics; it will, in many ways, become the very fabric of politics itself.

There’s been much discussion about the Metaverse and immersive tech — how do you see these experiences evolving, and what role do you believe AI will play in architecting this next frontier of digital interaction?

The Metaverse represents a vision for where the Internet may be heading — a future where digital experiences become far more immersive, intuitive, and experiential. It’s a concept that imagines how we might engage with digital content in a far more lifelike way.

But the really fascinating element here is that artificial intelligence is the key enabler — the actual vehicle — that will allow us to build and scale these kinds of immersive digital environments. So, even though the Metaverse remains largely an untested concept in terms of its final form, what is clear right now is that AI is going to be the engine that generates and populates the content that will live within these immersive spaces.

Considering the transformative power of AI and big data, what ethical imperatives must policymakers and society address to ensure equitable and responsible deployment?

The conversation around ethics, artificial intelligence, and big data is one that is set to become intensely political and highly consequential. It will likely remain a predominant issue for many years to come.

What we’re dealing with here is a technology so transformative that it has the potential to reshape the economy, redefine the labour market, and fundamentally alter the structure of society itself. That’s why the ethical questions — how to ensure this technology is applied in a fair, safe, and responsible manner — will be one of the defining political challenges of our time.

For business leaders navigating digital transformation, what mindset shifts are essential to meaningfully integrate AI into long-term strategy and operations?

For businesses aiming to digitally transform, especially in the era of artificial intelligence, it’s critical to first understand the conceptual paradigm shift we are currently undergoing. Once that foundational understanding is in place, it becomes much easier to explore and adopt AI technologies effectively.

If companies wish to remain competitive and gain a strategic edge, now is the time to start investigating how generative AI can be thoughtfully and effectively integrated into their business models. This includes identifying priority areas where AI can deliver long-term value — not just short-term.

If you put together a generative AI working group to look into this, your business will be far better placed to compete with others that are already using AI to transform their processes.

As one of the earliest voices to articulate the societal implications of generative AI, what catalysed your foresight to explore this space before it entered the mainstream conversation?

My interest in AI didn’t come from a technical background. I’m not a techie. My experience has always been in analysing macro trends that shape society, geopolitics, and the wider world. That perspective is what led me to AI, as it quickly became clear that this technology would have far-reaching societal implications.

I began researching and writing about AI because I saw it as more than just a technological shift. Ultimately, this isn’t only a story about innovation. It’s a story about humanity. Generative AI, as an exponential technology built and directed by humans, is going to transform not just the way we work, but the way we live. It will even challenge our understanding of what it means to be human.

Photo by Heidi Fin on Unsplash

Tony Blair Institute AI copyright report sparks backlash

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators—noting its use in various fields from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance, but ultimately led to societal adaptation and human ingenuity prevailing.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report makes recommendations and examines a wide range of related questions, including:

  • The novel forms of art that will emerge from AI.
  • The disagreement between rights holders and developers on copyright, and the wider implications of copyright policy.
  • The serious hurdles facing the UK’s text and data mining proposal, including how to govern an opt-out policy, resolve implementation problems, and make opt-outs useful and accessible.
  • The “diffusion problem”, the identity questions raised by AI summaries, defensive tools as a partial solution, and how to solve licensing problems.
  • Standards on human creativity, digital watermarking, and the uncertainty around generative AI’s impact on the industry.

It also proposes establishing a Centre for AI and the Creative Industries, and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents

AI in 2025: Purpose-driven models, human integration, and more

As AI becomes increasingly embedded in our daily lives, industry leaders and experts are forecasting a transformative 2025.

From groundbreaking developments to existential challenges, AI’s evolution will continue to shape industries, change workflows, and spark deeper conversations about its implications.

For this article, AI News caught up with some of the world’s leading minds to see what they envision for the year ahead.

Smaller, purpose-driven models

Grant Shipley, Senior Director of AI at Red Hat, predicts a shift away from valuing AI models by their sizeable parameter counts.

“2025 will be the year when we stop using the number of parameters that models have as a metric to indicate the value of a model,” he said.  

Instead, AI will focus on specific applications. Developers will move towards chaining together smaller models in a manner akin to microservices in software development. This modular, task-based approach is likely to facilitate more efficient and bespoke applications suited to particular needs.
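
As a rough illustration of the microservices analogy, here is the chaining pattern in miniature, with stub functions standing in for small task-specific models (the tasks and outputs are invented):

```python
# Each stub stands in for a small, purpose-built model; in production each
# would sit behind its own endpoint and be composed like microservices.
def detect_language(text: str) -> str:
    return "en"  # stub model

def summarise(text: str) -> str:
    return text[:50] + "..."  # stub model

def classify_sentiment(text: str) -> str:
    return "positive" if "great" in text.lower() else "neutral"  # stub model

PIPELINE = [detect_language, summarise, classify_sentiment]

def run_pipeline(text: str) -> dict:
    # Route the input through each narrow model and collect the results,
    # rather than asking one enormous model to do everything at once.
    return {step.__name__: step(text) for step in PIPELINE}

print(run_pipeline("This modular approach works great for bespoke applications."))
```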

Open-source leading the way

Bill Higgins, VP of watsonx Platform Engineering and Open Innovation at IBM, expects open-source AI models will grow in popularity in 2025.

“Despite mounting pressure, many enterprises are still struggling to show measurable returns on their AI investments—and the high licensing fees of proprietary models is a major factor. In 2025, open-source AI solutions will emerge as a dominant force in closing this gap,” he explains.

Alongside the affordability of open-source AI models comes transparency and increased customisation potential, making them ideal for multi-cloud environments. With open-source models matching proprietary systems in power, they could offer a way for enterprises to move beyond experimentation and into scalability.

This plays into a prediction from Nick Burling, SVP at Nasuni, who believes that 2025 will usher in a more measured approach to AI investments. 

“Enterprises will focus on using AI strategically, ensuring that every AI initiative is justified by clear, measurable returns,” said Burling.

Cost efficiency and edge data management will become crucial, helping organisations optimise operations while keeping budgets in check.  

Augmenting human expertise

For Jonathan Siddharth, CEO of Turing, the standout feature of 2025 AI systems will be their ability to learn from human expertise at scale.

“The key advancement will come from teaching AI not just what to do, but how to approach problems with the logical reasoning that coding naturally cultivates,” he says.

Competitiveness, particularly in industries like finance and healthcare, will hinge on mastering this integration of human expertise with AI.  

Behavioural psychology will catch up

Understanding the interplay between human behaviour and AI systems is at the forefront of predictions for Niklas Mortensen, Chief Design Officer at Designit.

“With so many examples of algorithmic bias leading to unwanted outputs – and humans being, well, humans – behavioural psychology will catch up to the AI train,” explained Mortensen.  

The solutions? Experimentation with ‘pause moments’ for human oversight and intentional balance between automation and human control in critical operations such as healthcare and transport.

Mortensen also believes personal AI assistants will finally prove their worth by meeting their long-touted potential in organising our lives efficiently and intuitively.

Bridge between physical and digital worlds

Andy Wilson, Senior Director at Dropbox, envisions AI becoming an indispensable part of our daily lives.

“AI will evolve from being a helpful tool to becoming an integral part of daily life and work – offering innovative ways to connect, create, and collaborate,” Wilson says.  

Mobile devices and wearables will be at the forefront of this transformation, delivering seamless AI-driven experiences.

However, Wilson warns of new questions on boundaries between personal and workplace data, spurred by such integrations.

Driving sustainability goals 

With 2030 sustainability targets looming over companies, Kendra DeKeyrel, VP ESG & Asset Management at IBM, highlights how AI can help fill the gap.

DeKeyrel calls on organisations to adopt AI-powered technologies for managing energy consumption, lifecycle performance, and data centre strain.

“These capabilities can ultimately help progress sustainability goals overall,” she explains.

Unlocking computational power and inference

James Ingram, VP Technology at Streetbees, foresees a shift in computational requirements as AI scales to handle increasingly complex problems.

“The focus will move from pre-training to inference compute,” he said, highlighting the importance of real-time reasoning capabilities.

Expanding context windows will also significantly enhance how AI retains and processes information, likely surpassing human efficiency in certain domains.

Rise of agentic AI and unified data foundations

According to Dominic Wellington, Enterprise Architect at SnapLogic, “Agentic AI marks a more flexible and creative era for AI in 2025.”

However, such systems require robust data integration because siloed information risks undermining their reliability.

Wellington anticipates that 2025 will witness advanced solutions for improving data hygiene, integrity, and lineage—all vital for enabling agentic AI to thrive.  

From hype to reality

Jason Schern, Field CTO of Cognite, predicts that 2025 will be remembered as the year when truly transformative, validated generative AI solutions emerge.

“Through the fog of AI for AI’s sake noise, singular examples of truly transformative embedding of Gen AI into actual workflows will stand out,” predicts Schern.  

These domain-specific AI agents will revolutionise industrial workflows by offering tailored decision-making. Schern cited an example in which AI slashed time-consuming root cause analyses from months to mere minutes.

Deepfakes and crisis of trust

Sophisticated generative AI threatens the authenticity of images, videos, and information, says Siggi Stefnisson, Cyber Safety CTO at Gen.

“Even experts may not be able to tell what’s authentic,” warns Stefnisson.

Combating this crisis requires robust digital credentials for verifying authenticity and promoting trust in increasingly blurred digital realities.
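
Schemes differ (C2PA is the most prominent industry effort), but the core mechanism behind such credentials is a verifiable signature bound to the content's exact bytes, so any alteration becomes detectable. A toy sketch using an HMAC in place of a real certificate chain:

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret"  # stand-in for a real private key/cert chain

def sign(content: bytes) -> str:
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, credential: str) -> bool:
    return hmac.compare_digest(sign(content), credential)

photo = b"...original image bytes..."
credential = sign(photo)
print(verify(photo, credential))             # True: content is untouched
print(verify(photo + b"edit", credential))   # False: any edit breaks the credential
```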

2025: Foundational shifts in the AI landscape

As multiple predictions converge, it’s clear that foundational shifts are on the horizon.

The experts who contributed to this year’s industry predictions highlight smarter applications, stronger integration with human expertise, closer alignment with sustainability goals, and heightened security. However, many also foresee significant ethical challenges.

2025 represents a crucial year: a transition from the initial excitement of AI proliferation to mature and measured adoption that promises value and a more nuanced understanding of its impact.

See also: AI Action Summit: Leaders call for unity and equitable development

Eric Schmidt: AI misuse poses an ‘extreme risk’

Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.

“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warns.

Oversight without stifling innovation

Schmidt urged governments to closely monitor the private tech companies pioneering AI research. He noted that while tech leaders are generally aware of AI’s societal implications, they may make decisions based on values that differ from those of public officials.

“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measures aim to slow the progress of geopolitical adversaries in AI research.

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.

The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  

However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.  

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.

“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.

From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.

While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Eric Schmidt: AI misuse poses an ‘extreme risk’ appeared first on AI News.

Ursula von der Leyen: AI race ‘is far from over’ https://www.artificialintelligence-news.com/news/ursula-von-der-leyen-ai-race-is-far-from-over/ Tue, 11 Feb 2025 16:51:29 +0000

Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris.

While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths to carve a leading role for itself.

“This is the third summit on AI safety in just over one year,” von der Leyen remarked. “In the same period, three new generations of ever more powerful AI models have been released. Some expect models that will approach human reasoning within a year’s time.”

The European Commission President set the tone of the event by contrasting the groundwork laid in previous summits with the urgency of this one.

“Past summits focused on laying the groundwork for AI safety. Together, we built a shared consensus that AI will be safe, that it will promote our values and benefit humanity. But this Summit is focused on action. And that is exactly what we need right now.”

As the world witnesses AI’s disruptive power, von der Leyen urged Europe to “formulate a vision of where we want AI to take us, as society and as humanity.” Growing adoption, “in the key sectors of our economy, and for the key challenges of our times,” provides a golden opportunity for the continent to lead, she argued.

The case for a European approach to the AI race 

Von der Leyen rejected notions that Europe has fallen behind its global competitors.

“Too often, I hear that Europe is late to the race – while the US and China have already got ahead. I disagree,” she stated. “The frontier is constantly moving. And global leadership is still up for grabs.”

Instead of replicating what other regions are doing, she called for doubling down on Europe’s unique strengths to define the continent’s distinct approach to AI.

“Too often, I have heard that we should replicate what others are doing and run after their strengths,” she said. “I think that instead, we should invest in what we can do best and build on our strengths here in Europe, which are our science and technology mastery that we have given to the world.”

Von der Leyen defined three pillars of the so-called “European brand of AI” that set it apart: 1) focusing on high-complexity, industry-specific applications, 2) taking a cooperative, collaborative approach to innovation, and 3) embracing open-source principles.

“This summit shows there is a distinct European brand of AI,” she asserted. “It is already driving innovation and adoption. And it is picking up speed.”

Accelerating innovation: AI factories and gigafactories  

To maintain its competitive edge, Europe must supercharge its AI innovation, von der Leyen stressed.

A key component of this strategy lies in its computational infrastructure. Europe already boasts some of the world’s fastest supercomputers, which are now being leveraged through the creation of “AI factories.”

“In just a few months, we have set up a record of 12 AI factories,” von der Leyen revealed. “And we are investing €10 billion in them. This is not a promise—it is happening right now, and it is the largest public investment for AI in the world, which will unlock over ten times more private investment.”

Beyond these initial steps, von der Leyen unveiled an even more ambitious initiative. AI gigafactories, built on the scale of CERN’s Large Hadron Collider, will provide the infrastructure needed for training AI systems at unprecedented scales. They aim to foster collaboration between researchers, entrepreneurs, and industry leaders.

“We provide the infrastructure for large computational power,” von der Leyen explained. “Talents of the world are welcome. Industries will be able to collaborate and federate their data.”

The cooperative ethos underpinning AI gigafactories is part of a broader European push to balance competition with collaboration.

“AI needs competition but also collaboration,” she emphasised, highlighting that the initiative will serve as a “safe space” for these cooperative efforts.

Building trust with the AI Act

Crucially, von der Leyen reiterated Europe’s commitment to making AI safe and trustworthy. She pointed to the EU AI Act as the cornerstone of this strategy, framing it as a harmonised framework to replace fragmented national regulations across member states.

“The AI Act [will] provide one single set of safety rules across the European Union – 450 million people – instead of 27 different national regulations,” she said, before acknowledging businesses’ concerns about regulatory complexities.

“At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”

€200 billion to remain in the AI race

Financing such ambitious plans naturally requires significant resources. Von der Leyen praised the recently launched EU AI Champions Initiative, which has already pledged €150 billion from providers, investors, and industry.

During her speech at the summit, von der Leyen announced the Commission’s complementary InvestAI initiative, which will bring in an additional €50 billion. Altogether, the two initiatives mobilise a massive €200 billion in public-private AI investments.

“We will have a focus on industrial and mission-critical applications,” she said. “It will be the largest public-private partnership in the world for the development of trustworthy AI.”

Ethical AI is a global responsibility

Von der Leyen closed her address by framing Europe’s AI ambitions within a broader, humanitarian perspective, arguing that ethical AI is a global responsibility.

“Cooperative AI can be attractive well beyond Europe, including for our partners in the Global South,” she proclaimed, extending a message of inclusivity.

Von der Leyen expressed full support for the AI Foundation launched at the summit, highlighting its mission to ensure widespread access to AI’s benefits.

“AI can be a gift to humanity. But we must make sure that benefits are widespread and accessible to all,” she remarked.

“We want AI to be a force for good. We want an AI where everyone collaborates and everyone benefits. That is our path – our European way.”

See also: AI Action Summit: Leaders call for unity and equitable development

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Ursula von der Leyen: AI race ‘is far from over’ appeared first on AI News.

AI Action Summit: Leaders call for unity and equitable development https://www.artificialintelligence-news.com/news/ai-action-summit-leaders-call-for-unity-equitable-development/ Mon, 10 Feb 2025 13:07:09 +0000

As the 2025 AI Action Summit kicks off in Paris, global leaders, industry experts, and academics are converging to address the challenges and opportunities presented by AI.

Against the backdrop of rapid technological advancements and growing societal concerns, the summit aims to build on the progress made since the 2024 Seoul Safety Summit and establish a cohesive global framework for AI governance.  

AI Action Summit is ‘a wake-up call’

French President Emmanuel Macron has described the summit as “a wake-up call for Europe,” emphasising the need for collective action in the face of AI’s transformative potential. This comes as the US has committed $500 billion to AI infrastructure.

The UK, meanwhile, has unveiled its AI Opportunities Action Plan. Ahead of the AI Summit, UK tech minister Peter Kyle told The Guardian the AI race must be led by “western, liberal, democratic” countries.

These developments signal a renewed global dedication to harnessing AI’s capabilities while addressing its risks.  

Matt Cloke, CTO at Endava, highlighted the importance of bridging the gap between AI’s potential and its practical implementation.

“Much of the conversation is set to focus on understanding the risks involved with using AI while helping to guide decision-making in an ever-evolving landscape,” he said.  

Cloke also stressed the role of organisations in ensuring AI adoption goes beyond regulatory frameworks.

“Modernising core systems enables organisations to better harness AI while ensuring regulatory compliance,” he explained.

“With improved data management, automation, and integration capabilities, these systems make it easier for organisations to stay agile and quickly adapt to impending regulatory changes.”  

Governance and workforce among critical AI Action Summit topics

Kit Cox, CTO and Founder of Enate, outlined three critical areas for the summit’s agenda.

“First, AI governance needs urgent clarity,” he said. “We must establish global guidelines to ensure AI is safe, ethical, and aligned across nations. A disconnected approach won’t work; we need unity to build trust and drive long-term progress.”

Cox also emphasised the need for a future-ready workforce.

“Employers and governments must invest in upskilling the workforce for an AI-driven world,” he said. “This isn’t just about automation replacing jobs; it’s about creating opportunities through education and training that genuinely prepare people for the future of work.”  

Finally, Cox called for democratising AI’s benefits.

“AI must be fair and democratic both now and in the future,” he said. “The benefits can’t be limited to a select few. We must ensure that AI’s power reaches beyond Silicon Valley to all corners of the globe, creating opportunities for everyone to thrive.”  

Developing AI in the public interest

Professor Gina Neff, Professor of Responsible AI at Queen Mary University of London and Executive Director at Cambridge University’s Minderoo Centre for Technology & Democracy, stressed the importance of making AI relatable to everyday life.

“For us in civil society, it’s essential that we bring imaginaries about AI into the everyday,” she said. “From the barista who makes your morning latte to the mechanic fixing your car, they all have to understand how AI impacts them and, crucially, why AI is a human issue.”  

Neff also pushed back against big tech’s dominance in AI development.

“I’ll be taking this spirit of public interest into the Summit and pushing back against big tech’s push for hyperscaling. Thinking about AI as something we’re building together – like we do our cities and local communities – puts us all in a better place.”

Addressing bias and building equitable AI

Professor David Leslie, Professor of Ethics, Technology, and Society at Queen Mary University of London, highlighted the unresolved challenges of bias and diversity in AI systems.

“Over a year after the first AI Safety Summit at Bletchley Park, only incremental progress has been made to address the many problems of cultural bias and toxic and imbalanced training data that have characterised the development and use of Silicon Valley-led frontier AI systems,” he said.

Leslie called for a renewed focus on public interest AI.

“The French AI Action Summit promises to refocus the conversation on AI governance to tackle these and other areas of immediate risk and harm,” he explained. “A main focus will be to think about how to advance public interest AI for all through mission-driven and society-led funding.”  

He proposed the creation of a public interest AI foundation, supported by governments, companies, and philanthropic organisations.

“This type of initiative will have to address issues of algorithmic and data biases head on, at concrete and practice-based levels,” he said. “Only then can it stay true to the goal of making AI technologies – and the infrastructures upon which they depend – accessible global public goods.”  

Systematic evaluation  

Professor Maria Liakata, Professor of Natural Language Processing at Queen Mary University of London, emphasised the need for rigorous evaluation of AI systems.

“AI has the potential to make public service more efficient and accessible,” she said. “But at the moment, we are not evaluating AI systems properly. Regulators are currently on the back foot with evaluation, and developers have no systematic way of offering the evidence regulators need.”  

Liakata called for a flexible and systematic approach to AI evaluation.

“We must remain agile and listen to the voices of all stakeholders,” she said. “This would give us the evidence we need to develop AI regulation and help us get there faster. It would also help us get better at anticipating the risks posed by AI.”  

AI in healthcare: Balancing innovation and ethics

Dr Vivek Singh, Lecturer in Digital Pathology at Barts Cancer Institute, Queen Mary University of London, highlighted the ethical implications of AI in healthcare.

“The Paris AI Action Summit represents a critical opportunity for global collaboration on AI governance and innovation,” he said. “I hope to see actionable commitments that balance ethical considerations with the rapid advancement of AI technologies, ensuring they benefit society as a whole.”  

Singh called for clear frameworks for international cooperation.

“A key outcome would be the establishment of clear frameworks for international cooperation, fostering trust and accountability in AI development and deployment,” he said.  

AI Action Summit: A pivotal moment

The 2025 AI Action Summit in Paris represents a pivotal moment for global AI governance. With calls for unity, equity, and public interest at the forefront, the summit aims to address the challenges of bias, regulation, and workforce readiness while ensuring AI’s benefits are shared equitably.

As world leaders and industry experts converge, the hope is that actionable commitments will pave the way for a more inclusive and ethical AI future.

(Photo by Jorge Gascón)

See also: EU AI Act: What businesses need to know as regulations go live

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI Action Summit: Leaders call for unity and equitable development appeared first on AI News.

World Economic Forum unveils blueprint for equitable AI https://www.artificialintelligence-news.com/news/world-economic-forum-unveils-blueprint-equitable-ai/ Tue, 21 Jan 2025 16:55:43 +0000

The World Economic Forum (WEF) has released a blueprint outlining how AI can drive inclusivity in global economic growth and societal progress. However, it also highlights the challenges in ensuring its benefits are equitably distributed across all nations and peoples.

Developed in partnership with KPMG, the blueprint offers nine strategic objectives to support government leaders, organisations, and key stakeholders through every phase of the AI lifecycle – from innovation to deployment – at local, national, and international levels. These strategies aim to bridge disparities in AI access, infrastructure, advanced computing, and skill development to promote sustainable, long-term growth.

Cathy Li, Head of AI, Data, and the Metaverse at the WEF, said: “Leveraging AI for economic growth and societal progress is a shared goal, yet countries and regions have very different starting points.

“This blueprint serves as a compass, guiding decision-makers toward impact-oriented collaboration and practical solutions that can unlock AI’s full potential.”

Call for regional collaboration and local empowerment

Central to the ‘Blueprint for Intelligent Economies’ is the belief that successful AI adoption must reflect the specific needs of local communities—with strong leadership and collaboration among governments, businesses, entrepreneurs, civil society organisations, and end users.

Solly Malatsi, South Africa’s Minister of Communications and Digital Technologies, commented: “The significant potential of AI remains largely untapped in many regions worldwide. Establishing an inclusive and competitive AI ecosystem will become a crucial priority for all nations.

“Collaboration among multiple stakeholders at the national, regional, and global levels will be essential in fostering growth and prosperity through AI for everyone.”

By tailoring approaches to reflect geographic and cultural nuances, the WEF report suggests nations can create AI systems that address local challenges while also providing a robust bedrock for innovation, investment, and ethical governance. Case studies from nations at varying stages of AI maturity are used throughout the report to illustrate practical, scalable solutions.

For example, cross-border cooperation on shared AI frameworks and pooled resources (such as energy or centralised databanks) is highlighted as a way to overcome resource constraints. Public-private subsidies to make AI-ready devices more affordable present another equitable way forward. These mechanisms aim to lower barriers for local businesses and innovators, enabling them to adopt AI tools and scale their operations.  

Hatem Dowidar, Chief Executive Officer of E&, said: “All nations have a unique opportunity to advance their economic and societal progress through AI. This requires a collaborative approach of intentional leadership from governments supported by active engagement with all stakeholders at all stages of the AI journey.

“Regional and global collaborations remain fundamental pathways to address shared challenges and opportunities, ensure equitable access to key AI capabilities, and responsibly maximise its transformative potential for a lasting value for all.”  

Priority focus areas

While the blueprint features nine strategic objectives, three have been singled out as priority focus areas for national AI strategies:  

  1. Building sustainable AI infrastructure 

Resilient, scalable, and environmentally sustainable AI infrastructure is essential for innovation. However, achieving this vision will require substantial investment, energy, and cross-sector collaboration. Nations must coordinate efforts to ensure that intelligent economies grow in both an equitable and eco-friendly manner.  

  2. Curating diverse and high-quality datasets

AI’s potential hinges on the quality of the data it can access. This strategic objective addresses barriers such as data accessibility, imbalance, and ownership. By ensuring that datasets are inclusive, diverse, and reflective of local languages and cultures, developers can create equitable AI models that avoid bias and meet the needs of all communities.  

  3. Establishing robust ethical and safety guardrails

Governance frameworks are critical for reducing risks like misuse, bias, and ethical breaches. By setting high standards at the outset, nations can cultivate trust in AI systems, laying the groundwork for responsible deployment and innovation. These safeguards are especially vital for promoting human-centred AI that benefits all of society.  

The overall framework outlined in the report has three layers:

  1. Foundation layer: Focuses on sustainable energy, diverse data curation, responsible AI infrastructure, and efficient investment mechanisms.  
  2. Growth layer: Embeds AI into workflows, processes, and devices to accelerate sectoral adoption and boost innovation.  
  3. People layer: Prioritises workforce skills, empowerment, and ethical considerations, ensuring that AI shapes society in a beneficial and inclusive way.

A blueprint for global AI adoption  

The Forum is also championing a multi-stakeholder approach to global AI adoption, blending public and private collaboration. Policymakers are being encouraged to implement supportive legislation and incentives to spark innovation and broaden AI’s reach. Examples include lifelong learning programmes to prepare workers for the AI-powered future and financial policies that enable greater technology access in underserved regions.  

The WEF’s latest initiative reflects growing global recognition that AI will be a cornerstone of the future economy. However, it remains clear that the benefits of this transformative technology will need to be shared equitably to drive societal progress and ensure no one is left behind.  

The Blueprint for Intelligent Economies provides a roadmap for nations to harness AI while addressing the structural barriers that could otherwise deepen existing inequalities. By fostering inclusivity, adopting robust governance, and placing communities at the heart of decision-making, the WEF aims to guide governments, businesses, and innovators toward a sustainable and intelligent future.  

See also: UK Government signs off sweeping AI action plan 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post World Economic Forum unveils blueprint for equitable AI  appeared first on AI News.

Rodolphe Malaguti, Conga: Poor data hinders AI in public services https://www.artificialintelligence-news.com/news/rodolphe-malaguti-conga-poor-data-ai-potential-in-public-services/ Tue, 21 Jan 2025 11:15:19 +0000

According to Rodolphe Malaguti, Product Strategy and Transformation at Conga, poor data structures and legacy systems are hindering the potential of AI in transforming public services.

Taxpayer-funded services in the UK, from the NHS to local councils, are losing out on potential productivity savings of £45 billion per year due to an overwhelming reliance on outdated technology—a figure equivalent to the total cost of running every primary school in the country for a year.   

A report published this week highlights how nearly half of public services are still not accessible online. This forces British citizens to engage in time-consuming and frustrating processes such as applying for support in person, enduring long wait times on hold, or travelling across towns to council offices. Public sector workers are similarly hindered by inefficiencies, such as sifting through mountains of physical letters, which slows down response times and leaves citizens to bear the brunt of government red tape.

“As this report has shown, there is clearly a gap between what the government and public bodies intend to achieve with their digital projects and what they actually deliver,” explained Malaguti. “The public sector still relies heavily upon legacy systems and has clearly struggled to tackle existing poor data structures and inefficiencies across key departments. No doubt this has had a clear impact on decision-making and hindered vital services for vulnerable citizens.”

The struggles persist even in deeply personal and critical scenarios. For example, the current process for registering a death still demands a physical presence, requiring grieving individuals to manage cumbersome bureaucracy while mourning the loss of a loved one. Other outdated processes unnecessarily burden small businesses—one striking example being the need to publish notices in local newspapers simply to purchase a lorry licence, creating further delays and hindering economic growth.

A lack of coordination between departments amplifies these challenges. In some cases, government bodies are using over 500 paper-based processes, leaving systems fragmented and inefficient. Vulnerable individuals suffer disproportionately under this disjointed framework. For instance, patients with long-term health conditions can be forced into interactions with up to 40 different services, repeating the same information as departments repeatedly fail to share data.

“The challenge is that government leaders have previously focused on technology and online interactions, adding layers to services whilst still relying on old data and legacy systems—this has ultimately led to inefficiencies across departments,” added Malaguti.

“Put simply, they have failed to address existing issues or streamline their day-to-day operations. It is critical that data is more readily available and easily shared between departments, particularly if leaders are hoping to employ new technology like AI to analyse this data and drive better outcomes or make strategic decisions for the public sector as a whole.”

Ageing infrastructure: High costs and security risks

The report underscores that ageing infrastructure comes at a steep financial and operational cost. More than one in four digital systems used across the UK’s central government are outdated, a figure that balloons to 70 percent in some departments. Legacy systems also cost significantly more to maintain, up to three to four times as much as keeping technology up to date.

Furthermore, a growing number of these outdated systems are now classified as “red-rated” for reliability and cybersecurity risk. Alarmingly, NHS England experienced 123 critical service outages last year alone. These outages often meant missed appointments and forced healthcare workers to resort to paper-based systems, making it harder for patients to access care when they needed it most.

Malaguti stresses that addressing such challenges goes beyond merely upgrading technology.

“The focus should be on improving data structure, quality, and timeliness. All systems, data, and workflows must be properly structured and fully optimised prior to implementation for these technologies to be effective. Public sector leaders should look to establish clear measurable objectives, as they continue to improve service delivery and core mission impacts.”

Transforming public services

In response to these challenges, Technology Secretary Peter Kyle is announcing an ambitious overhaul of public sector technology to usher in a more modern, efficient, and accessible system. Emphasising the use of AI, digital tools, and “common sense,” the goal is to reform how public services are designed and delivered—streamlining operations across local government, the NHS, and other critical departments.

A package of tools known as ‘Humphrey’ – named after the fictional Whitehall official in popular BBC drama ‘Yes, Minister’ – is set to be made available to all civil servants soon, with some available today.

Humphrey includes:

  • Consult: Analyses the thousands of responses received during government consultations within hours, presenting policymakers and experts with interactive dashboards to directly explore public feedback.
  • Parlex: A tool that enables policymakers to search and analyse decades of parliamentary debate, helping them refine their thinking and manage bills more effectively through both the Commons and the Lords.
  • Minute: A secure AI transcription service that creates customisable meeting summaries in the formats needed by public servants. It is currently being used by multiple central departments in meetings with ministers and is undergoing trials with local councils.
  • Redbox: A generative AI tool tailored to assist civil servants with everyday tasks, such as summarising policies and preparing briefings.
  • Lex: A tool designed to support officials in researching the law by providing analysis and summaries of relevant legislation for specific, complex issues.

The new tools and changes will help to tackle the inefficiencies highlighted in the report while delivering long-term cost savings. By reducing the burden of administrative tasks, the reforms aim to enable public servants, such as doctors and nurses, to spend more time helping the people they serve. For businesses, this could mean faster approvals for essential licences and permits, boosting economic growth and innovation.

“The government’s upcoming reforms and policy updates, where it is expected to deliver on its ‘AI Opportunities Action Plan,’ [will no doubt aim] to speed up processes,” said Malaguti. “Public sector leaders need to be more strategic with their investments and approach these projects with a level head, rolling out a programme in a phased manner, considering each phase of their operations.”

This sweeping transformation will also benefit from an expanded role for the Government Digital Service (GDS). Planned measures include using the GDS to identify cybersecurity vulnerabilities in public sector systems that could be exploited by hackers, enabling services to be made more robust and secure. Such reforms are critical to protect citizens, particularly as the reliance on digital solutions increases.

The broader aim of these reforms is to modernise the UK’s public services to reflect the convenience and efficiencies demanded in a digital-first world. By using technologies like AI, the government hopes to make interactions with public services faster and more intuitive while saving billions for taxpayers in the long run.

As technology reshapes the future of how services are delivered, leaders must ensure they are comprehensively addressing the root causes of inefficiency—primarily old data infrastructure and fragmented workflows. Only then can technological solutions, whether AI or otherwise, achieve their full potential in helping services deliver for the public.

(Photo by Claudio Schwarz)

See also: Biden’s executive order targets energy needs for AI data centres

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Rodolphe Malaguti, Conga: Poor data hinders AI in public services appeared first on AI News.
