Interviews | Latest AI News Interviews | AI News https://www.artificialintelligence-news.com/categories/interviews/ Artificial Intelligence News Fri, 25 Apr 2025 14:07:30 +0000 en-GB Red Hat on open, small language models for responsible, practical AI https://www.artificialintelligence-news.com/news/red-hat-on-open-small-language-models-for-responsible-practical-ai/ https://www.artificialintelligence-news.com/news/red-hat-on-open-small-language-models-for-responsible-practical-ai/#respond Tue, 22 Apr 2025 07:49:15 +0000 https://www.artificialintelligence-news.com/?p=105184 As geopolitical events shape the world, it’s no surprise that they affect technology too – specifically, in the ways that the current AI market is changing, alongside its accepted methodology, how it’s developed, and the ways it’s put to use in the enterprise. The expectations of results from AI are balanced at present with real-world […]

The post Red Hat on open, small language models for responsible, practical AI appeared first on AI News.

As geopolitical events shape the world, it’s no surprise that they affect technology too – specifically, in the ways that the current AI market is changing, alongside its accepted methodology, how it’s developed, and the ways it’s put to use in the enterprise.

Expectations of what AI can deliver are, at present, being weighed against real-world results. A good deal of suspicion about the technology also remains, balanced against those who are embracing it even at this nascent stage. And the closed nature of the well-known LLMs is being challenged by open alternatives such as Llama, DeepSeek, and Baidu’s recently released Ernie X1.

In contrast, open source development provides transparency and the ability to contribute back, which is more in tune with the desire for “responsible AI”: a phrase that encompasses the environmental impact of large models, how AIs are used, what comprises their learning corpora, and issues around data sovereignty, language, and politics. 

As the company that’s demonstrated the viability of an economically-sustainable open source development model for its business, Red Hat wants to extend its open, collaborative, and community-driven approach to AI. We spoke recently to Julio Guijarro, the CTO for EMEA at Red Hat, about the organisation’s efforts to unlock the undoubted power of generative AI models in ways that bring value to the enterprise, in a manner that’s responsible, sustainable, and as transparent as possible. 

Julio underlined how much education is still needed in order for us to more fully understand AI, stating, “Given the significant unknowns about AI’s inner workings, which are rooted in complex science and mathematics, it remains a ‘black box’ for many. This lack of transparency is compounded where it has been developed in largely inaccessible, closed environments.”

There are also issues with language (European and Middle-Eastern languages are very much under-served), data sovereignty, and fundamentally, trust. “Data is an organisation’s most valuable asset, and businesses need to make sure they are aware of the risks of exposing sensitive data to public platforms with varying privacy policies.” 

The Red Hat response 

Red Hat’s response to global demand for AI has been to pursue what it feels will bring the most benefit to end-users, and to remove many of the doubts and caveats that quickly become apparent when the de facto AI services are deployed.

One answer, Julio said, is small language models, running locally or in hybrid clouds, on non-specialist hardware, and accessing local business information. SLMs are compact, efficient alternatives to LLMs, designed to deliver strong performance for specific tasks while requiring significantly fewer computational resources. There are smaller cloud providers that can be utilised to offload some compute, but the key is having the flexibility and freedom to choose to keep business-critical information in-house, close to the model, if desired. That’s important, because information in an organisation changes rapidly. “One challenge with large language models is they can get obsolete quickly because the data generation is not happening in the big clouds. The data is happening next to you and your business processes,” he said. 

There’s also the cost. “Your customer service querying an LLM can present a significant hidden cost – before AI, you knew that when you made a data query, it had a limited and predictable scope. Therefore, you could calculate how much that transaction could cost you. In the case of LLMs, they work on an iterative model. So the more you use it, the better its answer can get, and the more you like it, the more questions you may ask. And every interaction is costing you money. So the same query that before was a single transaction can now become a hundred, depending on who and how is using the model. When you are running a model on-premise, you can have greater control, because the scope is limited by the cost of your own infrastructure, not by the cost of each query.”
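Julio’s cost argument can be sketched with some back-of-the-envelope arithmetic. The prices below are hypothetical placeholders, not Red Hat’s figures; the point is only that a conversational LLM session is billed per turn, so its cost scales with engagement rather than with the question:

```python
# Toy illustration (hypothetical prices): why a per-query LLM cost is harder
# to predict than a classic database transaction with a fixed, known scope.

DB_QUERY_COST = 0.0001   # assumed flat cost per database transaction, in $
TOKEN_PRICE = 0.00002    # assumed cost per generated token, in $

def llm_session_cost(turns, avg_tokens_per_turn):
    """Cost of a conversational session: every follow-up turn is billed again."""
    return turns * avg_tokens_per_turn * TOKEN_PRICE

# One customer question answered by a single database lookup:
single_db = DB_QUERY_COST

# The same question explored iteratively with an LLM: the user refines the
# answer over many turns, so the "single transaction" becomes a hundred.
chat_1_turn = llm_session_cost(turns=1, avg_tokens_per_turn=500)
chat_100_turns = llm_session_cost(turns=100, avg_tokens_per_turn=500)

print(f"DB query:       ${single_db:.4f}")
print(f"LLM, 1 turn:    ${chat_1_turn:.4f}")
print(f"LLM, 100 turns: ${chat_100_turns:.4f}")  # 100x the single-turn cost
```

Running a model on-premise caps this curve at the cost of your own infrastructure, which is the control Julio describes.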

Organisations needn’t brace themselves for a procurement round that involves writing a huge cheque for GPUs, however. Part of Red Hat’s current work is optimising models (in the open, of course) to run on more standard hardware. It’s possible because the specialist models that many businesses will use don’t need the huge, general-purpose data corpus that has to be processed at high cost with every query. 

“A lot of the work that is happening right now is people looking into large models and removing everything that is not needed for a particular use case. If we want to make AI ubiquitous, it has to be through smaller language models. We are also focused on supporting and improving vLLM (the inference engine project) to make sure people can interact with all these models in an efficient and standardised way wherever they want: locally, at the edge or in the cloud,” Julio said. 

Keeping it small 

Using and referencing local data pertinent to the user means that the outcomes can be crafted according to need. Julio cited projects in the Arab- and Portuguese-speaking worlds that wouldn’t be viable using the English-centric household name LLMs. 

There are a couple of other issues, too, that early-adopter organisations have found in practical, day-to-day use of LLMs. The first is latency, which can be problematic in time-sensitive or customer-facing contexts. Having focused resources and relevantly-tailored results just a network hop or two away makes sense.

Secondly, there is the trust issue: an integral part of responsible AI. Red Hat advocates for open platforms, tools, and models so we can move towards greater transparency, understanding, and the ability for as many people as possible to contribute. “It is going to be critical for everybody,” Julio said. “We are building capabilities to democratise AI, and that’s not only publishing a model, it’s giving users the tools to be able to replicate them, tune them, and serve them.” 

Red Hat recently acquired Neural Magic to help enterprises more easily scale AI, to improve inference performance, and to provide even greater choice and accessibility in how enterprises build and deploy AI workloads with the vLLM project for open model serving. Red Hat, together with IBM Research, also released InstructLab to open the door to would-be AI builders who aren’t data scientists but who have the right business knowledge.

There’s a great deal of speculation around whether, or when, the AI bubble might burst, but such conversations tend to gravitate to the economic reality that the big LLM providers will soon have to face. Red Hat believes that AI has a future in a use case-specific and inherently open source form, a technology that will make business sense and that will be available to all. To quote Julio’s boss, Matt Hicks (CEO of Red Hat), “The future of AI is open.”

Supporting Assets: 

Tech Journey: Adopt and scale AI


BCG: Analysing the geopolitics of generative AI https://www.artificialintelligence-news.com/news/bcg-analysing-the-geopolitics-of-generative-ai/ https://www.artificialintelligence-news.com/news/bcg-analysing-the-geopolitics-of-generative-ai/#respond Fri, 11 Apr 2025 16:11:17 +0000 https://www.artificialintelligence-news.com/?p=105294 Generative AI is reshaping global competition and geopolitics, presenting challenges and opportunities for nations and businesses alike. Senior figures from Boston Consulting Group (BCG) and its tech division, BCG X, discussed the intricate dynamics of the global AI race, the dominance of superpowers like the US and China, the role of emerging “middle powers,” and […]

The post BCG: Analysing the geopolitics of generative AI appeared first on AI News.

Generative AI is reshaping global competition and geopolitics, presenting challenges and opportunities for nations and businesses alike.

Senior figures from Boston Consulting Group (BCG) and its tech division, BCG X, discussed the intricate dynamics of the global AI race, the dominance of superpowers like the US and China, the role of emerging “middle powers,” and the implications for multinational corporations.

AI investments expose businesses to increasingly tense geopolitics

Sylvain Duranton, Global Leader at BCG X, noted the significant geopolitical risk companies face: “For large companies, close to half of them, 44%, have teams around the world, not just in one country where their headquarters are.”

Sylvain Duranton, Global Leader at BCG X

Many of these businesses operate across numerous countries, making them vulnerable to differing regulations and sovereignty issues. “They’ve built their AI teams and ecosystem far before there was such tension around the world.”

Duranton also pointed to the stark imbalance in the AI supply race, particularly in investment.

Comparing the market capitalisation of tech companies, the US dwarfs Europe by a factor of 20 and the Asia Pacific region by five. Investment figures paint a similar picture, showing a “completely disproportionate” imbalance compared to the relative sizes of the economies.

This AI race is fuelled by massive investments in compute power, frontier models, and the emergence of lighter, open-weight models changing the competitive dynamic.   

Benchmarking national AI capabilities

Nikolaus Lang, Global Leader at the BCG Henderson Institute – BCG’s think tank – detailed the extensive research undertaken to benchmark national GenAI capabilities objectively.

The team analysed the “upstream of GenAI,” focusing on large language model (LLM) development and its six key enablers: capital, computing power, intellectual property, talent, data, and energy.

Using hard data like AI researcher numbers, patents, data centre capacity, and VC investment, they created a comparative analysis. Unsurprisingly, the analysis revealed the US and China as the clear AI frontrunners, maintaining their leads in the geopolitics of AI.

Nikolaus Lang, Global Leader at the BCG Henderson Institute

The US boasts the largest pool of AI specialists (around half a million), immense capital power ($303bn in VC funding, $212bn in tech R&D), and leading compute power (45 GW).

Lang highlighted America’s historical dominance, noting, “the US has been the largest producer of notable AI models with 67%” since 1950, a lead reflected in today’s LLM landscape. This strength is reinforced by “outsized capital power” and strategic restrictions on advanced AI chip access through frameworks like the US AI Diffusion Framework.   

China, the second AI superpower, shows particular strength in data—ranking highly in e-governance and mobile broadband subscriptions, alongside significant data centre capacity (20 GW) and capital power. 

Despite restricted access to the latest chips, Chinese LLMs are rapidly closing the gap with US models. Lang cited the emergence of models like DeepSeek as evidence of this trend, achieved with smaller teams, fewer GPU hours, and previous-generation chips.

China’s progress is also fuelled by heavy investment in AI academic institutions (hosting 45 of the world’s top 100), a leading position in AI patent applications, and significant government-backed VC funding. Lang predicts “governments will play an important role in funding AI work going forward.”
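The composite benchmarking Lang’s team performed can be sketched as a min-max normalisation across the six enablers, averaged per country. The values below mix figures quoted in the article (US VC funding, compute capacity, and talent; China and EU compute; EU talent) with illustrative placeholders for everything else, so the output is indicative only:

```python
# Hypothetical sketch of a six-enabler composite index. Capital is $bn VC,
# compute is GW of data centre capacity, talent is thousands of specialists;
# ip/data/energy are made-up 0-100 placeholder scores, as are China's capital
# and talent and the EU's capital. Not BCG's actual methodology or data.

enablers = ["capital", "compute", "ip", "talent", "data", "energy"]

raw = {
    "US":    {"capital": 303, "compute": 45, "ip": 90, "talent": 500, "data": 70, "energy": 60},
    "China": {"capital": 150, "compute": 20, "ip": 95, "talent": 400, "data": 90, "energy": 80},
    "EU":    {"capital": 60,  "compute": 8,  "ip": 50, "talent": 275, "data": 60, "energy": 50},
}

def normalised_scores(raw, enablers):
    """Min-max normalise each enabler across countries, then average per country."""
    scores = {}
    for country in raw:
        total = 0.0
        for e in enablers:
            vals = [raw[c][e] for c in raw]
            lo, hi = min(vals), max(vals)
            total += (raw[country][e] - lo) / (hi - lo) if hi > lo else 0.0
        scores[country] = total / len(enablers)
    return scores

ranking = sorted(normalised_scores(raw, enablers).items(), key=lambda kv: -kv[1])
print(ranking)  # with these inputs, the US and China lead, as in the article
```

Normalising first matters because the enablers sit on wildly different scales; without it, the talent column (in thousands) would swamp compute (in GW).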

The middle powers: Europe, Middle East, and Asia

Beyond the superpowers, several “middle powers” are carving out niches.

  • EU: While trailing the US and China, the EU holds the third spot with significant data centre capacity (8 GW) and the world’s second-largest AI talent pool (275,000 specialists) when capabilities are combined. Europe also leads in top AI publications. Lang stressed the need for bundled capacities, suggesting AI, defence, and renewables are key areas for future EU momentum.
  • Middle East (UAE & Saudi Arabia): These nations leverage strong capital power via sovereign wealth funds and competitively low electricity prices to attract talent and build compute power, aiming to become AI drivers “from scratch”. They show positive dynamics in attracting AI specialists and are climbing the ranks in AI publications.   
  • Asia (Japan & South Korea): Leveraging strong existing tech ecosystems in hardware and gaming, these countries invest heavily in R&D (around $207bn combined by top tech firms). Government support, particularly in Japan, fosters both supply and demand. Local LLMs and strategic investments by companies like Samsung and SoftBank demonstrate significant activity.   
  • Singapore: Singapore is boosting its AI ecosystem by focusing on talent upskilling programmes, supporting Southeast Asia’s first LLM, ensuring data centre capacity, and fostering adoption through initiatives like establishing AI centres of excellence.   

The geopolitics of generative AI: Strategy and sovereignty

The geopolitics of generative AI is being shaped by four clear dynamics: the US retains its lead, driven by an unrivalled tech ecosystem; China is rapidly closing the gap; middle powers face a strategic choice between building supply or accelerating adoption; and government funding is set to play a pivotal role, particularly as R&D costs climb and commoditisation sets in.

As geopolitical tensions mount, businesses are likely to diversify their GenAI supply chains to spread risk. The race ahead will be defined by how nations and companies navigate the intersection of innovation, policy, and resilience.

(Photo by Markus Krisetya)

See also: OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


Nina Schick, author: Generative AI’s impact on business, politics and society https://www.artificialintelligence-news.com/news/nina-schick-author-generative-ais-impact-on-business-politics-and-society/ https://www.artificialintelligence-news.com/news/nina-schick-author-generative-ais-impact-on-business-politics-and-society/#respond Thu, 10 Apr 2025 05:46:00 +0000 https://www.artificialintelligence-news.com/?p=105109 Nina Schick is a leading speaker and expert on generative AI, renowned for her groundbreaking work at the intersection of technology, society and geopolitics. As one of the first authors to publish a book on generative AI, she has emerged as a sought-after speaker helping global leaders, businesses, and institutions understand and adapt to this […]

The post Nina Schick, author: Generative AI’s impact on business, politics and society appeared first on AI News.

Nina Schick is a leading speaker and expert on generative AI, renowned for her groundbreaking work at the intersection of technology, society and geopolitics.

As one of the first authors to publish a book on generative AI, she has emerged as a sought-after speaker helping global leaders, businesses, and institutions understand and adapt to this transformative moment.

We spoke to Nina to explore the future of AI-driven innovation, its ethical and political dimensions, and how organisations can lead in this rapidly evolving landscape.

In your view, how will generative AI redefine the foundational structures of business and economic productivity in the coming decade?

I believe generative AI is absolutely going to transform the entire economy as we know it. This moment feels quite similar to around 1993, when we were first being told to prepare for the Internet. Back then, some thirty years ago, we didn’t fully grasp, in our naivety, how profoundly the Internet would go on to reshape business and the broader global economy.

Now, we are witnessing something even more significant. You can think of generative AI as a kind of new combustion engine, but for all forms of human creative and intelligent activity. It’s a fundamental enabler. Every industry, every facet of productivity, will be impacted and ultimately transformed by generative AI. We’re already beginning to see those use cases emerge, and this is only the beginning.

As AI and data continue to evolve as forces shaping society, how do you see them redefining the political agenda and global power dynamics?

When you reflect on just how profound AI is in its capacity to reshape the entire framework of society, it becomes clear that this AI revolution is going to emerge as one of the most important political questions of our generation. Over the past 30 years, we’ve already seen how the information revolution — driven by the Internet, smartphones, and cloud computing — has become a defining geopolitical force.

Now, we’re layering the AI revolution on top of that, along with the data that fuels it, and the impact is nothing short of seismic. This will evolve into one of the most pressing and influential issues society must address over the coming decades. So, to answer the question directly — AI won’t just influence politics; it will, in many ways, become the very fabric of politics itself.

There’s been much discussion about the Metaverse and immersive tech — how do you see these experiences evolving, and what role do you believe AI will play in architecting this next frontier of digital interaction?

The Metaverse represents a vision for where the Internet may be heading — a future where digital experiences become far more immersive, intuitive, and experiential. It’s a concept that imagines how we might engage with digital content in a far more lifelike way.

But the really fascinating element here is that artificial intelligence is the key enabler — the actual vehicle — that will allow us to build and scale these kinds of immersive digital environments. So, even though the Metaverse remains largely an untested concept in terms of its final form, what is clear right now is that AI is going to be the engine that generates and populates the content that will live within these immersive spaces.

Considering the transformative power of AI and big data, what ethical imperatives must policymakers and society address to ensure equitable and responsible deployment?

The conversation around ethics, artificial intelligence, and big data is one that is set to become intensely political and highly consequential. It will likely remain a predominant issue for many years to come.

What we’re dealing with here is a technology so transformative that it has the potential to reshape the economy, redefine the labour market, and fundamentally alter the structure of society itself. That’s why the ethical questions — how to ensure this technology is applied in a fair, safe, and responsible manner — will be one of the defining political challenges of our time.

For business leaders navigating digital transformation, what mindset shifts are essential to meaningfully integrate AI into long-term strategy and operations?

For businesses aiming to digitally transform, especially in the era of artificial intelligence, it’s critical to first understand the conceptual paradigm shift we are currently undergoing. Once that foundational understanding is in place, it becomes much easier to explore and adopt AI technologies effectively.

If companies wish to remain competitive and gain a strategic edge, now is the time to start investigating how generative AI can be thoughtfully and effectively integrated into their business models. This includes identifying priority areas where AI can deliver long-term value — not just short-term.

If you put together a generative AI working group to look into this, your business will be transformed and able to compete with other businesses that are using AI to transform their processes.

As one of the earliest voices to articulate the societal implications of generative AI, what catalysed your foresight to explore this space before it entered the mainstream conversation?

My interest in AI didn’t come from a technical background. I’m not a techie. My experience has always been in analysing macro trends that shape society, geopolitics, and the wider world. That perspective is what led me to AI, as it quickly became clear that this technology would have far-reaching societal implications.

I began researching and writing about AI because I saw it as more than just a technological shift. Ultimately, this isn’t only a story about innovation. It’s a story about humanity. Generative AI, as an exponential technology built and directed by humans, is going to transform not just the way we work, but the way we live. It will even challenge our understanding of what it means to be human.

Photo by Heidi Fin on Unsplash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.


Kay Firth-Butterfield, formerly WEF: The future of AI, the metaverse and digital transformation https://www.artificialintelligence-news.com/news/kay-firth-butterfield-formerly-wef-the-future-of-ai-the-metaverse-and-digital-transformation/ https://www.artificialintelligence-news.com/news/kay-firth-butterfield-formerly-wef-the-future-of-ai-the-metaverse-and-digital-transformation/#respond Thu, 03 Apr 2025 06:48:00 +0000 https://www.artificialintelligence-news.com/?p=105112 Kay Firth-Butterfield is a globally recognised leader in ethical artificial intelligence and a distinguished AI ethics speaker. As the former head of AI and Machine Learning at the World Economic Forum (WEF) and one of the foremost voices in AI governance, she has spent her career advocating for technology that enhances, rather than harms, society. […]

The post Kay Firth-Butterfield, formerly WEF: The future of AI, the metaverse and digital transformation appeared first on AI News.

Kay Firth-Butterfield is a globally recognised leader in ethical artificial intelligence and a distinguished AI ethics speaker. As the former head of AI and Machine Learning at the World Economic Forum (WEF) and one of the foremost voices in AI governance, she has spent her career advocating for technology that enhances, rather than harms, society.

We spoke to Kay to discuss the promise and pitfalls of generative AI, the future of the Metaverse, and how organisations can prepare for a decade of unprecedented digital transformation.

Generative AI has captured global attention, but there’s still a great deal of misunderstanding around what it actually is. Could you walk us through what defines generative AI, how it works, and why it’s considered such a transformative evolution of artificial intelligence?

It’s very exciting because it represents the next iteration of artificial intelligence. What generative AI allows you to do is ask questions of the world’s data simply by typing a prompt. If we think back to science fiction, that’s essentially what we’ve always dreamed of — just being able to ask a computer a question and have it draw on all its knowledge to provide an answer.

How does it do that? Well, it predicts which word is likely to come next in a sequence. It does this by accessing enormous volumes of data. We refer to these as large language models. Essentially, the machine ‘reads’ — or at least accesses — all the data available on the open web. In some cases, and this is an area of legal contention, it also accesses IP-protected and copyrighted material. We can expect a great deal of legal debate in this space.

Once the model has ingested all this data, it begins to predict what word naturally follows another, enabling it to construct highly complex and nuanced responses. Anyone who has experimented with it knows that it can return some surprisingly eloquent and insightful content simply through this predictive capability.
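The next-word prediction Kay describes can be illustrated with a toy bigram model. Real LLMs use neural networks trained on vast corpora rather than raw counts, but the counting sketch below (over a made-up sample text) captures the core idea of predicting which word is likely to come next:

```python
# Toy bigram "language model": count which word most often follows each word
# in a tiny sample corpus, then predict by picking the most frequent successor.

from collections import Counter, defaultdict

corpus = (
    "the model reads the data and the model predicts the next word "
    "the next word follows the previous word"
).split()

# follow[w] maps each word to a Counter of the words observed after it
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return follow[word].most_common(1)[0][0]

print(predict("next"))  # "word" always followed "next" in this sample
```

A model like this also shows the failure mode Kay goes on to describe: it can only echo patterns in its training data, so whatever that data contains, right or wrong, is what gets predicted.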

Of course, sometimes it gets things wrong. In the AI community, we call this ‘hallucination’ — essentially, the system fabricates information. That’s a serious issue because in order to rely on AI-generated outputs, we need to reach a point where we can trust the responses. The problem is, once a hallucination enters the data pool, it can be repeated and reinforced by the model.

While much has been said about generative AI’s technical potential, what do you see as the most meaningful societal and business benefits it offers? And what challenges must we address to ensure these advantages are equitably realised?

AI is now accessible to everyone, and that’s incredibly powerful. It’s a hugely democratising tool. It means that small and medium-sized enterprises, which previously couldn’t afford to leverage AI, now can.

However, we also need to be aware that most of the world’s data is created in the United States first, followed by Europe and China. There are clear challenges regarding the datasets these large language models are trained on. They’re not truly using ‘global’ data. They’re working with a limited subset. That has led to discussions around digital colonisation, where content generated from American and European data is projected onto the rest of the world, with an implicit expectation that others will adopt and use it.

Different cultures, of course, require different responses. So, while there are countless benefits to generative AI, there are also significant challenges that we must address if we want to ensure fair and inclusive outcomes.

The Metaverse has seen both hype and hesitation in recent years. From your perspective, what is the current trajectory of the Metaverse, and how do you see its role evolving within business environments over the next five years?

It’s interesting. We went through a phase of huge excitement around the Metaverse, where everyone wanted to be involved. But now we’ve entered more of a Metaverse winter, or perhaps autumn, as it’s become clear just how difficult it is to create compelling content for these immersive spaces.

We’re seeing strong use cases in industrial applications, but we’re still far from achieving that Ready Player One vision — where we live, shop, buy property, and fully interact in 3D virtual environments. That’s largely because the level of compute power and creative resources needed to build truly immersive experiences is enormous.

In five years’ time, I think we’ll start to see the Metaverse delivering on more of its promises for business. Customers may enjoy exceptional shopping experiences—entering virtual stores rather than simply browsing online, where they can ‘feel’ fabrics virtually and make informed decisions in real time.

We may also see remote working evolve, where employees collaborate inside the Metaverse as if they were in the same room. One study found that younger workers often lack adequate supervision when working remotely. In a Metaverse setting, you could offer genuine, interactive supervision and mentorship. It may also help with fostering colleague relationships that are often missed in remote work settings.

Ultimately, the Metaverse removes physical constraints and offers new ways of working and interacting—but we’ll need balance. Many people may not want to spend all their time in fully immersive environments.

Looking ahead, which emerging technologies and AI-driven trends do you anticipate will have the most profound global impact over the next decade. And how should we be preparing for their implications, both economically and ethically?

That’s a great question. It’s a bit like pulling out a crystal ball. But without doubt, generative AI is one of the most significant shifts we’re seeing today. As the technology becomes more refined, it will increasingly power new AI applications through natural language interactions.

Natural Language Processing (NLP) is the AI term for the machine’s ability to understand and interpret human language. In the near future, only elite developers will need to code manually. The rest of us will interact with machines by typing or speaking requests. These systems will not only provide answers, but also write code on our behalf. It’s incredibly powerful, transformative technology.

But there are downsides. One major concern is that AI sometimes fabricates information. And as generative AI becomes more prolific, it’s generating massive volumes of data 24/7. Over time, machine-generated data may outnumber human data, which could distort the digital landscape. We must ensure the AI doesn’t perpetuate falsehoods it has previously generated.

Looking further ahead, this shift raises deep questions about the future of human work. If AI systems can outperform humans in many tasks without fatigue, what becomes of our role? There may be cost savings, but also the very real risk of widespread unemployment.

AI also powers the Metaverse, so progress there is tied to improvements in AI capabilities. I’m also very excited about synthetic biology, which could see huge advancements driven by AI. There’s also likely to be significant interplay between quantum computing and AI, which could bring both benefits and serious challenges.

We’ll see more Internet of Things (IoT) devices as well—but that introduces new issues around security and data protection.

It’s a time of extraordinary opportunity, but also serious risks. Some worry about artificial general intelligence becoming sentient, but I don’t see that as likely just yet. Current models lack causal reasoning. They’re still predictive tools. We would need to add something fundamentally different to reach human-level intelligence. But make no mistake—we are entering an incredibly exciting era.

Adopting new technologies can be both an opportunity and a risk for businesses. In your view, how can organisations strike the right balance between embracing digital transformation and making strategic, informed decisions about AI adoption?

I think it’s vital to adopt the latest technologies, just as it would have been important for Kodak to see the shift coming in the photography industry. Businesses that fail to even explore digital transformation risk being left behind.

However, a word of caution: it’s easy to jump in too quickly and end up with the wrong AI solution — or the wrong systems entirely — for your business. So, I would advise approaching digital transformation with careful thought. Keep your eyes open, and treat each step as a deliberate, strategic business decision.

When you decide that you’re ready to adopt AI, it’s crucial to hold your suppliers to account. Ask the hard questions. Ask detailed questions. Make sure you have someone in-house, or bring in a consultant, who knows enough to help you interrogate the technology properly.

As we all know, one of the greatest wastes of money in digital transformation happens when the right questions aren’t asked up front. Getting it wrong can be incredibly costly, so take the time to get it right.

Photo by petr sidorov on Unsplash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

The post Kay Firth-Butterfield, formerly WEF: The future of AI, the metaverse and digital transformation appeared first on AI News.

]]>
Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation https://www.artificialintelligence-news.com/news/dame-wendy-hall-ai-council-shaping-ai-with-ethics-diversity-and-innovation/ https://www.artificialintelligence-news.com/news/dame-wendy-hall-ai-council-shaping-ai-with-ethics-diversity-and-innovation/#respond Mon, 31 Mar 2025 10:54:40 +0000 https://www.artificialintelligence-news.com/?p=105089 Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council […]

The post Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation appeared first on AI News.

]]>
Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council Member and was named as one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4.

A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI.

In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all.

So, those of us working in computing started discussing why that was the case. There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins.

Despite many efforts, we haven’t managed to change this culture. Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies.

AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline.

But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone.

AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not.

We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy. At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer.

This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks.

The challenge is learning how to maximise the benefits for humanity, society and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education.

AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies.

Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model.

It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.
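That “next word” idea can be illustrated with a deliberately simple sketch: count which word follows which in a small corpus, then predict the most frequent successor. Production LLMs use neural networks trained on vast datasets rather than raw counts, but the predictive principle is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows which in a tiny
# made-up corpus, then predict the most frequent successor. This is an
# illustration of the statistical principle only, not how LLMs are built.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" twice; "mat" and "fish" once each
```

No understanding is involved at any point: the program, like a language model, only reproduces statistical patterns found in its training data.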

With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making?

Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information.

I compare it to when calculators first appeared. People were outraged: ‘How can we allow calculators in schools? Can we trust the answers they provide?’ But over time, we adapted. The finance industry, for example, is now entirely run by computers, yet it employs more people than ever before. I expect we’ll see something similar with generative AI.

People will be relieved not to have to write endless essays. AI will enhance creativity and efficiency, but it must be viewed as a tool to augment human intelligence, not replace it, because it’s simply not advanced enough to take over.

Look at the legal industry. AI can summarise vast amounts of data, assess the viability of legal cases, and provide predictive analysis. In the medical field, AI could support diagnoses. In education, it could help assess struggling students.

I envision AI being integrated into decision-making teams. We will consult AI, ask it questions, and use its responses as a guide — but it’s crucial to remember that AI is not infallible.

Right now, AI models are trained on biased data. If they rely on information from the internet, much of that data is inaccurate. AI systems also ‘hallucinate’ by generating false information when they don’t have a definitive answer. That’s why we can’t fully trust AI yet.

Instead, we must treat it as a collaborative partner — one that helps us be more productive and creative while ensuring that humans remain in control. Perhaps AI will even pave the way for shorter workweeks, giving us more time for other pursuits.

Photo by Igor Omilaev on Unsplash and AI Speakers Agency.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

]]>
Frankie Woodhead, Thrive: Why neurodiverse input is crucial for AI development https://www.artificialintelligence-news.com/news/frankie-woodhead-thrive-why-neurodiverse-input-is-crucial-for-ai-development/ https://www.artificialintelligence-news.com/news/frankie-woodhead-thrive-why-neurodiverse-input-is-crucial-for-ai-development/#respond Mon, 24 Mar 2025 13:18:28 +0000 https://www.artificialintelligence-news.com/?p=104971 AI is shaping the future, but is it truly designed for everyone? Frankie Woodhead, chief product & technology officer at AI-powered learning management system, Thrive, argues that neurodiverse input is not just beneficial but essential for creating inclusive, ethical and effective AI systems. In this Q&A, Woodhead explores how neurodivergent talent enhances AI development, helps […]

The post Frankie Woodhead, Thrive: Why neurodiverse input is crucial for AI development appeared first on AI News.

]]>
AI is shaping the future, but is it truly designed for everyone? Frankie Woodhead, chief product & technology officer at AI-powered learning management system, Thrive, argues that neurodiverse input is not just beneficial but essential for creating inclusive, ethical and effective AI systems. In this Q&A, Woodhead explores how neurodivergent talent enhances AI development, helps combat bias, and drives innovation – offering insights on how businesses can foster a more inclusive tech industry.

Why is it important to have neurodiverse input into AI development?

Neurodiverse perspectives are absolutely critical for AI development, and it goes far beyond simply ticking a box for diversity. It’s about building AI that’s truly inclusive and reflects the diverse ways people think, learn, and interact with technology. Neurodiverse individuals bring fresh perspectives to UX and design, ensuring AI interfaces are intuitive and accessible for a wider range of cognitive styles. In my experience, there’s a direct correlation between neurodiversity and the creation of breakthrough solutions. Without those different perspectives, we risk building biased systems that only work for a narrow segment of the population, perpetuating existing inequalities and limiting the potential of AI.

AI models often struggle with biases. How can neurodivergent perspectives help create more inclusive and ethical AI systems?

Neurodivergent individuals often possess unique cognitive strengths, such as a heightened ability to identify patterns and inconsistencies, coupled with meticulous attention to detail and logical thinking. This makes them invaluable for spotting biases in AI algorithms and datasets. Their unique perspectives allow them to see potential pitfalls that others might overlook, leading to fairer, more reliable, and ultimately more ethical AI systems that benefit everyone.

How does Thrive incorporate neurodivergent talent in its AI development processes, and what benefits have you seen from this approach?

We’re passionate about making learning accessible and inclusive for everyone, and that starts with recognising the diverse ways people learn. That’s why incorporating diverse perspectives, including neurodivergent talent, is crucial for identifying and mitigating biases in our AI algorithms.

Our focus on accessibility is inherently linked to incorporating neurodivergent talent. We understand that a diverse workforce learns in diverse ways, and AI allows us to tailor the learning experience to individual needs and preferences. By incorporating features like people and product bots, which provide automated answers without requiring human interaction, we are creating a more inclusive learning experience for everyone, including those with diverse learning styles.

As a result, we’ve seen significant improvements in the quality and inclusivity of our AI learning platform, leading to more effective learning, a broader reach, and a stronger ethical foundation.

What are some of the biggest barriers preventing neurodivergent individuals from entering the AI and tech industries, and how can businesses address them?

The biggest barriers are often rigid workplace structures designed for neurotypical employees, coupled with a lack of understanding and acceptance of neurodiversity. Businesses need to prioritise flexibility in work arrangements and communication styles, create sensory-friendly spaces with quiet areas and adjustable lighting, and foster a culture where everyone feels safe, valued, and supported. It’s also important to offer alternatives to traditional social events and team-building activities, implement mentorship programmes pairing neurodivergent employees with supportive colleagues, and enable colleagues to choose their work environment to match their strengths and needs. Providing dedicated space for deeper, focused work with fewer distractions is critical for enabling neurodivergent colleagues to thrive.

With AI playing a growing role in workplace automation, how can it be used to support neurodivergent employees rather than exclude them?

AI should empower neurodivergent employees by providing tools and resources that support their individual needs and learning styles, rather than replacing human interaction or creating new barriers. This includes smart reminders and task management systems to help with organisation, AI-powered chat assistants that can provide quick answers, automated meeting summaries to ensure everyone has clear outputs, and tools to reduce distractions like AI-filtered emails. Personalised learning platforms that offer continuous learning and development with tailored recommendations are also essential. The goal is to leverage AI to create a more accessible, inclusive, and supportive work environment where everyone can reach their full potential.

What practical steps should AI companies take to ensure they are fostering a more neurodiverse and inclusive workforce?

AI companies need to move beyond simply raising awareness and take concrete, measurable action to create a truly neurodiverse and inclusive workforce. This includes moving beyond traditional interviews that often prioritise social skills over technical ability and allowing candidates to choose their preferred interview format. It also means creating an inclusive and accessible work environment with neurodivergent-friendly communication and sensory-friendly office spaces. Investing in comprehensive neurodiversity training for all employees is also crucial for better collaboration. Enabling open and honest conversations amongst smaller groups (one to four people) is also critical for creating a safe space for people to articulate themselves and share their perspectives.

Image by alexmogopro from Pixabay

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

]]>
Dream properties at the touch of a button: Quant and Huawei in Saudi Arabia https://www.artificialintelligence-news.com/news/property-investment-market-changed-by-quant-and-huawei-in-saudi-arabia/ https://www.artificialintelligence-news.com/news/property-investment-market-changed-by-quant-and-huawei-in-saudi-arabia/#respond Mon, 17 Mar 2025 09:30:50 +0000 https://www.artificialintelligence-news.com/?p=104800 Despite being one of the richest countries in the world, and the overwhelming preference of investors for the property market of the country over stocks, Saudi Arabia remained data-poor in the real-estate sector until relatively recently. That was something Ahmed Bukhamseen wanted to change, and his company Quant set out to transform the property investment market and […]

The post Dream properties at the touch of a button: Quant and Huawei in Saudi Arabia appeared first on AI News.

]]>
Despite being one of the richest countries in the world, and the overwhelming preference of investors for the property market of the country over stocks, Saudi Arabia remained data-poor in the real-estate sector until relatively recently.

That was something Ahmed Bukhamseen wanted to change, and his company Quant set out to transform the property investment market and open up pricing information to buyers, speculators, and investors.

As you might expect, at the core of Quant is data – advertised and realised prices, mapping, details of proposed and ongoing construction, geospatial data, imagery, and much more. We spoke to Ahmed at the recent MWC (Mobile World Congress) in Barcelona to talk about the company’s vision and its journey to date. But first we asked what his data platform of choice was – the basis on which the business runs.

“Our main challenge was to first comply with data protection. So, Saudi implemented new regulation laws about data protection, similar to GDPR, run by the Saudi Data and AI Authority. And for that we needed to have a local host for data protection. So that was our main driver for moving from Azure – Microsoft here in Europe – back to Saudi.”

Saudi Arabia, the Middle East, Africa and the APAC are target markets for Chinese tech giant Huawei, but it wasn’t an immediate given for Quant. “We discovered that Huawei offer cloud services late […] about this time last year. We started comparing Huawei with others.”

As Quant’s business model developed, so too did its requirements. The company started out just supplying information it drew from the Saudi municipal data registries, but has since enriched that core information with local data, and high-res satellite imagery.

“We were using a data centre [here] and the data had to go from Saudi to Europe and back. We were running AI on the edge, on images without storing them. But now we have different use cases, so we needed to store data before and after processing, after enrichment or maximisation. There was a huge cost, and we didn’t expect it at the beginning.”

Part of the issue was the imagery the company was pulling down from satellites to get up-to-date pictures of existing buildings on the ground, and those in the process of construction. It was grabbing detailed images once every two weeks, and is about to go live with a daily satellite photography cadence.

“We started testing Huawei in terms of latency, especially, because we streamed high volume data, and would like the individuals using our mobile application to get a seamless experience, and start navigating in-app.”

The app released by Quant opens the property market to anyone: individuals buying their first place, portfolio managers scoping new possibilities, investors of all sizes, landlords, and those – like Ahmed was once himself – hoping just to find a place to rent without the massive variance in prices that was the norm before Quant began its service.

Quant combines different data sources – property prices, maps, satellite imagery, and government-approved planning documents among them – to give its app users everything they need to find property or land to buy and sell.

The future will see Quant revising its data, further enriching it, and adding value for its clients. “We need to develop specific feature detection or object detection. For example, municipality data could state a company has zoning for a warehouse or extension, and we don’t know which one the owner decided on. But from space, we can build models and see the exact building plan.”

Although Quant likely has the technical ‘chops’ to build its own models, it’s turning to Huawei:

“With our new development, computing the model [ourselves] would take us six months to deploy. Now when we run it, it takes four hours. And last time I checked with the team, we run it every week,” Ahmed said.

Quant’s plans include a service for retailers that will advise on the best areas to site their stores, given the known demographics of a neighbourhood – all data it collects, collates, enriches, and presents. Plus, there is a project on the table that extends the data scope to take in the rental market.

Given the speed at which Saudi Arabia develops, Quant and delivery partner Huawei need to move at least as fast. “We want to see you in Saudi, and see the return on real estate! We don’t talk about 2% or 5% growth: We’re talking about multiplying investments similar to the Bitcoin market. Foreign investors will be able to buy property or land with just the touch of a button, and they need data for that. They need to know the growth areas, how much it costs in perspective, before making decisions.”

You can download the Quant app, read more about the company on its website, and check out Huawei’s service offerings for Saudi and beyond.

(Image source: European Space Agency, licensed under CC BY-SA 3.0 IGO.)

]]>
CBRE: Leveraging Artificial Intelligence for business growth https://www.artificialintelligence-news.com/news/cbre-leveraging-artificial-intelligence-for-business-growth/ https://www.artificialintelligence-news.com/news/cbre-leveraging-artificial-intelligence-for-business-growth/#respond Fri, 14 Mar 2025 11:10:58 +0000 https://www.artificialintelligence-news.com/?p=104834 At the latest TechEx Global event, we spoke to Ricky Bartlett, UK Lead for Artificial Intelligence and Automation at CBRE GWE, to discuss how AI is transforming business operations at one of the world’s largest real estate firms. From optimising workflows to enhancing customer experiences, Ricky discusses the real-world applications of AI, overcoming scepticism, and […]

The post CBRE: Leveraging Artificial Intelligence for business growth appeared first on AI News.

]]>
At the latest TechEx Global event, we spoke to Ricky Bartlett, UK Lead for Artificial Intelligence and Automation at CBRE GWE, to discuss how AI is transforming business operations at one of the world’s largest real estate firms. From optimising workflows to enhancing customer experiences, Ricky discusses the real-world applications of AI, overcoming scepticism, and the future of AI within CBRE. Whether you’re a large corporation or a small business, this conversation highlights the power of AI in driving efficiency and innovation.

]]>
Experian: Governance, AI, and Democratising Data https://www.artificialintelligence-news.com/news/experian-governance-ai-and-democratising-data/ https://www.artificialintelligence-news.com/news/experian-governance-ai-and-democratising-data/#respond Tue, 11 Mar 2025 15:06:55 +0000 https://www.artificialintelligence-news.com/?p=104912 We spoke to Laurie Schnidman, UK&I Chief Product Officer, Platforms & Software at Experian about their approach to governance, AI, and democratising data. In this insightful interview, Laurie shares strategies for leveraging robust data governance to gain competitive advantages, and highlights common data quality challenges faced by businesses. We delve into preventing bias in machine […]

The post Experian: Governance, AI, and Democratising Data appeared first on AI News.

]]>
We spoke to Laurie Schnidman, UK&I Chief Product Officer, Platforms & Software at Experian about their approach to governance, AI, and democratising data. In this insightful interview, Laurie shares strategies for leveraging robust data governance to gain competitive advantages, and highlights common data quality challenges faced by businesses.

We delve into preventing bias in machine learning, the opportunities and risks of generative AI, and how the Ascend platform empowers non-technical users to manage data effectively, fostering data democratisation across organisations.

]]>
Streambased: Pioneering the streaming data lake https://www.artificialintelligence-news.com/news/streambased-pioneering-the-streaming-data-lake/ https://www.artificialintelligence-news.com/news/streambased-pioneering-the-streaming-data-lake/#respond Sat, 01 Mar 2025 15:33:35 +0000 https://www.artificialintelligence-news.com/?p=104923 Ryan Daws (@Gadget_Ry) sat down with Tom Scott, Founder and CEO of Streambased, at AI & Big Data Expo Global to hear about the power of combining event streaming with large-scale analytics using Apache Kafka. Tom also delves into the emergence of streaming databases and balancing the advantages of event streaming with the ease of […]

The post Streambased: Pioneering the streaming data lake appeared first on AI News.

]]>
Ryan Daws (@Gadget_Ry) sat down with Tom Scott, Founder and CEO of Streambased, at AI & Big Data Expo Global to hear about the power of combining event streaming with large-scale analytics using Apache Kafka. Tom also delves into the emergence of streaming databases and balancing the advantages of event streaming with the ease of SQL queries.

]]>
Endor Labs: AI transparency vs ‘open-washing’ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/ https://www.artificialintelligence-news.com/news/endor-labs-ai-transparency-vs-open-washing/#respond Mon, 24 Feb 2025 18:15:45 +0000 https://www.artificialintelligence-news.com/?p=104605 As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics. Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems. “The US […]

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.

]]>
As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics.

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems.

“The US government’s 2021 Executive Order on Improving the Nation’s Cybersecurity includes a provision requiring organisations to produce a software bill of materials (SBOM) for each product sold to federal government agencies.”

An SBOM is essentially an inventory detailing the open-source components within a product, helping detect vulnerabilities. Stiefel argued that “applying these same principles to AI systems is the logical next step.”  

“Providing better transparency for citizens and government employees not only improves security,” he explained, “but also gives visibility into a model’s datasets, training, weights, and other components.”
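For illustration, the skeleton of an SBOM in the widely used CycloneDX JSON format might look like the following. The listed components are hypothetical, and an AI-focused equivalent of the kind Stiefel describes would extend the inventory to cover datasets, weights, and training code.

```python
import json

# Minimal sketch of a software bill of materials in CycloneDX JSON form.
# The two components are hypothetical examples, not a real product inventory.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {"type": "library", "name": "requests", "version": "2.31.0"},
        {"type": "library", "name": "numpy", "version": "1.26.4"},
    ],
}

print(json.dumps(sbom, indent=2))
```

Because the inventory is machine-readable, tooling can match each named component and version against vulnerability databases automatically, which is what makes the SBOM useful for detecting vulnerable dependencies.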

What does it mean for an AI model to be “open”?  

Julien Sobrier, Senior Product Manager at Endor Labs, added crucial context to the ongoing discussion about AI transparency and “openness.” Sobrier broke down the complexity inherent in categorising AI systems as truly open.

“An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open’. It is a broad definition for now.”  

Sobrier noted the lack of consistency across major players, which has led to confusion about the term.

“Among the main players, the concerns about the definition of ‘open’ started with OpenAI, and Meta is in the news now for their Llama model even though that’s ‘more open’. We need a common understanding of what an open model means. We want to watch out for any ‘open-washing,’ as we saw it with free vs open-source software.”

One potential pitfall, Sobrier highlighted, is the increasingly common practice of “open-washing,” where organisations claim transparency while imposing restrictions.

“With cloud providers offering a paid version of open-source projects (such as databases) without contributing back, we’ve seen a shift in many open-source projects: The source code is still open, but they added many commercial restrictions.”  

“Meta and other ‘open’ LLM providers might go this route to keep their competitive advantage: more openness about the models, but preventing competitors from using them,” Sobrier warned.

DeepSeek aims to increase AI transparency

DeepSeek, one of the rising — albeit controversial — players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.  

“DeepSeek has already released the models and their weights as open-source,” said Andrew Stiefel. “This next move will provide greater transparency into their hosted services, and will give visibility into how they fine-tune and run these models in production.”

Such transparency has significant benefits, noted Stiefel. “This will make it easier for the community to audit their systems for security risks and also for individuals and organisations to run their own versions of DeepSeek in production.”  

Beyond security, DeepSeek also offers a roadmap on how to manage AI infrastructure at scale.

“From a transparency side, we’ll see how DeepSeek is running their hosted services. This will help address security concerns that emerged after it was discovered they left some of their ClickHouse databases unsecured.”

Stiefel highlighted that DeepSeek’s practices with tools like Docker, Kubernetes (K8s), and other infrastructure-as-code (IaC) configurations could empower startups and hobbyists to build similar hosted instances.  

Open-source AI is hot right now

DeepSeek’s transparency initiatives align with the broader trend toward open-source AI. A report by IDC reveals that 60% of organisations are opting for open-source AI models over commercial alternatives for their generative AI (GenAI) projects.  

Endor Labs research further indicates that organisations use, on average, between seven and twenty-one open-source models per application. The reasoning is clear: leveraging the best model for specific tasks and controlling API costs.

“As of February 7th, Endor Labs found that more than 3,500 additional models have been trained or distilled from the original DeepSeek R1 model,” said Stiefel. “This shows both the energy in the open-source AI model community, and why security teams need to understand both a model’s lineage and its potential risks.”  
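Tracking that kind of lineage is mechanically simple. As a minimal sketch (the model names below, other than DeepSeek R1, are made up), a parent map is enough to walk any derivative back to its base:

```python
# Illustrative lineage map: each entry points to the model it was
# trained or distilled from; None marks an original release.
lineage = {
    "deepseek-r1": None,
    "r1-distill-a": "deepseek-r1",
    "r1-distill-b": "deepseek-r1",
    "r1-distill-b-ft": "r1-distill-b",   # a fine-tune of a distillation
}

def ancestry(model: str) -> list:
    """Walk parents back to the original model."""
    chain = []
    while model is not None:
        chain.append(model)
        model = lineage[model]
    return chain

print(ancestry("r1-distill-b-ft"))  # ['r1-distill-b-ft', 'r1-distill-b', 'deepseek-r1']
```

A security team reviewing `r1-distill-b-ft` inherits every question that applies to the two models above it in the chain.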

For Sobrier, the growing adoption of open-source AI models reinforces the need to evaluate their dependencies.

“We need to look at AI models as major dependencies that our software depends on. Companies need to ensure they are legally allowed to use these models but also that they are safe to use in terms of operational risks and supply chain risks, just like open-source libraries.”

He emphasised that any risks can extend to training data: “They need to be confident that the datasets used for training the LLM were not poisoned or had sensitive private information.”  

Building a systematic approach to AI model risk  

As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centred around three key steps:  

  1. Discovery: Detect the AI models your organisation currently uses.  
  2. Evaluation: Review these models for potential risks, including security and operational concerns.  
  3. Response: Set and enforce guardrails to ensure safe and secure model adoption.  
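The three steps above can be sketched as a toy pipeline. Everything here is illustrative: the manifest shape, the risk rules, and the allow/block guardrail are assumptions, while a real tool would scan actual dependency manifests and consult vulnerability and licence databases:

```python
# Sketch of the discovery -> evaluation -> response flow. Rules and
# records are illustrative, not a real policy.

def discover(app_manifest: dict) -> list:
    """Step 1: detect which AI models an application pulls in."""
    return app_manifest.get("models", [])

def evaluate(model: dict) -> list:
    """Step 2: flag potential risks for one model."""
    risks = []
    if model.get("licence") not in {"apache-2.0", "mit"}:
        risks.append("licence-review")
    if not model.get("provenance_known", False):
        risks.append("unknown-lineage")
    return risks

def respond(model: dict, risks: list) -> str:
    """Step 3: enforce a guardrail based on the evaluation."""
    return "block" if "unknown-lineage" in risks else "allow"

manifest = {"models": [
    {"name": "open-model-a", "licence": "apache-2.0", "provenance_known": True},
    {"name": "mystery-model", "licence": "custom", "provenance_known": False},
]}

for m in discover(manifest):
    print(m["name"], respond(m, evaluate(m)))  # open-model-a allow / mystery-model block
```

The point of the structure is the one Stiefel makes: engineers keep latitude to add models, while the security team gets a single place to see and act on what was added.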

“The key is finding the right balance between enabling innovation and managing risk,” Stiefel said. “We need to give software engineering teams latitude to experiment but must do so with full visibility. The security team needs line-of-sight and the insight to act.”  

Sobrier further argued that the community must develop best practices for safely building and adopting AI models. A shared methodology is needed to evaluate AI models across parameters such as security, quality, operational risks, and openness.

Beyond transparency: Measures for a responsible AI future  

To ensure the responsible growth of AI, the industry must adopt controls that operate across several vectors:  

  • SaaS models: Safeguarding employee use of hosted models.
  • API integrations: Developers embedding third-party APIs such as DeepSeek’s into applications; because many of these APIs are OpenAI-compatible, switching providers can take as little as two lines of code.
  • Open-source models: Developers leveraging community-built models or creating their own models from existing foundations maintained by companies like DeepSeek.
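The “two lines” in the API bullet are typically the base URL and the model name. The sketch below shows only that configuration swap; the endpoints and model names are the publicly documented ones at the time of writing, but treat them as assumptions to verify against each provider’s docs before use:

```python
# The settings that usually differ between OpenAI-compatible providers.
# With the official OpenAI client, these map to the base_url and model
# arguments; endpoints/model names below are assumptions to verify.
PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "deepseek": {"base_url": "https://api.deepseek.com",  "model": "deepseek-chat"},
}

def client_config(provider: str) -> dict:
    """Return the two settings an OpenAI-compatible client needs."""
    return PROVIDERS[provider]

cfg = client_config("deepseek")
print(cfg["base_url"], cfg["model"])  # https://api.deepseek.com deepseek-chat
```

That ease of switching is exactly why this vector needs its own controls: a deployment can change providers, and therefore data-handling jurisdictions, with a trivial diff.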

Sobrier warned of complacency in the face of rapid AI progress. “The community needs to build best practices to develop safe and open AI models,” he advised, “and a methodology to rate them along security, quality, operational risks, and openness.”  

As Stiefel succinctly summarised: “Think about security across multiple vectors and implement the appropriate controls for each.”

See also: AI in 2025: Purpose-driven models, human integration, and more

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.

Kosh Duo: Ethical AI integration and future trends

Grace Zheng, Data Analyst at Canon and Founder of Kosh Duo, recently sat down for an interview with AI News during AI & Big Data Expo Global to discuss integrating AI ethically, as well as to share her insights on future trends.

The post Kosh Duo: Ethical AI integration and future trends appeared first on AI News.
