diversity Archives - AI News
https://www.artificialintelligence-news.com/news/tag/diversity/

Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation
Mon, 31 Mar 2025 | https://www.artificialintelligence-news.com/news/dame-wendy-hall-ai-council-shaping-ai-with-ethics-diversity-and-innovation/

Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council Member and was named as one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4.

A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI.

In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all.

So, those of us working in computing started discussing why that was the case. There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins.

Despite many efforts, we haven’t managed to change this culture. Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies.

AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline.

But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone.

AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not.

We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy. At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer.

This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks.

The challenge is learning how to maximise the benefits for humanity, society and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education.

AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies.

Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model.

It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.
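To make that distinction concrete, here is a minimal sketch of the core mechanism, next-word prediction, using a toy bigram model in Python. The corpus and code are invented purely for illustration; real systems like ChatGPT learn probabilities with a neural network trained on vast text corpora rather than raw counts.

```python
from collections import Counter, defaultdict

# A toy 'training set' -- real models learn from vast text corpora.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

# The model 'responds' by continuing the sequence, one word at a time.
word = "the"
for _ in range(4):
    word = predict_next(word)
    print(word, end=" ")  # -> "cat sat on the"
```

However fluent the output looks, the mechanism is statistical continuation rather than understanding, which is exactly the distinction drawn above.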

With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making?

Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information.

I compare it to when calculators first appeared. People were outraged: ‘How can we allow calculators in schools? Can we trust the answers they provide?’ But over time, we adapted. The finance industry, for example, is now entirely run by computers, yet it employs more people than ever before. I expect we’ll see something similar with generative AI.

People will be relieved not to have to write endless essays. AI will enhance creativity and efficiency, but it must be viewed as a tool to augment human intelligence, not replace it, because it’s simply not advanced enough to take over.

Look at the legal industry. AI can summarise vast amounts of data, assess the viability of legal cases, and provide predictive analysis. In the medical field, AI could support diagnoses. In education, it could help assess struggling students.

I envision AI being integrated into decision-making teams. We will consult AI, ask it questions, and use its responses as a guide — but it’s crucial to remember that AI is not infallible.

Right now, AI models are trained on biased data. If they rely on information from the internet, much of that data is inaccurate. AI systems also ‘hallucinate’ by generating false information when they don’t have a definitive answer. That’s why we can’t fully trust AI yet.

Instead, we must treat it as a collaborative partner — one that helps us be more productive and creative while ensuring that humans remain in control. Perhaps AI will even pave the way for shorter workweeks, giving us more time for other pursuits.

Photo by Igor Omilaev on Unsplash and AI Speakers Agency.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Google pledges to fix Gemini’s inaccurate and biased image generation
Thu, 22 Feb 2024 | https://www.artificialintelligence-news.com/news/google-pledges-fix-gemini-inaccurate-biased-image-generation/

Google’s Gemini model has come under fire for producing historically inaccurate and racially skewed images, reigniting concerns about bias in AI systems.

The controversy arose as users flooded social media feeds with examples of Gemini generating pictures of racially diverse Nazis, black medieval English kings, and other improbable scenarios.

Meanwhile, critics also pointed to Gemini’s refusals: it would not depict Caucasians, declined to render churches in San Francisco out of respect for indigenous sensitivities, and avoided sensitive historical events such as the 1989 Tiananmen Square protests.

In response to the backlash, Jack Krawczyk, the product lead for Google’s Gemini Experiences, acknowledged the issue and pledged to rectify it. Krawczyk took to social media platform X to reassure users:

https://twitter.com/JackK/status/1760334258722250785

For now, Google says it is pausing Gemini’s ability to generate images of people.

While acknowledging the need to address diversity in AI-generated content, some argue that Google’s response has been an overcorrection.

Marc Andreessen, the co-founder of Netscape and a16z, recently drew attention to Goody-2, an “outrageously safe” parody AI model that refuses to answer any question it deems problematic. Andreessen warns of a broader trend towards censorship and bias in commercial AI systems, emphasising the potential consequences of such developments.

Addressing the broader implications, experts highlight the centralisation of AI models under a few major corporations and advocate for the development of open-source AI models to promote diversity and mitigate bias.

Yann LeCun, Meta’s chief AI scientist, has stressed that fostering a diverse ecosystem of AI models is as important as having a free and diverse press.

Bindu Reddy, CEO of Abacus.AI, has expressed similar concerns about power becoming concentrated without a healthy ecosystem of open-source models.

As discussions around the ethical and practical implications of AI continue, the need for transparent and inclusive AI development frameworks becomes increasingly apparent.

(Photo by Matt Artz on Unsplash)

See also: Reddit is reportedly selling data for AI training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Lack of STEM diversity is causing AI to have a ‘white male’ bias
Thu, 18 Apr 2019 | https://www.artificialintelligence-news.com/news/stem-diversity-ai-white-male-bias/

A report from New York University’s AI Now Institute has found that a predominantly white, male coding workforce is causing bias in algorithms.

The report highlights that – while gradually narrowing – the lack of diverse representation at major technology companies such as Microsoft, Google, and Facebook is causing AIs to cater more towards white males.

For example, just 15 percent of Facebook’s AI staff are women. The problem is even more pronounced at Google, where just 10 percent are.

Report authors Sarah Myers West, Meredith Whittaker and Kate Crawford wrote:

“To date, the diversity problems of the AI industry and the issues of bias in the systems it builds have tended to be considered separately.

“We suggest that these are two versions of the same problem: issues of discrimination in the workforce and in system building are deeply intertwined.”

As artificial intelligence is used more widely across society, there’s a danger of some groups being excluded from its advantages while “reinforcing a narrow idea of the ‘normal’ person”.

The researchers highlight examples of where this is already happening:

  • Amazon’s controversial Rekognition facial recognition AI struggled with dark-skinned females in particular, although separate analysis has found that other AIs face similar difficulties with non-white males.
  • A résumé-scanning AI that relied on previous examples of successful applicants as its benchmark. The AI downgraded people who included “women’s” in their résumé or who attended women’s colleges; a toy sketch of how such bias is learned follows this list.
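As an illustration of the mechanism only (a hypothetical toy dataset, not Amazon’s actual system or data), the sketch below trains a simple text classifier on historical hiring decisions that were skewed against women. The bias lives entirely in the labels, yet the model dutifully learns a negative weight for the word “women”.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hires, skewed against résumés that
# mention "women's" -- the bias is in the labels, not the code.
resumes = [
    "captain of chess club", "led robotics team",
    "captain of women's chess club", "led women's robotics team",
]
hired = [1, 1, 0, 0]  # biased past decisions used as 'ground truth'

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: "women" gets a negative coefficient,
# so any résumé containing it is scored down.
for word, coef in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
    print(f"{word:10s} {coef:+.2f}")
```

The fix lies not in the model class but in the data and the oversight around it, which is why the report treats workforce diversity and system bias as one problem.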

AI is currently deployed in only a few life-changing areas, but that’s rapidly changing. Law enforcement is already looking to use the technology for identifying criminals, even preemptively in some cases, and for making sentencing decisions – including whether someone should be granted bail.

“The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation,” the researchers noted. “The commercial deployment of these tools is cause for deep concern.”

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

Stanford’s institute ensuring AI ‘represents humanity’ lacks diversity
Fri, 22 Mar 2019 | https://www.artificialintelligence-news.com/news/stanford-institute-ai-humanity-diversity/

An institute established by Stanford University to address concerns that AI may not represent the whole of humanity is lacking in diversity.

The goal of the Institute for Human-Centered Artificial Intelligence is admirable, but the fact that it consists primarily of white males casts doubt on its ability to ensure adequate representation.

Cybersecurity expert Chad Loder noticed that not a single member of Stanford’s new AI faculty was black. Tech site Gizmodo reached out to Stanford, and the university quickly added Juliana Bidadanure, an assistant professor of philosophy.

Part of the institute’s problem could be the very thing it’s attempting to address – that, while improving, there’s still a lack of diversity in STEM-based careers. With revolutionary technologies such as AI, parts of society are in danger of being left behind.

The institute has backing from some big hitters. Figures such as Bill Gates and Gavin Newsom have endorsed its pledge that “creators and designers of AI must be broadly representative of humanity.”

Fighting Algorithmic Bias

Stanford isn’t the only institution fighting the good fight against bias in algorithms.

Earlier this week, AI News reported on the UK government’s launch of an investigation to determine the levels of bias in algorithms that could affect people’s lives.

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI offers tremendous potential – such as policing, recruitment, and financial services – but where poorly implemented systems could seriously harm people’s lives.

Meanwhile, activists like Joy Buolamwini from the Algorithmic Justice League are doing their part to raise awareness of the dangers which bias in AI poses.

In a speech earlier this year, Buolamwini analysed current popular facial recognition algorithms and found serious disparities in accuracy – particularly when recognising black females.

Just imagine surveillance being used with these algorithms. Lighter-skinned males would be recognised in most cases, but darker-skinned females would be mistakenly stopped more often. We’re in serious danger of automating profiling.
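Disparities like the ones Buolamwini found are straightforward to surface once evaluation results are broken down by demographic group. Below is a minimal sketch of such a per-group accuracy check; the numbers are entirely made up for illustration and are not from her audit.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, recognised correctly?)
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

# A large gap between groups is exactly the disparity such audits expose.
for group, n in totals.items():
    print(f"{group:22s} accuracy = {correct[group] / n:.0%}")
```

Checks like this are inexpensive to run before a system is deployed.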

Some efforts are being made to create AIs which detect unintentional bias in other algorithms – but it’s early days for such developments, and they will also need diverse creators.

However it’s tackled, algorithmic bias needs to be eliminated before AI is adopted in areas of society where it would have a negative impact on individuals.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
