Face Recognition | AI News
https://www.artificialintelligence-news.com/categories/ai-applications/ai-face-recognition/
Feed last updated: Fri, 25 Apr 2025 14:07:38 +0000

Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation
Mon, 31 Mar 2025 10:54:40 +0000
https://www.artificialintelligence-news.com/news/dame-wendy-hall-ai-council-shaping-ai-with-ethics-diversity-and-innovation/

Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council Member and was named as one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4.

A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI.

In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all.

So, those of us working in computing started discussing why that was the case. There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins.

Despite many efforts, we haven’t managed to change this culture. Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies.

AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline.

But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone.

AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not.

We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy. At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer.

This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks.

The challenge is learning how to maximise the benefits for humanity, society and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education.

AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies.

Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model.

It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.
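The "predicting the next word" point can be illustrated with a toy sketch. This is a simple bigram frequency model, vastly simpler than the transformer models behind ChatGPT, and the corpus and function names here are invented for illustration, but the principle is the same: the system emits the continuation most likely given its training data, with no understanding involved.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for training data (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" more often than "mat" or "fish"
```

The model has no notion of what a cat is; it simply reproduces statistical regularities in its training text, which is the crucial distinction the passage above draws.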

With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making?

Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information.

I compare it to when calculators first appeared. People were outraged: ‘How can we allow calculators in schools? Can we trust the answers they provide?’ But over time, we adapted. The finance industry, for example, is now entirely run by computers, yet it employs more people than ever before. I expect we’ll see something similar with generative AI.

People will be relieved not to have to write endless essays. AI will enhance creativity and efficiency, but it must be viewed as a tool to augment human intelligence, not replace it, because it’s simply not advanced enough to take over.

Look at the legal industry. AI can summarise vast amounts of data, assess the viability of legal cases, and provide predictive analysis. In the medical field, AI could support diagnoses. In education, it could help assess struggling students.

I envision AI being integrated into decision-making teams. We will consult AI, ask it questions, and use its responses as a guide — but it’s crucial to remember that AI is not infallible.

Right now, AI models are trained on biased data. If they rely on information from the internet, much of that data is inaccurate. AI systems also ‘hallucinate’ by generating false information when they don’t have a definitive answer. That’s why we can’t fully trust AI yet.

Instead, we must treat it as a collaborative partner — one that helps us be more productive and creative while ensuring that humans remain in control. Perhaps AI will even pave the way for shorter workweeks, giving us more time for other pursuits.

Photo by Igor Omilaev on Unsplash and AI Speakers Agency.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Lighthouse AI for Review enhances document eDiscovery
Wed, 26 Mar 2025 12:02:32 +0000
https://www.artificialintelligence-news.com/news/lighthouse-ai-for-review-enhances-document-ediscovery/

In an increasing number of industries, eDiscovery of regulation and compliance documents can make trading (across state borders in the US, for example) less complex.

In an industry like pharmaceuticals, with its often complex supply chains, companies have to be aware of the mass of changing rules and regulations emanating from different legislatures at local and federal levels. It’s no surprise, therefore, that AI can be hugely beneficial in regulated supply chain compliance. Given that AIs excel at reading and parsing documentation and images, service providers like Lighthouse AI use the technology in its different forms to comb through the existing and new documentation that governs the industry.

The company’s latest suite, Lighthouse AI for Review, combines predictive and generative AI, image recognition and OCR, and linguistic modelling to handle use cases in large-volume, time-sensitive settings.

Predictive AI is used for classification of documents and generative AI helps with the review process for better, more defensible, downstream results. The company claims that the linguistic modelling element of the suite refines the platform’s accuracy to levels normally “beyond AI’s capabilities.”

eDiscovery – the broad term

Lighthouse AI is currently six years old, and has analysed billions of documents since 2019, but predictive AI remains important to the software, despite – it might be said – generative AI grabbing most of the headlines in the last 18 months. Fernando Delgado, Director of AI and Analytics at Lighthouse, said, “While much attention has been rightly paid to the impact of GenAI recently, the power and relevancy of predictive AI cannot be overlooked. They do different things, and there is often real value in combining them to handle different elements in the same workflow.”

Given that the blanket term ‘the pharmaceutical industry’ includes concerns as disparate as medical technology, drug research, and production, right through to dispensing stores, the compliance requirements for an individual company in the sector can be wildly varied. “Rather than a one-size-fits-all approach, we’ve been able to shape the technology to fit our unique needs – turning our ideas into real, impactful solutions,” says Christian Mahoney, Counsel at Cleary Gottlieb Steen & Hamilton.

Lighthouse AI for Review covers use cases including AI for Responsive Review, AI for Privilege Review, AI for Privilege Analysis, and AI for PII/PHI/PCI Identification. Lighthouse claims that users of the AI for Responsive Review feature see up to a 40% reduction in the volume of classification and summary documents, with less training required by the LLM before it begins to create ROI.
AI for Privilege Review is also “60% more accurate than keyword-based models,” Lighthouse says.

Visual data is handled by AI for Image Analysis, which uses GenAI to analyse images and, for example, produce text descriptions of media, presenting results in the same interface users interact with for other tasks.

Lighthouse’s AI for PII/PHI/PCI Identification automates the mapping of relationships between entities, and can reduce the need for manual reviews. “The new offerings are highly differentiated and designed to provide the most impact for the volume, velocity, and complexity of eDiscovery,” said Lighthouse CEO, Ron Markezich.

(Image source: “Basel – Roche Building 1” by corno.fulgur75 is licensed under CC BY 2.0.)

See also: Hugging Face calls for open-source focus in the AI Action Plan

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

LG EXAONE Deep is a maths, science, and coding buff
Tue, 18 Mar 2025 12:49:26 +0000
https://www.artificialintelligence-news.com/news/lg-exaone-deep-maths-science-and-coding-buff/

LG AI Research has unveiled EXAONE Deep, a reasoning model that excels in complex problem-solving across maths, science, and coding.

The company highlighted the global challenge in creating advanced reasoning models, noting that currently, only a handful of organisations with foundational models are actively pursuing this complex area. EXAONE Deep aims to compete directly with these leading models, showcasing a competitive level of reasoning ability.

LG AI Research has focused its efforts on dramatically improving EXAONE Deep’s reasoning capabilities in core domains. The model also demonstrates a strong ability to understand and apply knowledge across a broader range of subjects.

The performance benchmarks released by LG AI Research are impressive:

  • Maths: The EXAONE Deep 32B model outperformed a competing model, despite being only 5% of its size, in a demanding mathematics benchmark. Furthermore, the 7.8B and 2.4B versions achieved first place in all major mathematics benchmarks for their respective model sizes.
  • Science and coding: In these areas, the EXAONE Deep models (7.8B and 2.4B) have secured the top spot across all major benchmarks.
  • MMLU (Massive Multitask Language Understanding): The 32B model achieved a score of 83.0 on the MMLU benchmark, which LG AI Research claims is the best performance among domestic Korean models.

The capabilities of the EXAONE Deep 32B model have already garnered international recognition.

Shortly after its release, it was included in the ‘Notable AI Models’ list by US-based non-profit research organisation Epoch AI. This listing places EXAONE Deep alongside its predecessor, EXAONE 3.5, making LG the only Korean entity with models featured on this prestigious list in the past two years.

Maths prowess

EXAONE Deep has demonstrated exceptional mathematical reasoning skills across its various model sizes (32B, 7.8B, and 2.4B). In assessments based on the 2025 academic year’s mathematics curriculum, all three models outperformed global reasoning models of comparable size.

The 32B model achieved a score of 94.5 in a general mathematics competency test and 90.0 in the American Invitational Mathematics Examination (AIME) 2024, a qualifying exam for the US Mathematical Olympiad.

In the AIME 2025, the 32B model matched the performance of DeepSeek-R1—a significantly larger 671B model. This result showcases EXAONE Deep’s efficient learning and strong logical reasoning abilities, particularly when tackling challenging mathematical problems.

The smaller 7.8B and 2.4B models also achieved top rankings in major benchmarks for lightweight and on-device models, respectively. The 7.8B model scored 94.8 on the MATH-500 benchmark and 59.6 on AIME 2025, while the 2.4B model achieved scores of 92.3 and 47.9 in the same evaluations.

Science and coding excellence

EXAONE Deep has also showcased remarkable capabilities in professional science reasoning and software coding.

The 32B model scored 66.1 on the GPQA Diamond test, which assesses problem-solving skills in doctoral-level physics, chemistry, and biology. In the LiveCodeBench evaluation, which measures coding proficiency, the model achieved a score of 59.5, indicating its potential for high-level applications in these expert domains.

The 7.8B and 2.4B models continued this trend of strong performance, both securing first place in the GPQA Diamond and LiveCodeBench benchmarks within their respective size categories. This achievement builds upon the success of the EXAONE 3.5 2.4B model, which previously topped Hugging Face’s LLM Leaderboard in the edge division.

Enhanced general knowledge

Beyond its specialised reasoning capabilities, EXAONE Deep has also demonstrated improved performance in general knowledge understanding.

The 32B model achieved an impressive score of 83.0 on the MMLU benchmark, positioning it as the top-performing domestic model in this comprehensive evaluation. This indicates that EXAONE Deep’s reasoning enhancements extend beyond specific domains and contribute to a broader understanding of various subjects.

LG AI Research believes that EXAONE Deep’s reasoning advancements represent a leap towards a future where AI can tackle increasingly complex problems and contribute to enriching and simplifying human lives through continuous research and innovation.

See also: Baidu undercuts rival AI models with ERNIE 4.5 and ERNIE X1

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

From punch cards to mind control: Human-computer interactions
Wed, 05 Mar 2025 15:22:07 +0000
https://www.artificialintelligence-news.com/news/from-punch-cards-to-mind-control-human-computer-interactions/

The way we interact with our computers and smart devices is very different from previous years. Over the decades, human-computer interfaces have transformed, progressing from simple cardboard punch cards to keyboards and mice, and now extended reality-based AI agents that can converse with us in the same way as we do with friends.

With each advance in human-computer interfaces, we’re getting closer to the goal of seamless, natural interactions with machines, making computers more accessible and integrated with our lives.

Where did it all begin?

Modern computers emerged in the first half of the 20th century and relied on punch cards to feed data into the system and enable binary computations. The cards had a series of punched holes, and light was shone at them. If the light passed through a hole and was detected by the machine, it represented a “one”. Otherwise, it was a “zero”. As you can imagine, it was extremely cumbersome, time-consuming, and error-prone.
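The hole-equals-one encoding described above can be sketched in a few lines. This is a deliberately simplified, hypothetical card layout (real formats such as IBM's 80-column cards encoded characters across multiple rows), but it captures the light-through-a-hole principle:

```python
def read_card_row(row):
    """Decode one card row: 'O' marks a punched hole (light passes -> 1),
    '.' marks intact card (light blocked -> 0)."""
    return [1 if cell == "O" else 0 for cell in row]

# A row encoding the bits 1,0,1,1,0,0,1,0
bits = read_card_row("O.OO..O.")
print(bits)  # [1, 0, 1, 1, 0, 0, 1, 0]

# Interpreting the whole row as one binary number:
value = int("".join(map(str, bits)), 2)
print(value)  # 178
```

Every value fed to the machine had to be punched, carried, and read this way, which is why the paragraph above calls the process cumbersome and error-prone: a single mispunched or torn hole silently flips a bit.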

That changed with the arrival of ENIAC, or Electronic Numerical Integrator and Computer, widely considered to be the first “Turing-complete” device that could solve a variety of numerical problems. Instead of punch cards, operating ENIAC involved manually setting a series of switches and plugging patch cords into a board to configure the computer for specific calculations, while data was inputted via a further series of switches and buttons. It was an improvement over punch cards, but not nearly as dramatic as the arrival of the modern QWERTY electronic keyboard in the early 1950s.

Keyboards, adapted from typewriters, were a game-changer, allowing users to input text-based commands more intuitively. But while they made programming faster, accessibility was still limited to those with knowledge of the highly-technical programming commands required to operate computers.

GUIs and touch

The most important development in terms of computer accessibility was the graphical user interface or GUI, which finally opened computing to the masses. The first GUIs appeared in the late 1960s and were later refined by companies like IBM, Apple, and Microsoft, replacing text-based commands with a visual display made up of icons, menus, and windows.

Alongside the GUI came the iconic “mouse”, which enabled users to “point-and-click” to interact with computers. Suddenly, these machines became easily navigable, allowing almost anyone to operate one. With the arrival of the internet a few years later, the GUI and the mouse helped pave the way for the computing revolution, with computers becoming commonplace in every home and office.

The next major milestone in human-computer interfaces was the touchscreen, which first appeared in the late 1990s and did away with the need for a mouse or a separate keyboard. Users could now interact with their computers by tapping icons on the screen directly, pinching to zoom, and swiping left and right. Touchscreens eventually paved the way for the smartphone revolution that started with the arrival of the Apple iPhone in 2007 and, later, Android devices.

With the rise of mobile computing, the variety of computing devices evolved further, and in the late 2000s and early 2010s, we witnessed the emergence of wearable devices like fitness trackers and smartwatches. Such devices are designed to integrate computers into our everyday lives, and it’s possible to interact with them in newer ways, like subtle gestures and biometric signals. Fitness trackers, for instance, use sensors to keep track of how many steps we take or how far we run, and can monitor a user’s pulse to measure heart rate.

Extended reality & AI avatars

In the last decade, we also saw the first artificial intelligence systems, with early examples being Apple’s Siri and Amazon’s Alexa. AI chatbots use voice recognition technology to enable users to communicate with their devices using their voice.

As AI has advanced, these systems have become increasingly sophisticated and better able to understand complex instructions or questions, and can respond based on the context of the situation. With more advanced chatbots like ChatGPT, it’s possible to engage in lifelike conversations with machines, eliminating the need for any kind of physical input device.

AI is now being combined with emerging augmented reality and virtual reality technologies to further refine human-computer interactions. With AR, we can insert digital information into our surroundings by overlaying it on top of our physical environment. This is enabled by headsets like the Oculus Rift, HoloLens, and Apple Vision Pro, and further pushes the boundaries of what’s possible.

So-called extended reality, or XR, is the latest take on the technology, replacing traditional input methods with eye-tracking and gestures and providing haptic feedback, enabling users to interact with digital objects in physical environments. Instead of being restricted to flat, two-dimensional screens, our entire world becomes a computer through a blend of virtual and physical reality.

The convergence of XR and AI opens the doors to more possibilities. Mawari Network is bringing AI agents and chatbots into the real world through the use of XR technology. It’s creating more meaningful, lifelike interactions by streaming AI avatars directly into our physical environments. The possibilities are endless – imagine an AI-powered virtual assistant standing in your home or a digital concierge that meets you in the hotel lobby, or even an AI passenger that sits next to you in your car, directing you on how to avoid the worst traffic jams. Through its decentralised DePIN infrastructure, it’s enabling AI agents to drop into our lives in real-time.

The technology is nascent but it’s not fantasy. In Germany, tourists can call on an avatar called Emma to guide them to the best spots and eateries in dozens of German cities. Other examples include digital popstars like Naevis, which is pioneering the concept of virtual concerts that can be attended from anywhere.

In the coming years, we can expect to see this XR-based spatial computing combined with brain-computer interfaces, which promise to let users control computers with their thoughts. BCIs use electrodes placed on the scalp and pick up the electrical signals generated by our brains. Although it’s still in its infancy, this technology promises to deliver the most effective human-computer interactions possible.

The future will be seamless

The story of the human-computer interface is still under way, and as our technological capabilities advance, the distinction between digital and physical reality will become increasingly blurred.

Perhaps one day soon, we’ll be living in a world where computers are omnipresent, integrated into every aspect of our lives, similar to Star Trek’s famed holodeck. Our physical realities will be merged with the digital world, and we’ll be able to communicate, find information, and perform actions using only our thoughts. This vision would have been considered fanciful only a few years ago, but the rapid pace of innovation suggests it’s not nearly so far-fetched. Rather, it’s something that the majority of us will live to see.

(Image source: Unsplash)

EU AI Act: What businesses need to know as regulations go live
Fri, 31 Jan 2025 12:52:49 +0000
https://www.artificialintelligence-news.com/news/eu-ai-act-what-businesses-need-know-regulations-go-live/

Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect.

While the full compliance requirements won’t come into force until mid-2025, the initial phase of the EU AI Act begins February 2nd and includes significant prohibitions on specific AI applications. Businesses across the globe that operate in the EU must now navigate a regulatory landscape with strict rules and high stakes.

The new regulations prohibit the deployment or use of several high-risk AI systems. These include applications such as social scoring, emotion recognition, real-time remote biometric identification in public spaces, and other scenarios deemed unacceptable under the Act.

Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it imperative for organisations to understand and comply with the restrictions.  

Early compliance challenges  

“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”


Ergin highlights that even though most compliance requirements will not take effect until mid-2025, the early prohibitions set a decisive tone.

“For businesses, the pressure in 2025 is twofold. They must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production,” he remarks.

Ergin believes the key to compliance and success lies in data governance.

“Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”

To adapt, companies must prioritise strengthening their approach to data quality.

“Strengthening data quality and governance is no longer optional, it’s critical. To ensure both compliance and prove the value of AI, businesses must invest in making sure data is accurate, holistic, integrated, up-to-date and well-governed,” says Ergin.

“This isn’t just about meeting regulatory demands; it’s about enabling AI to deliver real business outcomes. As 82% of EU companies plan to increase their GenAI investments in 2025, ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”

EU AI Act has no borders

The extraterritorial scope of the EU AI Act means non-EU organisations are assuredly not off the hook. As Marcus Evans, a partner at Norton Rose Fulbright, explains, the Act applies far beyond the EU’s borders.

“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organisations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”  

Evans advises businesses to start by auditing their AI use. “At this stage, businesses must first understand where AI is being used in their organisation so that they can then assess whether any use cases may trigger the prohibitions. Building on that initial inventory, a wider governance process can then be introduced to ensure AI use is assessed, remains outside the prohibitions, and complies with the AI Act.”  

While organisations work to align their AI practices with the new regulations, additional challenges remain. Compliance requires addressing other legal complexities such as data protection, intellectual property (IP), and discrimination risks.  

Evans emphasises that raising AI literacy within organisations is also a critical step.

“Any organisations in scope must also take measures to ensure their staff – and anyone else dealing with the operation and use of their AI systems on their behalf – have a sufficient level of AI literacy,” he states.

“AI literacy will play a vital role in AI Act compliance, as those involved in governing and using AI must understand the risks they are managing.”

Encouraging responsible innovation  

The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.

“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.

Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress.

“It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts.

“It is critical that we focus on eliminating bias and prioritising fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”

What’s prohibited under the EU AI Act?  

To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes:  

  • Harmful subliminal, manipulative, and deceptive techniques  
  • Harmful exploitation of vulnerabilities  
  • Unacceptable social scoring  
  • Individual crime risk assessment and prediction (with some exceptions)  
  • Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases  
  • Emotion recognition in areas such as the workplace and education (with some exceptions)  
  • Biometric categorisation to infer sensitive categories (with some exceptions)  
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)  

The Commission’s forthcoming guidance on which “AI systems” fall under these categories will be critical for businesses seeking to ensure compliance and reduce legal risks. Additionally, companies should anticipate further clarification and resources at the national and EU levels, such as the upcoming webinar hosted by the AI Office.
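A compliance team's first pass over an AI inventory, screening each use case against the prohibited categories listed above, could be sketched as a short script. The tag scheme and example systems below are invented for illustration; the matching logic is not the Act's legal tests and is no substitute for legal review.

```python
# First-pass screen of an AI-use inventory against the Act's prohibited
# categories. The category names mirror the article's list; the tag scheme
# and matching logic are invented for illustration, not the Act's legal tests.
PROHIBITED = {
    "subliminal_manipulation": "Harmful subliminal, manipulative, and deceptive techniques",
    "vulnerability_exploitation": "Harmful exploitation of vulnerabilities",
    "social_scoring": "Unacceptable social scoring",
    "crime_prediction": "Individual crime risk assessment and prediction",
    "untargeted_face_scraping": "Untargeted scraping for facial recognition databases",
    "emotion_recognition_work_edu": "Emotion recognition in the workplace and education",
    "biometric_categorisation": "Biometric categorisation to infer sensitive categories",
    "realtime_rbi_public": "Real-time remote biometric identification in public spaces",
}

def screen_use_case(tags: set[str]) -> list[str]:
    """Return the prohibited categories a tagged use case may touch."""
    return [desc for tag, desc in PROHIBITED.items() if tag in tags]

# Hypothetical inventory entries, tagged by the team that owns each system.
inventory = {
    "CV screening chatbot": {"recruitment", "nlp"},
    "Office mood dashboard": {"emotion_recognition_work_edu", "analytics"},
}
for name, tags in inventory.items():
    flagged = screen_use_case(tags)
    if flagged:
        print(f"{name}: legal review required -> {flagged}")
```

A real audit would also capture who operates each system, where its outputs are used, and whether an exception under the Act might apply.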

A new landscape for AI regulations

The early implementation of the EU AI Act represents just the beginning of what is a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organisations must learn to navigate new rules and continuously adapt to future changes.  

For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.

(Photo by Guillaume Périgois)

See also: ChatGPT Gov aims to modernise US government agencies

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post EU AI Act: What businesses need to know as regulations go live appeared first on AI News.

Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents
Fri, 24 Jan 2025 14:03:14 +0000

OpenAI has unveiled Operator, a tool that integrates seamlessly with web browsers to perform tasks autonomously. From filling out forms to ordering groceries, Operator promises to simplify repetitive online activities by interacting directly with websites through clicks, typing, and scrolling.

Designed around a new model called the Computer-Using Agent (CUA), Operator combines GPT-4o’s vision recognition with advanced reasoning capabilities—allowing it to function as a virtual “human-in-the-browser.” Yet, for all its innovation, industry experts see room for refinement.

Yiannis Antoniou, Head of AI, Data, and Analytics at specialist consultancy Lab49, shared his insights on Operator’s significance and positioning in the competitive landscape of agent AI systems.

Agentic AI through a familiar interface

“OpenAI’s announcement of Operator, its latest foray into the agentic AI wars, is both fascinating and incomplete,” said Antoniou, who has over two decades of experience designing AI systems for financial services firms.

“Clearly influenced by Anthropic Claude’s Computer Use system, introduced back in October, Operator streamlines the experience by removing the need for complex infrastructure and focusing on a familiar interface: the browser.”

By designing Operator to work within an environment users already understand (the web browser), OpenAI sidesteps the need for bespoke APIs or integrations.

“By leveraging the world’s most popular interface, OpenAI enhances the user experience and captures immediate interest from the general public. This browser-centric approach creates significant potential for widespread adoption, something Anthropic – despite its early-mover advantage – has struggled to achieve.”

Unlike some competing systems that may feel technical or niche in their application, Operator’s browser-focused framework lowers the barrier to entry and is a step forward in OpenAI’s efforts to democratise AI.

Unique take on usability and security

One of the hallmarks of Operator is its emphasis on adaptability and security, implemented through human-in-the-loop protocols. Antoniou acknowledged these thoughtful usability features but noted that more work is needed.

“Architecturally, Operator’s browser integration closely mirrors Claude’s system. Both involve taking screenshots of the user’s browser and sending them for analysis, as well as controlling the screen via virtual keystrokes and mouse movements. However, Operator introduces thoughtful usability touches. 

“Features like custom instructions for specific websites add a layer of personalisation, and the emphasis on human-in-the-loop safeguards against unauthorised actions – such as purchases, sending emails, or applying for jobs – demonstrate OpenAI’s awareness of potential security risks posed by malicious websites, but more work is clearly needed to make this system widely safe across a variety of scenarios.”

OpenAI has implemented a multi-layered safety framework for Operator, including a takeover mode for secure inputs, user confirmations prior to significant actions, and monitoring systems to detect adversarial behaviour. Furthermore, users can delete browsing data and manage privacy settings directly within the tool.
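OpenAI has not published Operator's internals, but the human-in-the-loop pattern described here can be illustrated generically: the agent gates "significant" actions behind an explicit user confirmation before executing them. The action names and callback below are hypothetical, not OpenAI's actual interface.

```python
# Generic human-in-the-loop gate: "significant" actions require explicit
# user approval before the agent may execute them. Action names and the
# confirm callback are hypothetical, not OpenAI's actual interface.
SIGNIFICANT_ACTIONS = {"purchase", "send_email", "submit_application"}

def execute_action(action: str, payload: dict, confirm) -> str:
    """Run an agent action, deferring to the user for significant ones.

    `confirm` is a callback (e.g. a UI prompt) returning True or False.
    """
    if action in SIGNIFICANT_ACTIONS and not confirm(action, payload):
        return "blocked: user declined"
    # ...dispatch to the browser-automation layer would happen here...
    return f"executed: {action}"

# Simulate a user who approves everything except purchases.
decline_purchases = lambda action, payload: action != "purchase"
print(execute_action("scroll", {}, decline_purchases))                  # executed: scroll
print(execute_action("purchase", {"item": "book"}, decline_purchases))  # blocked: user declined
```

The design choice is that the allowlist of routine actions fails safe: anything classified as significant defaults to requiring a human decision.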

However, Antoniou emphasised that these measures are still evolving—particularly as Operator encounters complex or sensitive tasks. 

OpenAI Operator further democratises AI

Antoniou also sees the release of Operator as a pivotal moment for the consumer AI landscape, albeit one that is still in its early stages. 

“Overall, this is an excellent first attempt at building an agentic system for everyday users, designed around how they naturally interact with technology. As the system develops – with added capabilities and more robust security controls – this limited rollout, priced at $200/month, will serve as a testing ground. 

“Once matured and extended to lower subscription tiers and the free version, Operator has the potential to usher in the era of consumer-facing agents, further democratising AI and embedding it into daily life.”

Designed initially for Pro users at a premium price point, Operator provides OpenAI with an opportunity to learn from early adopters and refine its capabilities.

Antoniou noted that while $200/month might not yet justify the system’s value for most users, investment in making Operator more powerful and accessible could lead to significant competitive advantages for OpenAI in the long run.

“Is it worth $200/month? Perhaps not yet. But as the system evolves, OpenAI’s moat will grow, making it harder for competitors to catch up. Now, the challenge shifts back to Anthropic and Google – both of whom have demonstrated similar capabilities in niche or engineering-focused products – to respond and stay in the game,” he concludes.

As OpenAI continues to fine-tune Operator, the potential to revolutionise how people interact with technology becomes apparent. From collaborations with companies like Instacart, DoorDash, and Uber to use cases in the public sector, Operator aims to balance innovation with trust and safety.

While early limitations and pricing may deter widespread adoption for now, these hurdles might only be temporary as OpenAI commits to enhancing usability and accessibility over time.

See also: OpenAI argues against ChatGPT data deletion in Indian court


Rethinking video surveillance: The case for smarter, more flexible solutions
Thu, 02 Jan 2025 10:17:18 +0000

Video surveillance has come a long way from simple CCTV setups. Today’s businesses demand more – smarter analytics, enhanced security, and seamless scalability. As organisations adopt AI and automation across their operations, video management systems (VMS) face new challenges:

  • How to keep video surveillance scalable and easy to manage?
  • Can AI analytics like face recognition or behaviour detection be integrated without breaking the budget?
  • Is my current system prepared for modern security risks?

These questions are not hypothetical. They represent real obstacles businesses face when managing video surveillance systems. Solving them requires innovative thinking, flexible tools, and a smarter approach to how systems are designed and operated.

The shift to smarter surveillance

Traditional video surveillance systems often fail to meet the needs of dynamic, modern environments. Whether it’s a retail chain looking to analyse customer behaviour or a factory monitoring equipment safety, the tools of yesterday aren’t enough to address today’s demands.

The shift towards smarter surveillance involves integrating modular, AI-driven systems that:

  • Adapt to your specific needs,
  • Automate tedious tasks like footage analysis,
  • Offer advanced analytics, like emotion detection or license plate recognition,
  • Remain accessible to both tech-savvy professionals and beginners.

This isn’t just a technical shift; it’s a shift in mindset. Businesses now see surveillance not only as a security measure but as a strategic tool for operational insight.

Meet Xeoma: The modular approach to smarter surveillance

At the forefront of this smarter surveillance revolution is Xeoma, modular, AI-powered video surveillance software that addresses the challenges of modern businesses:

Modularity for customisation: Xeoma’s plug-and-play structure allows businesses to tailor their surveillance systems. Whether you need facial recognition, vehicle detection, or heatmaps of customer activity, Xeoma makes it easy to add or remove modules as needed.

AI-powered analytics: Xeoma offers cutting-edge features like:

  • Object recognition: Detect and classify objects like people, animals, and vehicles,
  • Voice-to-text: Transcribe spoken words into text,
  • Fire detection: Detect the presence of fire or smoke,
  • Licence plate recognition: Automatically read and record vehicle licence plates,
  • Age and gender recognition: Determine the age range and gender of individuals.

Ease of use: Unlike many systems with steep learning curves, Xeoma is designed to be user-friendly. Its intuitive interface ensures that even non-technical users can quickly set up and operate the software.

Seamless integration: Xeoma integrates with IoT devices, access control systems, and other third-party tools, making it an ideal choice for businesses looking to enhance their existing setups.

Cost efficiency: With Xeoma, you only pay once thanks to the lifetime licences. The pricing structure ensures that businesses of all sizes, from startups to enterprises, can find a solution that fits their budgets.

Unlimited scalability: Xeoma places no limit on the number of cameras it can work with. Whether the system has tens, hundreds, or thousands of cameras, Xeoma will handle them all.

Encrypted communication: Xeoma uses secure communication protocols (HTTPS, SSL/TLS) to encrypt data transmitted between the server, cameras, and clients. This prevents unauthorised access during data transmission.
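Xeoma's own API is not documented in this article, so as a generic illustration of the TLS point (not Xeoma's actual configuration), Python's standard `ssl` module shows what verified encrypted communication means on the client side; the host name in the comment is a placeholder.

```python
import ssl

# A default client context verifies server certificates and hostnames;
# this is the practical meaning of "encrypted communication" on the
# client side, regardless of the VMS vendor.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Connecting to a camera/VMS server would then look like (host is a placeholder):
# import socket
# with socket.create_connection(("vms.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="vms.example.com") as tls:
#         ...  # speak HTTPS over the verified channel
```

The point of using the default context rather than a hand-rolled one is that certificate and hostname checking stay on; disabling either silently downgrades "encrypted" to "interceptable".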

Xeoma’s flexible design and robust features allow it to be tailored to a wide range of scenarios, empowering organisations to meet their unique challenges while staying efficient, secure, and scalable.

How Xeoma benefits your business: Scenarios

Xeoma isn’t just a tool for security – it’s a versatile platform that adapts to your environment, whether you run a small retail store, manage a factory floor, or oversee an entire urban surveillance network.

Retail: Elevating customer experience

Picture this: You manage a busy store where you need to understand peak traffic hours and monitor for shoplifting. With Xeoma one can:

  • Deploy AI-based ‘face recognition’ to discreetly flag known shoplifters or VIP customers to enhance service,
  • Use ‘visitors counter’ and ‘crowd detector’ to identify when foot traffic is highest and allocate staff accordingly,
  • Analyse heatmaps to see which areas of the store attract the most attention, optimising product placement,
  • Add ‘unique visitors counter’ module to your system to group people by frequency of attendance. At the same time, age and gender recognition will assist you in tailoring your promo more accurately,
  • Enhance the results of your marketing efforts with eye tracking by getting insights into human psychology.

Manufacturing: Ensuring workplace safety

On a bustling factory floor, every second matters, and safety is critical. Xeoma can help by:

  • Detecting if workers are in restricted zones using ‘cross-line detector,’
  • Monitoring compliance with safety protocols using helmet and mask detectors,
  • Sending real-time alerts to supervisors about potential hazards, like machinery malfunctions or unauthorised access, via a plethora of means from push notifications to personalised alerts,
  • Elevating trust and satisfaction levels with timelapse and streaming to YouTube.

Urban surveillance: Protecting communities

If you’re part of a city planning team or law enforcement agency, Xeoma scales effortlessly to monitor entire districts:

  • Use licence plate recognition to track vehicles entering and exiting restricted areas,
  • Automate responses to emergencies, from traffic incidents and rule violations (for example, speeding, passing on red traffic light or illegal parking detectors) to public safety threats,
  • Identify suspicious behaviour in crowded public spaces using ‘loitering detector,’
  • Detect graffiti and ads that have prohibited words like “drugs” with text recognition,
  • Recognise faces to find wanted or missing people with face identification.

Education: Safeguarding schools

For schools and universities, safety is a top priority. Xeoma provides:

  • AI alerts with ‘detector of abandoned objects’ and ‘sound detector’ for detecting unattended bags or abnormal behaviour, ensuring quick response times,
  • Smoke and fire detection that allows you to prevent incidents or respond promptly to an outbreak of fire,
  • Smart automated verification with ‘smart-card reader’ and ‘face ID’ that help to avoid the penetration by unauthorised persons,
  • Integration with existing access control systems via API or HTTP protocol for a seamless security solution,
  • Live streaming to your educational entity website or YouTube can enhance parental engagement or build a positive image, while eye tracking serves as an effective anti-cheat solution in monitoring systems.

Hospitality: Enhancing guest experiences

In the hospitality industry, guest satisfaction is everything. Xeoma helps you:

  • Monitor entrances and exits with access control integration for smooth check-ins and check-outs,
  • Use ’emotion detector’ to gauge customer satisfaction in common areas,
  • Ensure staff compliance with protocols to maintain service quality with ‘voice-to-text’ module.

Conclusion: Connecting Xeoma to your vision

Every business has its unique challenges, and Xeoma’s versatility means it can be the solution you need to overcome yours. Imagine running a business where:

  • Your team has actionable insights at their fingertips,
  • Potential threats are flagged before they escalate,
  • Your surveillance system doesn’t just protect – it empowers decision-making and growth.

Xeoma isn’t just about surveillance; it’s about giving you peace of mind, actionable intelligence, and the flexibility to focus on what matters most – your people, your customers, and your vision for the future.

Whether you’re securing a retail space, safeguarding a factory, or protecting an entire community, Xeoma’s modular, AI-powered platform adapts to your goals and grows alongside you.

Ready to see how Xeoma can transform your video surveillance strategy? Explore a free demo and start building your ideal system today.

Western drivers remain sceptical of in-vehicle AI
Tue, 05 Nov 2024 12:58:15 +0000

A global study has unveiled a stark contrast in attitudes towards embracing in-vehicle AI between Eastern and Western markets, with European drivers particularly reluctant.

The research – conducted by MHP – surveyed 4,700 car drivers across China, the US, Germany, the UK, Italy, Sweden, and Poland, revealing significant geographical disparities in AI acceptance and understanding.

According to the study, while AI is becoming integral to modern vehicles, European consumers remain hesitant about its implementation and value proposition.

Regional disparities

The study found that 48 percent of Chinese respondents view in-car AI predominantly as an opportunity, while merely 23 percent of European respondents share this optimistic outlook. In Europe, 39 percent believe AI’s opportunities and risks are broadly balanced, while 24 percent take a negative stance, suggesting the risks outweigh potential benefits.

Understanding of AI technology also varies significantly by region. While over 80 percent of Chinese respondents claim to understand AI’s use in cars, this figure drops to just 54 percent among European drivers, highlighting a notable knowledge gap.

Marcus Willand, Partner at MHP and one of the study’s authors, notes: “The figures show that the prospect of greater safety and comfort due to AI can motivate purchasing decisions. However, the European respondents in particular are often hesitant and price-sensitive.”

The willingness to pay for AI features shows an equally stark divide. Just 23 percent of European drivers expressed willingness to pay for AI functions, compared to 39 percent of Chinese drivers. The study suggests that most users now expect AI features to be standard rather than optional extras.

Graphs showing which features the public believes can be significantly improved by in-vehicle AI.

Dr Nils Schaupensteiner, Associated Partner at MHP and study co-author, said: “Automotive companies need to create innovations with clear added value and develop both direct and indirect monetisation of their AI offerings, for example through data-based business models and improved services.”

In-vehicle AI opportunities

Despite these challenges, traditional automotive manufacturers maintain a trust advantage over tech giants. The study reveals that 64 percent of customers trust established car manufacturers with AI implementation, compared to 50 percent for technology firms like Apple, Google, and Microsoft.

Graph highlighting public trust in various stakeholders regarding in-vehicle AI.

The research identified several key areas where AI could provide significant value across the automotive industry’s value chain, including pattern recognition for quality management, enhanced data management capabilities, AI-driven decision-making systems, and improved customer service through AI-powered communication tools.

“It is worth OEMs and suppliers considering the opportunities offered by the new technology along their entire value chain,” explains Augustin Friedel, Senior Manager and study co-author. “However, the possible uses are diverse and implementation is quite complex.”

The study reveals that while up to 79 percent of respondents express interest in AI-powered features such as driver assistance systems, intelligent route planning, and predictive maintenance, manufacturers face significant challenges in monetising these capabilities, particularly in the European market.

Graph showing the public interest in various in-vehicle AI features.

See also: MIT breakthrough could transform robot training


AI and bots allegedly used to fraudulently boost music streams
Mon, 16 Sep 2024 14:37:10 +0000

A singer from the United States has been accused of manipulating music streaming platforms using AI technologies and bots to fraudulently inflate his stream statistics and earn millions of dollars in royalties.

Michael Smith, 52, from North Carolina, faces charges of wire fraud, conspiracy to commit wire fraud and money laundering.

According to the BBC, authorities allege that this is the first case in which AI has been used to enable a streaming scam on this scale. US Attorney Damian Williams emphasised the scope of the fraud, claiming that Smith took millions of dollars in royalties that should have gone to real musicians, songwriters and rights holders.

The accusations stem from an unsealed indictment alleging that Smith distributed hundreds of thousands of AI-generated songs across multiple streaming platforms. To avoid detection, automated bots streamed the tracks—sometimes up to 10,000 at a time. Smith allegedly earned more than $10 million in illegal royalties over several years.

The FBI played a crucial role in the investigation. The agency’s acting assistant director, Christie M. Curtis, explained that the agency was dedicated to tracking down those who misuse technology to rob people of their earnings while simultaneously undermining the efforts of real artists.

According to the indictment, Smith began working with the CEO of an undisclosed AI music firm around 2018. This co-conspirator allegedly provided Smith with thousands of AI-generated tracks each month. In exchange, Smith offered metadata such as song titles and artist names, along with a share of streaming earnings.

One email exchange between Smith and the unnamed CEO in March 2019 demonstrates how the plot took shape. The executive stated, “Keep in mind what we’re doing musically here…this is not ‘music,’ [but] ‘instant music’.” The email emphasises the operation’s intentional nature, as well as the use of AI to generate large amounts of content with minimal effort. According to the indictment, the technology improved over time, making it harder for streaming platforms to detect fraudulent streams.

In another email dated February, Smith boasted that his AI-generated tracks had accumulated over 4 billion streams and $12 million in royalties since 2019. If convicted, Smith faces significant prison time for the charges brought against him.

The Smith case is not the only one involving bogus music streaming royalties. Earlier this year, a Danish man received an 18-month term for a similar plan. Music streaming platforms like Spotify, Apple Music and YouTube forbid bots and artificial streams from being used to boost royalties. Such behaviour is disruptive and illegal, and platforms have taken steps to combat it through policy changes. For instance, if artificial streams are detected, Spotify charges the label or distributor and music can earn royalties only if it meets certain criteria.
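Streaming platforms do not publish their detection logic, but one family of checks they are widely assumed to use can be illustrated with a toy rate heuristic: flag accounts whose daily play counts exceed what continuous human listening could produce. The thresholds, account names, and data shape below are invented for the example.

```python
# Toy rate heuristic: a day holds 1,440 minutes, so an account logging more
# plays than fit into continuous listening is implausible for a human.
# Thresholds and field names are invented; real platforms do not publish theirs.
AVG_TRACK_MINUTES = 3
MAX_PLAUSIBLE_PER_DAY = (24 * 60) // AVG_TRACK_MINUTES  # 480 plays

def flag_suspicious(daily_plays: dict[str, int]) -> list[str]:
    """Return account ids whose daily stream count exceeds the ceiling."""
    return [acc for acc, plays in daily_plays.items() if plays > MAX_PLAUSIBLE_PER_DAY]

counts = {"listener_a": 35, "bot_farm_17": 10_000, "listener_b": 410}
print(flag_suspicious(counts))  # ['bot_farm_17']
```

Real systems reportedly combine many such signals (device fingerprints, play-duration patterns, account clustering) precisely because any single threshold is easy for a fraudster to stay under.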

Nevertheless, the proliferation of AI-generated music continues to disrupt the music industry. Musicians and record companies fear they will lose revenue and recognition due to AI tools capable of creating music, text and images. Such tools reportedly sometimes use content that musicians and other creators have posted on the internet, raising questions about copyright infringement.

Tension came to a head in 2023 when a track that mimicked the voices of popular artists Drake and The Weeknd went viral, prompting streaming platforms to remove it. Earlier this year, several high-profile musicians, including Billie Eilish, Elvis Costello and Aerosmith, signed an open letter urging the music industry to address the “predatory” use of AI to generate content.

(Photo by israel palacio)

See also: Whitepaper dispels fears of AI-induced job losses

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI and bots allegedly used to fraudulently boost music streams appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-and-bots-allegedly-used-to-fraudulently-boost-music-streams/feed/ 0
Alibaba Cloud launches English version of AI model hub https://www.artificialintelligence-news.com/news/alibaba-cloud-launches-english-version-ai-model-hub/ https://www.artificialintelligence-news.com/news/alibaba-cloud-launches-english-version-ai-model-hub/#respond Tue, 25 Jun 2024 12:16:49 +0000 https://www.artificialintelligence-news.com/?p=15116 Alibaba Cloud has taken a step towards globalising its AI offerings by unveiling an English version of ModelScope, its open-source AI model community. The move aims to bring generative AI capabilities to a wider audience of businesses and developers worldwide. ModelScope, which embodies Alibaba Cloud’s concept of “Model-as-a-Service,” transforms AI models into readily available and […]

The post Alibaba Cloud launches English version of AI model hub appeared first on AI News.

]]>
Alibaba Cloud has taken a step towards globalising its AI offerings by unveiling an English version of ModelScope, its open-source AI model community. The move aims to bring generative AI capabilities to a wider audience of businesses and developers worldwide.

ModelScope, which embodies Alibaba Cloud’s concept of “Model-as-a-Service,” transforms AI models into readily available and deployable services. Since its launch in mainland China in 2022, the platform has grown to become the country’s largest AI model community, boasting over five million developer users.

With this international expansion, developers around the globe will now have access to more than 5,000 advanced AI models. The platform also welcomes user-contributed models, fostering a collaborative ecosystem for AI development.

The English version of ModelScope provides a comprehensive suite of tools and resources to support developers in bringing their AI projects to fruition. This includes access to over 1,500 high-quality Chinese-language datasets and an extensive range of toolkits for data processing. Moreover, the platform offers various modules that allow developers to customise model inference, training, and evaluation with minimal coding requirements.
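The “Model-as-a-Service” idea — a task name resolving to a readily deployable pipeline, with training and serving hidden from the caller — can be illustrated with a toy registry. The names and API below are invented for the sketch and are not ModelScope's actual interface:

```python
# A toy sketch of the "Model-as-a-Service" pattern: a task name resolves
# to a ready-to-run callable, so callers never touch training or serving
# code. This is NOT the real ModelScope API — names here are illustrative.

REGISTRY = {}

def register(task):
    def wrap(fn):
        REGISTRY[task] = fn
        return fn
    return wrap

def pipeline(task):
    """Return a deployable 'service' for a registered task."""
    if task not in REGISTRY:
        raise KeyError(f"no model registered for task {task!r}")
    return REGISTRY[task]

@register("sentiment")
def sentiment(text):
    # Stand-in for a hosted model: a trivial keyword rule.
    return "positive" if "good" in text.lower() else "negative"

classify = pipeline("sentiment")
print(classify("ModelScope looks good"))  # positive
```

The platform's low-code promise is essentially this pattern at scale: thousands of registered models, each reachable by task name rather than by bespoke integration code.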

Alibaba Cloud announced the English version of ModelScope during the 2024 Computer Vision and Pattern Recognition (CVPR) Conference in Seattle. This annual event brings together academics, researchers, and business leaders for a five-day exploration of cutting-edge developments in AI and machine learning through workshops, panels, and keynotes.

The company’s presence at CVPR was further bolstered by the acceptance of more than 30 papers from Alibaba Group, with six selected as oral and highlighted papers. This achievement underscores Alibaba’s commitment to advancing the field of AI research and development.

Conference attendees also had the opportunity to experience firsthand the capabilities of Alibaba’s proprietary Qwen model series at the company’s booth. The demonstration showcased the model’s impressive image and video generation capabilities, providing a glimpse into the potential applications of Alibaba’s AI technologies.

The launch of the English version of ModelScope represents a significant milestone in Alibaba Cloud’s strategy to expand its AI offerings globally.

As businesses and developers worldwide increasingly seek to harness the power of AI, platforms like ModelScope are set to play a crucial role in democratising access to advanced AI capabilities. With its extensive collection of models, datasets, and development tools, Alibaba Cloud’s ModelScope will help to accelerate AI innovation and adoption on a global scale.

(Image Source: www.alibabagroup.com)

See also: SoftBank chief: Forget AGI, ASI will be here within 10 years



]]>
https://www.artificialintelligence-news.com/news/alibaba-cloud-launches-english-version-ai-model-hub/feed/ 0
SAS aims to make AI accessible regardless of skill set with packaged AI models https://www.artificialintelligence-news.com/news/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/ https://www.artificialintelligence-news.com/news/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/#respond Wed, 17 Apr 2024 23:37:00 +0000 https://www.artificialintelligence-news.com/?p=14696 SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on. Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency. Chandana Gopal, research director, Future […]

The post SAS aims to make AI accessible regardless of skill set with packaged AI models appeared first on AI News.

]]>
SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on.

Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency.

Chandana Gopal, research director, Future of Intelligence, IDC, said: “SAS is evolving its portfolio to meet wider user needs and capture market share with innovative new offerings,

“An area that is ripe for SAS is productising models built on SAS’ core assets, talent and IP from its wealth of experience working with customers to solve industry problems.”

In today’s market, the consumption of models is primarily focused on large language models (LLMs) for generative AI. In reality, LLMs are a very small part of the modelling needs of real-world production deployments of AI and decision-making for businesses. With the new offering, SAS is moving beyond LLMs and delivering industry-proven deterministic AI models for use cases spanning fraud detection, supply chain optimisation, entity management, document conversation, health care payment integrity and more.
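The contrast with LLMs can be made concrete: a deterministic model gives the same, auditable answer for the same input every time. The rule-based fraud scorer below is a hypothetical sketch of that property — the rules and weights are invented for illustration and are not SAS's actual models:

```python
# Hypothetical illustration of "deterministic AI" as contrasted with LLMs:
# the same transaction always yields the same, auditable score. The rules
# and weights below are invented for the example, not SAS's real models.

RULES = [
    ("amount_over_10k", lambda t: t["amount"] > 10_000, 0.4),
    ("foreign_card",    lambda t: t["country"] != t["card_country"], 0.3),
    ("night_purchase",  lambda t: t["hour"] < 5, 0.2),
]

def fraud_score(txn):
    """Return (score, names of rules that fired) — fully reproducible."""
    fired = [(name, w) for name, pred, w in RULES if pred(txn)]
    return round(sum(w for _, w in fired), 2), [name for name, _ in fired]

txn = {"amount": 12_000, "country": "GB", "card_country": "US", "hour": 3}
print(fraud_score(txn))  # (0.9, ['amount_over_10k', 'foreign_card', 'night_purchase'])
```

Because every decision traces back to named rules, such models are straightforward to audit — a property generative models cannot offer out of the box.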

Unlike traditional AI implementations that can be cumbersome and time-consuming, SAS’ industry-specific models are engineered for quick integration, enabling organisations to operationalise trustworthy AI technology and accelerate the realisation of tangible benefits and trusted results.

Expanding market footprint

Organisations are facing pressure to compete effectively and are looking to AI to gain an edge. At the same time, staffing data science teams has never been more challenging due to AI skills shortages. Consequently, businesses are demanding agility in using AI to solve problems and require flexible AI solutions to quickly drive business outcomes. SAS’ easy-to-use, yet powerful models tuned for the enterprise enable organisations to benefit from a half-century of SAS’ leadership across industries.

Delivering industry models as packaged offerings is one outcome of SAS’ commitment of $1 billion to AI-powered industry solutions. As outlined in the May 2023 announcement, the investment in AI builds on SAS’ decades-long focus on providing packaged solutions to address industry challenges in banking, government, health care and more.

Udo Sglavo, VP for AI and Analytics, SAS, said: “Models are the perfect complement to our existing solutions and SAS Viya platform offerings and cater to diverse business needs across various audiences, ensuring that innovation reaches every corner of our ecosystem. 

“By tailoring our approach to understanding specific industry needs, our frameworks empower businesses to flourish in their distinctive environments.”

Bringing AI to the masses

SAS is democratising AI by offering out-of-the-box, lightweight AI models – making AI accessible regardless of skill set – starting with an AI assistant for warehouse space optimisation. Leveraging technology like large language models, these assistants cater to nontechnical users, translating interactions into optimised workflows seamlessly and aiding in faster planning decisions.
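SAS has not published the assistant's internals, but the kind of optimisation such a tool might drive can be sketched with a classic heuristic — first-fit-decreasing bin packing of pallets into storage bays. All names and numbers below are illustrative:

```python
# Toy stand-in for the optimisation step a warehouse-space assistant might
# drive: first-fit-decreasing bin packing, assigning pallets to bays.
# Purely illustrative — SAS has not published the assistant's internals.

def pack(pallet_sizes, bay_capacity):
    """Greedily place pallets (largest first) into the first bay that fits."""
    bays = []  # each bay is a list of pallet sizes
    for size in sorted(pallet_sizes, reverse=True):
        for bay in bays:
            if sum(bay) + size <= bay_capacity:
                bay.append(size)
                break
        else:
            bays.append([size])  # open a new bay when nothing fits
    return bays

bays = pack([4, 8, 1, 4, 2, 1], bay_capacity=10)
print(len(bays))  # 2 bays suffice for these pallets
```

The assistant's value, as described, lies in front of this step: translating a nontechnical user's natural-language request into inputs for an optimiser like the one above.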

Sglavo said: “SAS Models provide organisations with flexible, timely and accessible AI that aligns with industry challenges.

“Whether you’re embarking on your AI journey or seeking to accelerate the expansion of AI across your enterprise, SAS offers unparalleled depth and breadth in addressing your business’s unique needs.”

The first SAS Models are expected to be generally available later this year.



]]>
https://www.artificialintelligence-news.com/news/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/feed/ 0
Hugging Face launches Idefics2 vision-language model https://www.artificialintelligence-news.com/news/hugging-face-launches-idefics2-vision-language-model/ https://www.artificialintelligence-news.com/news/hugging-face-launches-idefics2-vision-language-model/#respond Tue, 16 Apr 2024 11:04:20 +0000 https://www.artificialintelligence-news.com/?p=14686 Hugging Face has announced the release of Idefics2, a versatile model capable of understanding and generating text responses based on both images and texts. The model sets a new benchmark for answering visual questions, describing visual content, story creation from images, document information extraction, and even performing arithmetic operations based on visual input. Idefics2 leapfrogs […]

The post Hugging Face launches Idefics2 vision-language model appeared first on AI News.

]]>
Hugging Face has announced the release of Idefics2, a versatile model capable of understanding and generating text responses based on both images and texts. The model sets a new benchmark for answering visual questions, describing visual content, story creation from images, document information extraction, and even performing arithmetic operations based on visual input.

Idefics2 leapfrogs its predecessor, Idefics1, with just eight billion parameters and the versatility afforded by its open license (Apache 2.0), along with remarkably enhanced Optical Character Recognition (OCR) capabilities.

The model not only showcases exceptional performance in visual question answering benchmarks but also holds its ground against far larger contemporaries such as LLava-Next-34B and MM1-30B-chat.

Central to Idefics2’s appeal is its integration with Hugging Face’s Transformers from the outset, ensuring ease of fine-tuning for a broad array of multimodal applications. For those eager to dive in, models are available for experimentation on the Hugging Face Hub.

A standout feature of Idefics2 is its comprehensive training philosophy, blending openly available datasets including web documents, image-caption pairs, and OCR data. Furthermore, it introduces an innovative fine-tuning dataset dubbed ‘The Cauldron,’ amalgamating 50 meticulously curated datasets for multifaceted conversational training.

Idefics2 exhibits a refined approach to image manipulation, maintaining native resolutions and aspect ratios—a notable deviation from conventional resizing norms in computer vision. Its architecture benefits significantly from advanced OCR capabilities, adeptly transcribing textual content within images and documents, and boasts improved performance in interpreting charts and figures.
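The aspect-ratio point is simple arithmetic: instead of squashing every image to a fixed square, the longest edge is capped and both dimensions are scaled together. The sketch below illustrates the idea; the 980-pixel cap is an assumption for the example, not necessarily Idefics2's exact preprocessing:

```python
# Illustrates the aspect-ratio-preserving idea: fit an image inside a
# maximum edge length without distorting it, instead of forcing a square.
# The 980-pixel cap is an assumption for this example, not necessarily
# Idefics2's actual preprocessing configuration.

def fit_within(width, height, longest_edge=980):
    scale = min(longest_edge / width, longest_edge / height, 1.0)
    return round(width * scale), round(height * scale)

print(fit_within(1960, 980))  # (980, 490): halved, ratio preserved
print(fit_within(640, 480))   # (640, 480): already small enough, untouched
```

Preserving the native ratio matters most for OCR and document understanding, where squashing a page to a square visibly distorts the text the model must read.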

Simplifying the integration of visual features into the language backbone marks a shift from its predecessor’s architecture, with the adoption of a learned Perceiver pooling and MLP modality projection enhancing Idefics2’s overall efficacy.
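Perceiver-style pooling can be sketched in a few lines: a small set of learned query vectors cross-attends to however many image features the vision encoder produces, returning a fixed-size summary for the language backbone. The dimensions below are illustrative, not Idefics2's actual configuration:

```python
import numpy as np

# Minimal numpy sketch of Perceiver-style pooling: a small set of learned
# query vectors cross-attends to a variable number of image features and
# returns a fixed-size summary. All sizes here are illustrative.

rng = np.random.default_rng(0)
n_img_tokens, n_latents, dim = 57, 64, 32   # arbitrary example sizes

image_feats = rng.normal(size=(n_img_tokens, dim))   # from the vision encoder
latent_queries = rng.normal(size=(n_latents, dim))   # learned parameters

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Single-head cross-attention: queries = latents, keys/values = image tokens.
attn = softmax(latent_queries @ image_feats.T / np.sqrt(dim))
pooled = attn @ image_feats

print(pooled.shape)  # (64, 32): fixed size regardless of n_img_tokens
```

The payoff is that the language model always receives the same small number of visual tokens, however large or oddly shaped the input image was.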

This advancement in vision-language models opens up new avenues for exploring multimodal interactions, with Idefics2 poised to serve as a foundational tool for the community. Its performance enhancements and technical innovations underscore the potential of combining visual and textual data in creating sophisticated, contextually-aware AI systems.

For enthusiasts and researchers looking to leverage Idefics2’s capabilities, Hugging Face provides a detailed fine-tuning tutorial.

See also: OpenAI makes GPT-4 Turbo with Vision API generally available



]]>
https://www.artificialintelligence-news.com/news/hugging-face-launches-idefics2-vision-language-model/feed/ 0