machine learning Archives - AI News

Spot AI introduces the world’s first universal AI agent builder for security cameras

Spot AI has introduced Iris, which the company describes as the world’s first universal video AI agent builder for enterprise camera systems.

The tool allows businesses to create customised AI agents through a conversational interface, making it easier to monitor and act on video data from physical settings without the need for technical expertise.

Designed for industries like manufacturing, logistics, retail, construction, and healthcare, Iris builds on Spot AI’s earlier launch of out-of-the-box Video AI Agents for safety, security, and operations. While those prebuilt agents focus on common use cases, Iris gives organisations the flexibility to train agents for more specific, business-critical scenarios.

According to Spot AI, users can build video agents in a matter of minutes. The system allows training through reinforcement—using examples of what the AI should and shouldn’t detect—and can be configured to trigger real-world responses like shutting down equipment, locking doors, or generating alerts.
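
Spot AI has not published a developer API for Iris, so the short Python sketch below is purely illustrative: it mirrors the workflow described above, in which an agent is defined by example clips of what it should and should not flag, plus an action to trigger on detection. Every class, field, and file name here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class VideoAgent:
    """Toy stand-in for an Iris-style agent definition (all names are invented)."""
    name: str
    positive_examples: list = field(default_factory=list)   # clips the agent should flag
    negative_examples: list = field(default_factory=list)   # clips it should ignore
    on_detect: str = "send_alert"                            # e.g. "lock_door", "shut_down_line"

# Example: a leak detector defined by a handful of example clips, wired to a response.
leak_agent = VideoAgent(
    name="fluid-leak-detector",
    positive_examples=["clip_014.mp4", "clip_022.mp4"],
    negative_examples=["clip_009.mp4"],
    on_detect="shut_down_line",
)
print(leak_agent)
```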

CEO and Co-Founder Rish Gupta said the tool dramatically shortens the time required to create specialised video detection systems.

“What used to take months of development now happens in minutes,” Gupta explained. “Before Iris, creating specialised video detection required dedicated AI/ML teams with advanced degrees, thousands of annotated images, and 8 weeks of complex development. Iris puts that same power in the hands of any business leader through simple conversation with 8 minutes and 20 training images.”

Examples from real-world settings

Spot AI highlighted a variety of industry-specific use cases that Iris could support:

  • Manufacturing: Detecting product backups or fluid leaks, with automatic responses based on severity.
  • Warehousing: Spotting unsafe stacking of boxes or pallets to prevent accidents.
  • Retail: Monitoring shelf stock levels and generating alerts for restocking.
  • Healthcare: Distinguishing between staff and patients wearing similar uniforms to optimise traffic flow and safety.
  • Security: Identifying tools like bolt cutters in parking areas to address evolving security threats.
  • Safety compliance: Verifying whether workers are wearing required safety gear on-site.

Video AI agents continuously monitor critical areas and help teams respond quickly to safety hazards, operational inefficiencies, and security issues. With Iris, those agents can be developed and modified through natural language interaction, reducing the need for engineering support and making video insights more accessible across departments.

Looking ahead

Iris is part of Spot AI’s broader effort to make video data more actionable in physical environments. The company plans to discuss the tool and its capabilities at Google Cloud Next, where Rish Gupta is scheduled to speak during a media roundtable on April 9.

(Image by Spot AI)

See also: ChatGPT hits record usage after viral Ghibli feature—Here are four risks to know first

Tony Blair Institute AI copyright report sparks backlash

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators—noting its use in various fields from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance, but ultimately led to societal adaptation and human ingenuity prevailing.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces.

Furthermore, the Tony Blair Institute explores the challenges of governing an opt-out policy, implementation problems with opt-outs, making opt-outs useful and accessible, and tackling the diffusion problem. AI summaries and the problems they present regarding identity are also addressed, along with defensive tools as a partial solution and solving licensing problems.

The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents

How AI helped refine Hungarian accents in The Brutalist

When it comes to movies buzzing with Oscar potential, Brady Corbet’s The Brutalist is a standout this awards season.

The visually stunning drama transports viewers to the post-World War II era, unravelling the story of László Tóth, played by Adrien Brody. Tóth, a fictional Hungarian-Jewish architect, starts over in the United States after being forced to leave his family behind as he emigrates.

Beyond its vintage allure, something modern brews in the background: the use of AI. Specifically, AI was employed to refine Brody’s and co-star Felicity Jones’ Hungarian pronunciation. The decision has sparked lively debates about technology’s role in film-making.

The role of AI in The Brutalist

According to Dávid Jancsó, the film’s editor, the production team turned to Respeecher, an AI software developed by a Ukrainian company, to tweak the actors’ Hungarian dialogue. Speaking to RedShark News (as cited by Mashable SEA), Jancsó explained that Hungarian – a Uralic language known for its challenging sounds – was a significant hurdle for the actors, despite their talent and dedication.

Respeecher’s software isn’t magic, but just a few years ago, it would have seemed wondrous. It creates a voice model based on a speaker’s characteristics and adjusts specific elements, like pronunciation. In this case, it was used to fine-tune the letter and vowel sounds that Brody and Jones found tricky. Most of the corrections were minimal, with Jancsó himself providing some replacement sounds to preserve the authenticity of the performances. “Most of their Hungarian dialogue has a part of me talking in there,” he joked, emphasising the care taken to maintain the actors’ original delivery.

Respeecher: AI behind the scenes

This is not Respeecher’s first foray into Hollywood. The software is known for restoring iconic voices like that of Darth Vader for the Obi-Wan Kenobi series, and has recreated Edith Piaf’s voice for an upcoming biopic. Outside of film, Respeecher has helped to preserve endangered languages like Crimean Tatar.

For The Brutalist, the AI tool wasn’t just a luxury – it was a time and budget saver. With so much dialogue in Hungarian, editing every line by hand would have been painstaking work. Jancsó said that using AI sped up the process significantly, an important factor given the film’s modest $10 million budget.

Beyond voice: AI’s other roles in the film

AI was also used in other aspects of the production, for example to generate some of Tóth’s architectural drawings and to complete buildings in the film’s Venice Biennale sequence. However, director Corbet has clarified that these images were not fully AI-generated; instead, the AI was used for specific background elements.

Corbet and Jancsó have been candid about their perspectives on AI in film-making. Jancsó sees it as a valuable tool, saying, “There’s nothing in the film using AI that hasn’t been done before. It just makes the process a lot faster.” Corbet added that the software’s purpose was to enhance authenticity, not replace the actors’ hard work.

A broader conversation

The debate surrounding AI in the film industry isn’t new. From script-writing to music production, concerns about generative AI’s impact were central to the 2023 Writers Guild of America (WGA) and SAG-AFTRA strikes. Although agreements have been reached to regulate the use of AI, the topic remains a hot-button issue.

The Brutalist awaits a possible Oscar nomination. From its storyline to its cinematic style, the film wears its ambition on its sleeve. It’s not just a celebration of the postwar Brutalist architectural movement; it’s also a nod to classic American cinema. Shot in the rarely used VistaVision format, the film captures the grandeur of mid-20th-century film-making. Adding to its nostalgic charm, it includes a 15-minute intermission during its epic three-and-a-half-hour runtime.

Yet the use of AI has given a new dimension to the ongoing conversation about AI in the creative industry. Whether people see AI as a betrayal of craftsmanship or an exciting innovative tool that can add to a final creation, one thing is certain: AI continues to transform how stories are delivered on screen.

See also: AI music sparks new copyright battle in US courts

Copyright concerns create need for a fair alternative in AI sector

When future generations look back at the rise of artificial intelligence technologies, the year 2025 may be remembered as a major turning point, when the industry took concrete steps towards greater inclusion, and embraced decentralised frameworks that recognise and fairly compensate every stakeholder.

The growth of AI has already sparked transformation in multiple industries, but the pace of uptake has also led to concerns around data ownership, privacy and copyright infringement. Because AI is centralised with the most powerful models controlled by corporations, content creators have largely been sidelined.

OpenAI, the world’s most prominent AI company, has already admitted that’s the case. In January 2024, it told the UK’s House of Lords Communications and Digital Select Committee that it would not have been able to create its iconic chatbot, ChatGPT, without training it on copyrighted material.

OpenAI trained ChatGPT on everything that was posted on the public internet prior to 2023, but the people who created that content – much of which is copyrighted – have not been paid any compensation; a major source of contention.

There’s an opportunity for decentralised AI projects like that proposed by the ASI Alliance to offer an alternative way of AI model development. The Alliance is building a framework that gives content creators a method to retain control over their data, along with mechanisms for fair reward should they choose to share their material with AI model makers. It’s a more ethical basis for AI development, and 2025 could be the year it gets more attention.

AI’s copyright conundrum

OpenAI isn’t the only AI company that’s been accused of copyright infringement. The vast majority of AI models, including those that purport to be open-source, like Meta Platforms’ Llama 3 model, are guilty of scraping the public internet for training data.

AI developers routinely help themselves to whatever content they find online, ignoring the fact that much of the material is copyrighted. Copyright laws are designed to protect the creators of original works, like books, articles, songs, software, artworks and photos, from being exploited, and make unauthorised use of such materials illegal.

The likes of OpenAI, Meta, Anthropic, StabilityAI, Perplexity AI, Cohere, and AI21 Labs get round the law by claiming ‘fair use’ – a reference to an ambiguous clause in copyright law that allows the limited use of protected content without the need to obtain permission from the creator. But there’s no clear definition of what actually constitutes ‘fair use’, and many authors claim that AI threatens their livelihoods.

Many content creators have resorted to legal action, with a prominent lawsuit filed by the New York Times against OpenAI. In the suit, the Times alleges that OpenAI committed copyright infringement when it ingested thousands of articles to train its large language models. The media organisation claims that the practice is unlawful, as ChatGPT is a competing product that aims to ‘steal audience’ from the Times website.

The lawsuit has led to a debate – should AI companies be allowed to keep consuming any content on the internet, or should they be compelled to ask for permission first, and compensate those who create training data?

Consensus appears to be shifting toward the latter. For instance, the late former OpenAI researcher Suchir Balaji told the Times in an interview that he was tasked with leading the collection of data to train ChatGPT’s models. He said his job involved scraping content from every possible source, including user-generated posts on social media, pirated book archives, and articles behind paywalls. All content was scraped without permission being sought, he said.

Balaji said he initially bought OpenAI’s argument that if the information was posted online and freely available, scraping constituted fair use. However, he said that later, he began to question the stance after realising that products like ChatGPT could harm content creators. Ultimately, he said, he could no longer justify the practice of scraping data, resigning from the company in the summer of 2024.

A growing case for decentralised AI

Balaji’s departure from OpenAI appears to coincide with a realisation among AI companies that the practice of helping themselves to any content found online is unsustainable, and that content creators need legal protection.

Evidence of this comes from the spate of content licensing deals announced over the last year. OpenAI has agreed deals with a number of high-profile content publishers, including the Financial Times, NewsCorp, Conde Nast, Axel Springer, Associated Press, and Reddit, which hosts millions of pages of user-generated content on its forums. Other AI developers, like Google, Microsoft and Meta, have forged similar partnerships.

But it remains to be seen if these arrangements will prove to be satisfactory, especially if AI firms generate billions of dollars in revenue. While the terms of the content licensing deals haven’t been made public, The Information claims they are worth a few million dollars per year at most. Considering that OpenAI’s former chief scientist Ilya Sutskever was paid a salary of $1.9 million in 2016, the money offered to publishers may fall short of what content is really worth.

There’s also the fact that millions of smaller content creators – bloggers, social media influencers, and the like – continue to be excluded from such deals.

The arguments around AI’s infringement of copyright are likely to last years without being resolved, and the legal ambiguity around data scraping, along with the growing recognition among practitioners that such practices are unethical, are helping to strengthen the case for decentralised frameworks.

Decentralised AI frameworks provide developers with a more principled model for AI training where the rights of content creators are respected, and where every contributor can be rewarded fairly.

Sitting at the heart of decentralised AI is blockchain, which enables the development, training, deployment, and governance of AI models across distributed, global networks owned by everyone. This means everyone can participate in building AI systems that are transparent, as opposed to centralised, corporate-owned AI models that are often described as “black boxes.”

Just as the arguments around AI copyright infringement intensify, decentralised AI projects are making inroads; this year promises to be an important one in the shift towards more transparent and ethical AI development.

Decentralised AI in action

Late in 2024, three blockchain-based AI startups formed the Artificial Superintelligence (ASI) Alliance, an organisation working towards the creation of a “decentralised superintelligence” to power advanced AI systems anyone can use.

The ASI Alliance says it’s the largest open-source, independent player in AI research and development. It was created by SingularityNET, which has developed a decentralised AI network and compute layer; Fetch.ai, focused on building autonomous AI agents that can perform complex tasks without human assistance; and Ocean Protocol, the creator of a transparent exchange for AI training data.

The ASI Alliance’s mission is to provide an alternative to centralised AI systems, emphasising open-source and decentralised platforms, including data and compute resources.

To protect content creators, the ASI Alliance is building an exchange framework based on Ocean Protocol’s technology, where anyone can contribute data to be used for AI training. Users will be able to upload data to the blockchain-based system and retain ownership of it, earning rewards whenever it’s accessed by AI models or developers. Others will be able to contribute by helping to label and annotate data to make it more accessible to AI models, and earn rewards for performing this work. In this way, the ASI Alliance promotes a more ethical way for developers to obtain the training data they need to create AI models.
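
The ASI Alliance’s contract and token mechanics are not spelled out here, so the following is only a toy Python model of the access-based reward idea described above: contributors register datasets, keep ownership, and accrue rewards each time the data is accessed, with annotators earning a smaller share. On-chain, this bookkeeping would live in smart contracts rather than a Python dictionary, and all names and reward values are invented.

```python
from collections import defaultdict

class DataExchange:
    """Toy ledger: contributors keep ownership and earn rewards per access (illustrative only)."""

    def __init__(self, reward_per_access: float = 1.0):
        self.owners = {}                    # dataset_id -> contributing owner
        self.labellers = defaultdict(set)   # dataset_id -> contributors who annotated it
        self.balances = defaultdict(float)  # contributor -> accrued rewards
        self.reward_per_access = reward_per_access

    def contribute(self, dataset_id: str, owner: str) -> None:
        self.owners[dataset_id] = owner                 # ownership stays with the contributor

    def annotate(self, dataset_id: str, labeller: str) -> None:
        self.labellers[dataset_id].add(labeller)        # labelling work is tracked too

    def access(self, dataset_id: str) -> None:
        # Each time a model developer uses the dataset, the owner and labellers are rewarded.
        self.balances[self.owners[dataset_id]] += self.reward_per_access
        for labeller in self.labellers[dataset_id]:
            self.balances[labeller] += 0.1 * self.reward_per_access

exchange = DataExchange()
exchange.contribute("field-sensor-logs", "alice")
exchange.annotate("field-sensor-logs", "bob")
exchange.access("field-sensor-logs")
print(dict(exchange.balances))  # {'alice': 1.0, 'bob': 0.1}
```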

Shortly after forming, the Alliance launched the ASI<Train/> initiative, focused on the development of more transparent and ethical “domain-specific models” specialising in areas like robotics, science, and medicine. Its first model is Cortex, which is said to be modeled on the human brain and designed to power autonomous robots in real-world environments.

The specialised models differ from general-purpose LLMs, which are great at answering questions and creating content and images, but less useful when asked to solve more complex problems that require significant expertise. But creating specialised models will be a community effort: the ASI Alliance needs industry experts to provide the necessary data to train models.

Fetch.ai’s CEO Humayun Sheikh said the ASI Alliance’s decentralised ownership model creates an ecosystem “where individuals support groundbreaking technology and share in value creation.”

Users without specific knowledge can buy and “stake” FET tokens to become part-owners of decentralised AI models and earn a share of the revenue they generate when they’re used by AI applications.

For content creators, the benefits of a decentralised approach to AI are clear. ASI’s framework lets them keep control of their data and track when it’s used by AI models. It integrates mechanisms encoded in smart contracts to ensure that everyone is fairly compensated. Participants earn rewards for contributing computational resources, data, and expertise, or by supporting the ecosystem through staking.

The ASI Alliance operates a model of decentralised governance, where token holders can vote on key decisions to ensure the project evolves to benefit stakeholders, rather than the shareholders of corporations.

AI for everyone is a necessity

The progress made by decentralised AI is exciting, and it comes at a time when it’s needed. AI is evolving quickly, and centralised AI companies are currently at the forefront of adoption – which, for many, is a major cause of concern.

Given the transformative potential of AI and the risks it poses to individual livelihoods, it’s important that the industry shifts to more responsible models. AI systems should be developed for the benefit of everyone, and this means rewarding every contributor for their participation. Only decentralised AI systems have shown they can do this.

Decentralised AI is not just a nice-to-have but a necessity, representing the only viable alternative capable of breaking big tech’s stranglehold on creativity.

Machine unlearning: Researchers make AI models ‘forget’ data

Researchers from the Tokyo University of Science (TUS) have developed a method to enable large-scale AI models to selectively “forget” specific classes of data.

Progress in AI has provided tools capable of revolutionising various domains, from healthcare to autonomous driving. However, as technology advances, so do its complexities and ethical considerations. 

The paradigm of large-scale pre-trained AI systems, such as OpenAI’s ChatGPT and CLIP (Contrastive Language–Image Pre-training), has reshaped expectations for machines. These highly generalist models, capable of handling a vast array of tasks with consistent precision, have seen widespread adoption for both professional and personal use.  

However, such versatility comes at a hefty price. Training and running these models demands prodigious amounts of energy and time, raising sustainability concerns, as well as requiring cutting-edge hardware significantly more expensive than standard computers. Compounding these issues is that generalist tendencies may hinder the efficiency of AI models when applied to specific tasks.  

For instance, “in practical applications, the classification of all kinds of object classes is rarely required,” explains Associate Professor Go Irie, who led the research. “For example, in an autonomous driving system, it would be sufficient to recognise limited classes of objects such as cars, pedestrians, and traffic signs.

“We would not need to recognise food, furniture, or animal species. Retaining classes that do not need to be recognised may decrease overall classification accuracy, as well as cause operational disadvantages such as the waste of computational resources and the risk of information leakage.”  

A potential solution lies in training models to “forget” redundant or unnecessary information—streamlining their processes to focus solely on what is required. While some existing methods already cater to this need, they tend to assume a “white-box” approach where users have access to a model’s internal architecture and parameters. Oftentimes, however, users get no such visibility.  

“Black-box” AI systems, more common due to commercial and ethical restrictions, conceal their inner mechanisms, rendering traditional forgetting techniques impractical. To address this gap, the research team turned to derivative-free optimisation—an approach that sidesteps reliance on the inaccessible internal workings of a model.  

Advancing through forgetting

The study, set to be presented at the Neural Information Processing Systems (NeurIPS) conference in 2024, introduces a methodology dubbed “black-box forgetting.”

The process modifies the input prompts (text instructions fed to models) in iterative rounds to make the AI progressively “forget” certain classes. Associate Professor Irie collaborated on the work with co-authors Yusuke Kuwana and Yuta Goto (both from TUS), alongside Dr Takashi Shibata from NEC Corporation.  

For their experiments, the researchers targeted CLIP, a vision-language model with image classification abilities. The method they developed is built upon the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary algorithm designed to optimise solutions step-by-step. In this study, CMA-ES was harnessed to evaluate and hone prompts provided to CLIP, ultimately suppressing its ability to classify specific image categories.

As the project progressed, challenges arose. Existing optimisation techniques struggled to scale up for larger volumes of targeted categories, leading the team to devise a novel parametrisation strategy known as “latent context sharing.”  

This approach breaks latent context – a representation of information generated by prompts – into smaller, more manageable pieces. By allocating certain elements to a single token (word or character) while reusing others across multiple tokens, they dramatically reduced the problem’s complexity. Crucially, this made the process computationally tractable even for extensive forgetting applications.  
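
The study’s code is not reproduced here, so the snippet below is only a rough Python sketch of the general shape of derivative-free prompt optimisation with CMA-ES (using the standard pycma package, which is not necessarily what the authors used): a candidate latent-context vector is scored through a black-box call to the classifier, and the evolution strategy searches for one that suppresses the forget classes while preserving the others. The `evaluate_prompt` surrogate, the vector size, and the fitness weighting are all invented for illustration and do not reproduce the latent-context-sharing parametrisation described above.

```python
import numpy as np
import cma  # pycma: pip install cma

DIM = 16  # size of the latent-context vector being optimised (illustrative)

def evaluate_prompt(z: np.ndarray):
    """Stand-in for a black-box query to the deployed CLIP-style model.

    Returns (accuracy on classes to forget, accuracy on classes to keep).
    A synthetic surrogate is used so the loop runs end-to-end; in practice this
    would submit the candidate prompt to the model and measure both accuracies.
    """
    forget_acc = 1.0 / (1.0 + np.exp(-z[:8].sum()))
    keep_acc = 1.0 / (1.0 + np.exp(-(1.0 + z[8:].sum())))
    return forget_acc, keep_acc

def fitness(z) -> float:
    forget_acc, keep_acc = evaluate_prompt(np.asarray(z))
    # Lower is better: drive accuracy on the forget classes down while
    # penalising any drop below 90% accuracy on the classes to keep.
    return forget_acc + max(0.0, 0.9 - keep_acc)

es = cma.CMAEvolutionStrategy(DIM * [0.0], 0.5, {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()                                   # sample candidate latent contexts
    es.tell(candidates, [fitness(c) for c in candidates])   # feed back their scores

print("best fitness found:", es.result.fbest)
```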

Through benchmark tests on multiple image classification datasets, the researchers validated the efficacy of black-box forgetting—achieving the goal of making CLIP “forget” approximately 40% of target classes without direct access to the AI model’s internal architecture.

This research marks the first successful attempt to induce selective forgetting in a black-box vision-language model, demonstrating promising results.  

Benefits of helping AI models forget data

Beyond its technical ingenuity, this innovation holds significant potential for real-world applications where task-specific precision is paramount.

Simplifying models for specialised tasks could make them faster, more resource-efficient, and capable of running on less powerful devices—hastening the adoption of AI in areas previously deemed unfeasible.  

Another key use lies in image generation, where forgetting entire categories of visual context could prevent models from inadvertently creating undesirable or harmful content, be it offensive material or misinformation.  

Perhaps most importantly, this method addresses one of AI’s greatest ethical quandaries: privacy.

AI models, particularly large-scale ones, are often trained on massive datasets that may inadvertently contain sensitive or outdated information. Requests to remove such data—especially in light of laws advocating for the “Right to be Forgotten”—pose significant challenges.

Retraining entire models to exclude problematic data is costly and time-intensive, yet the risks of leaving it unaddressed can have far-reaching consequences.

“Retraining a large-scale model consumes enormous amounts of energy,” notes Associate Professor Irie. “‘Selective forgetting,’ or so-called machine unlearning, may provide an efficient solution to this problem.”  

These privacy-focused applications are especially relevant in high-stakes industries like healthcare and finance, where sensitive data is central to operations.  

As the global race to advance AI accelerates, the Tokyo University of Science’s black-box forgetting approach charts an important path forward—not only by making the technology more adaptable and efficient but also by adding significant safeguards for users.  

While the potential for misuse remains, methods like selective forgetting demonstrate that researchers are proactively addressing both ethical and practical challenges.  

See also: Why QwQ-32B-Preview is the reasoning AI to watch

New AI training techniques aim to overcome current challenges

OpenAI and other leading AI companies are developing new training techniques to overcome limitations of current methods. Addressing unexpected delays and complications in the development of larger, more powerful language models, these fresh techniques focus on human-like behaviour to teach algorithms to ‘think’.

According to a dozen AI researchers, scientists, and investors, the new training techniques – which underpin OpenAI’s recent ‘o1’ model (formerly Q* and Strawberry) – have the potential to transform the landscape of AI development. The reported advances may also change the types and quantities of resources AI companies will need on an ongoing basis, including specialised hardware and the energy required to develop AI models.

The o1 model is designed to approach problems in a way that mimics human reasoning and thinking, breaking down numerous tasks into steps. The model also utilises specialised data and feedback provided by experts in the AI industry to enhance its performance.

Since ChatGPT was unveiled by OpenAI in 2022, there has been a surge in AI innovation, and many technology companies claim existing AI models require expansion, be it through greater quantities of data or improved computing resources. Only then can AI models consistently improve.

Now, AI experts have reported limitations in scaling up AI models. The 2010s were a revolutionary period for scaling, but Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, says that gains from scaling up the training of AI models – particularly their grasp of language structures and patterns – have levelled off.

“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Scaling the right thing matters more now,” he said.

In recent times, AI lab researchers have experienced delays and challenges in developing and releasing large language models (LLMs) more powerful than OpenAI’s GPT-4 model.

First, there is the cost of training large models, often running into tens of millions of dollars. And because of complications that arise – such as hardware failures caused by system complexity – a final analysis of how these models run can take months.

In addition to these challenges, training runs require substantial amounts of energy, often resulting in power shortages that can disrupt processes and impact the wider electricity grid. Another issue is the colossal amount of data large language models use, so much so that AI models have reportedly used up all accessible data worldwide.

Researchers are exploring a technique known as ‘test-time compute’ to improve current AI models during the training or inference phases. The method can involve generating multiple candidate answers in real time and selecting the best among them, allowing the model to allocate greater processing resources to difficult tasks that require human-like decision-making and reasoning. The aim is to make the model more accurate and capable.
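
OpenAI has not published the internals of o1, so the snippet below is only a minimal best-of-n sketch of the general test-time compute idea: sample several candidate answers at inference time and let a scoring function keep the strongest, trading extra compute for quality. Both `generate` and `score` are hypothetical stand-ins for a language model call and a verifier or reward model.

```python
import random

def generate(prompt: str) -> str:
    # Stand-in for a language-model call that returns one sampled answer.
    return f"candidate-{random.randint(0, 9999)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    # Stand-in for a verifier or reward model that rates an answer's quality.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend extra inference-time compute: sample n answers, keep the best-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

print(best_of_n("Plan the steps needed to solve this puzzle."))
```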

Noam Brown, a researcher at OpenAI who helped develop the o1 model, shared an example of how a new approach can achieve surprising results. At the TED AI conference in San Francisco last month, Brown explained that “having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer.”

Rather than simply increasing the model size and training time, this can change how AI models process information and lead to more powerful, efficient systems.

It is reported that other AI labs have been developing versions of the o1 technique; these include xAI, Google DeepMind, and Anthropic. Competition in the AI world is nothing new, but we could see a significant impact on the AI hardware market as a result of the new techniques. Companies like Nvidia, which currently dominates the supply of AI chips thanks to high demand for its products, may be particularly affected by updated AI training techniques.

Nvidia became the world’s most valuable company in October, and its rise in fortunes can be largely attributed to its chips’ use in AI arrays. New techniques may impact Nvidia’s market position, forcing the company to adapt its products to meet the evolving AI hardware demand. Potentially, this could open more avenues for new competitors in the inference market.

A new age of AI development may be on the horizon, driven by evolving hardware demands and more efficient training methods such as those deployed in the o1 model. The future of both AI models and the companies behind them could be reshaped, unlocking unprecedented possibilities and greater competition.

See also: Anthropic urges AI regulation to avoid catastrophes

NVIDIA AI Summit Japan: NVIDIA’s role in Japan’s big AI ambitions

Japan is on a mission to become a global AI powerhouse, and it’s starting with some impressive advances in AI-driven language models.

Japanese technology experts are developing advanced models that grasp the unique nuances of the Japanese language and culture – essential for industries such as healthcare, finance, and manufacturing, where precision is key.

But this effort isn’t Japan’s alone. Consulting giants like Accenture, Deloitte, EY Japan, FPT, Kyndryl, and TCS Japan are partnering with NVIDIA to create AI innovation hubs across the country. The centres are using NVIDIA’s AI software and specialised Japanese language models to build tailored AI solutions, helping industries boost productivity in a digital workforce. The goal? To get Japanese companies fully on board with enterprise and physical AI.

One standout technology supporting the drive is NVIDIA’s Omniverse platform. With Omniverse, Japanese companies can create digital twins—virtual replicas of real-world assets—and test complex AI systems safely before implementing them. This is a game-changer for industries such as manufacturing and robotics, allowing businesses to fine-tune processes without the risk of real-world trial and error. This use of AI is more than just innovation; it represents Japan’s plan for addressing some major challenges ahead.

Japan faces a shrinking workforce as its population ages. With its strengths in robotics and automation, Japan is well-positioned to use AI solutions to bridge the gap. In fact, Japan’s government recently shared its vision of becoming “the world’s most AI-friendly country,” underscoring the role AI is expected to play in the nation’s future.

Supporting this commitment, Japan’s AI market hit $5.9 billion in value this year – a 31.2% growth rate, according to IDC. New AI-focused consulting centres in Tokyo and Kansai give Japanese businesses hands-on access to NVIDIA’s latest technologies, equipping them to solve social challenges and aid economic growth.

Top cloud providers like SoftBank, GMO Internet Group, KDDI, Highreso, Rutilea, and SAKURA Internet are also involved, working with NVIDIA to build AI infrastructure. Backed by Japan’s Ministry of Economy, Trade and Industry, they’re establishing AI data centres across Japan to accelerate growth in robotics, automotive, healthcare, and telecoms.

NVIDIA and SoftBank have also formed a remarkable partnership to build Japan’s most powerful AI supercomputer using NVIDIA’s Blackwell platform. Additionally, SoftBank has tested the world’s first AI and 5G hybrid telecoms network with NVIDIA’s AI Aerial platform, allowing Japan to set a worldwide standard. With these developments, Japan is taking big strides toward establishing itself as a leader in the AI-powered industrial revolution.

(Photo by Andrey Matveev)

See also: NVIDIA’s share price nosedives as antitrust clouds gather

Project Jarvis leak reveals Google’s vision for next-gen Gemini

Google has big hopes for AI, as evidenced by the consistent improvements to its Gemini chatbot in recent months.

Google briefly introduced its vision for a “universal AI agent” aimed to help users with daily tasks at the I/O developer conference in May, hinting that elements of the technology could be incorporated into Gemini soon. Recent insights from The Information have shed more light on its initiative, known internally as Project Jarvis.

Project Jarvis represents a major advancement in AI for Google. Unlike traditional voice assistants that respond to user commands, Jarvis is designed to perform tasks autonomously, navigate the web, and make independent decisions. For instance, Jarvis could manage emails, conduct research, and even schedule appointments, reducing the cognitive load involved in managing digital tasks.

Jarvis’s core objective is to revolutionise how users interact with their devices. Rather than serving as a passive tool awaiting commands, Jarvis would actively engage in real-time task management, positioning it as an AI partner rather than a utility.

For legal professionals, Jarvis could review large volumes of case documents and organise them by relevance, streamlining workflow. Similarly, marketers could use Jarvis to integrate data from numerous sources, allowing them to focus more on strategy and less on administrative work.

The evolution of AI agents such as Jarvis may have an impact on specific job roles. Tasks formerly performed by entry-level administrative personnel may come within the capabilities of AI assistants. However, the shift is likely to generate opportunities in roles that require critical thinking, creativity, and emotional intelligence—qualities not easily replicated by AI.

Industry observers anticipate a shift toward higher-value work, with people concentrating less on routine tasks and more on areas that promote innovation and strategic decision-making.

Privacy and security considerations

Project Jarvis raises significant privacy and security issues due to its ability to access sensitive information such as emails and documents. To prevent unauthorised access, Google will most likely deploy enhanced encryption, strict user restrictions, and possibly multi-factor authentication. Robust cybersecurity measures will also be essential to protect Jarvis from external threats.

Surveys indicate that, while AI holds considerable appeal, privacy remains a top concern for many users. Experts recommend measures such as a transparent privacy dashboard that enables users to monitor and control Jarvis’s access to data. To build trust and drive the adoption of AI agents like Jarvis, Google will need to strike a balance between convenience and robust privacy protections.

Enhancing user experience and accessibility

Beyond productivity, Jarvis has the potential to improve accessibility for a wide range of users. For those with disabilities, Jarvis could read web content aloud or use voice commands to assist with form navigation. For less tech-savvy users, Jarvis could simplify digital interactions by handling tasks like locating files or managing settings.

Jarvis could also assist in planning a busy workday or booking a trip by actively supporting task management. Project Jarvis aims to reimagine AI as a supportive digital partner, enhancing the user experience beyond that of a conventional tool.

(Photo by Igor Bumba)

See also: Google advances mobile AI in Pixel 9 smartphones

AI governance gap: 95% of firms haven’t implemented frameworks

Robust governance is essential to mitigate AI risks and maintain responsible systems, but the majority of firms are yet to implement a framework.

Commissioned by Prove AI and conducted by Zogby Analytics, the report polled over 600 CEOs, CIOs, and CTOs from large companies across the US, UK, and Germany. The findings show that 96% of organisations are already utilising AI to support business operations, with the same percentage planning to increase their AI budgets in the coming year.

The primary motivations for AI investment include increasing productivity (82%), improving operational efficiency (73%), enhancing decision-making (65%), and achieving cost savings (60%). The most common AI use cases reported were customer service and support, predictive analytics, and marketing and ad optimisation.

Despite the surge in AI investments, business leaders are acutely aware of the additional risk exposure that AI brings to their organisations. Data integrity and security emerged as the biggest deterrents to implementing new AI solutions.

Executives also reported encountering various AI performance issues, including:

  • Data quality issues (e.g., inconsistencies or inaccuracies): 41%
  • Bias detection and mitigation challenges in AI algorithms, leading to unfair or discriminatory outcomes: 37%
  • Difficulty in quantifying and measuring the return on investment (ROI) of AI initiatives: 28%

While 95% of respondents expressed confidence in their organisation’s current AI risk management practices, the report revealed a significant gap in AI governance implementation.

Only 5% of executives reported that their organisation has implemented any AI governance framework. However, 82% stated that implementing AI governance solutions is a somewhat or extremely pressing priority, with 85% planning to implement such solutions by summer 2025.

The report also found that 82% of participants support an AI governance executive order to provide stronger oversight. Additionally, 65% expressed concern about IP infringement and data security.

Mrinal Manohar, CEO of Prove AI, commented: “Executives are making themselves clear: AI’s long-term efficacy, including providing a meaningful return on the massive investments organisations are currently making, is contingent on their ability to develop and refine comprehensive AI governance strategies.

“The wave of AI-focused legislation going into effect around the world is only increasing the urgency; for the current wave of innovation to continue responsibly, we need to implement clearer guardrails to manage and monitor the data informing AI systems.”

As global regulations like the EU AI Act loom on the horizon, the report underscores the importance of de-risking AI and the work that still needs to be done. Implementing and optimising dedicated AI governance strategies has emerged as a top priority for businesses looking to harness the power of AI while mitigating associated risks.

The findings of this report serve as a wake-up call for organisations to prioritise AI governance as they continue to invest in and deploy AI technologies. Responsible implementation and robust governance frameworks will be key to unlocking the full potential of AI while maintaining trust and compliance.

(Photo by Rob Thompson)

See also: Scoring AI models: Endor Labs unveils evaluation tool

The post AI governance gap: 95% of firms haven’t implemented frameworks appeared first on AI News.

Scoring AI models: Endor Labs unveils evaluation tool https://www.artificialintelligence-news.com/news/scoring-ai-models-endor-labs-evaluation-tool/ Wed, 16 Oct 2024 13:06:26 +0000

Endor Labs has begun scoring AI models based on their security, popularity, quality, and activity.

Dubbed ‘Endor Scores for AI Models,’ this unique capability aims to simplify the process of identifying the most secure open-source AI models currently available on Hugging Face – a platform for sharing Large Language Models (LLMs), machine learning models, and other open-source AI models and datasets – by providing straightforward scores.

The announcement comes as developers increasingly turn to platforms like Hugging Face for ready-made AI models, mirroring the early days of readily-available open-source software (OSS). This new release improves AI governance by enabling developers to “start clean” with AI models, a goal that has so far proved elusive.

Varun Badhwar, Co-Founder and CEO of Endor Labs, said: “It’s always been our mission to secure everything your code depends on, and AI models are the next great frontier in that critical task.

“Every organisation is experimenting with AI models, whether to power particular applications or build entire AI-based businesses. Security has to keep pace, and there’s a rare opportunity here to start clean and avoid risks and high maintenance costs down the road.”

George Apostolopoulos, Founding Engineer at Endor Labs, added: “Everybody is experimenting with AI models right now. Some teams are building brand new AI-based businesses while others are looking for ways to slap a ‘powered by AI’ sticker on their product. One thing is for sure, your developers are playing with AI models.”

However, this convenience does not come without risks. Apostolopoulos warns that the current landscape resembles “the wild west,” with people grabbing models that fit their needs without considering potential vulnerabilities.

Endor Labs’ approach treats AI models as dependencies within the software supply chain

“Our mission at Endor Labs is to ‘secure everything your code depends on,’” Apostolopoulos states. This perspective allows organisations to apply similar risk evaluation methodologies to AI models as they do to other open-source components.

Endor’s tool for scoring AI models focuses on several key risk areas:

  • Security vulnerabilities: Pre-trained models can harbour malicious code or vulnerabilities within model weights, potentially leading to security breaches when integrated into an organisation’s environment.
  • Legal and licensing issues: Compliance with licensing terms is crucial, especially considering the complex lineage of AI models and their training sets.
  • Operational risks: The dependency on pre-trained models creates a complex graph that can be challenging to manage and secure.

To combat these issues, Endor Labs’ evaluation tool applies 50 out-of-the-box checks to AI models on Hugging Face. The system generates an “Endor Score” based on factors such as the number of maintainers, corporate sponsorship, release frequency, and known vulnerabilities.

(Screenshot: Endor Labs' tool for scoring AI models)

Positive factors in the system for scoring AI models include the use of safe weight formats, the presence of licensing information, and high download and engagement metrics. Negative factors encompass incomplete documentation, lack of performance data, and the use of unsafe weight formats.
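
To make this concrete, here is a minimal sketch of how a dependency-style score might combine such signals. It is an illustration only: the field names, weights, and thresholds below are assumptions made for this example, not Endor Labs’ actual checks or weighting.

```python
# Illustrative only: a toy dependency-style score for an open-source AI model.
# The signals and weights are assumptions for demonstration, not Endor Labs' checks.
from dataclasses import dataclass

@dataclass
class ModelSignals:
    maintainers: int            # number of active maintainers
    corporate_sponsor: bool     # backed by an organisation
    releases_last_year: int     # release frequency
    known_vulnerabilities: int  # published vulnerabilities affecting the model
    safe_weight_format: bool    # e.g. safetensors rather than a pickled format
    has_licence_info: bool      # licence metadata is present
    downloads: int              # download/engagement volume

def toy_model_score(s: ModelSignals) -> float:
    """Combine positive and negative signals into a 0-100 score (illustrative weights)."""
    score = 50.0
    # Positive factors: safe weights, licensing, maintenance, and engagement
    score += 10 if s.safe_weight_format else -15
    score += 5 if s.has_licence_info else -10
    score += min(s.maintainers, 5) * 2
    score += 5 if s.corporate_sponsor else 0
    score += min(s.releases_last_year, 12)
    score += 5 if s.downloads > 100_000 else 0
    # Negative factor: each known vulnerability drags the score down sharply
    score -= s.known_vulnerabilities * 10
    return max(0.0, min(100.0, score))

if __name__ == "__main__":
    candidate = ModelSignals(
        maintainers=4, corporate_sponsor=True, releases_last_year=6,
        known_vulnerabilities=1, safe_weight_format=True,
        has_licence_info=True, downloads=250_000,
    )
    print(f"Toy score: {toy_model_score(candidate):.1f}/100")
```

The shape mirrors the description above: bounded positive contributions from maintenance and engagement signals, with unsafe weight formats, missing licence information, and known vulnerabilities pulling the score down.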

A key feature of Endor Scores is its user-friendly approach. Developers don’t need to know specific model names; they can start their search with general questions like “What models can I use to classify sentiments?” or “What are the most popular models from Meta?” The tool then provides clear scores ranking both positive and negative aspects of each model, allowing developers to select the most appropriate options for their needs.

“Your teams are being asked about AI every single day, and they’ll look for the models they can use to accelerate innovation,” Apostolopoulos notes. “Evaluating Open Source AI models with Endor Labs helps you make sure the models you’re using do what you expect them to do, and are safe to use.”

(Photo by Element5 Digital)

See also: China Telecom trains AI model with 1 trillion parameters on domestic chips

The post Scoring AI models: Endor Labs unveils evaluation tool appeared first on AI News.

AI winter: A cycle of hype, disappointment, and recovery https://www.artificialintelligence-news.com/news/ai-winter-cycle-of-hype-disappointment-and-recovery/ Mon, 09 Sep 2024 13:45:35 +0000

The term AI winter refers to a period of funding cuts in AI research and development, often following overhyped expectations that fail to deliver.

With recent generative AI systems falling short of investor promises — from OpenAI’s GPT-4o to Google’s AI-powered overviews — this pattern feels all too familiar today.

Search Engine Land reported that AI winters have historically followed cycles of excitement and disappointment. The first, in the 1970s, was triggered by underwhelming results from ambitious projects in machine translation and speech recognition. Computing power was insufficient and expectations of what computers could achieve were unrealistic, so funding was frozen.

Expert systems showed promise in the 1980s, but the second AI winter arrived when those systems failed to handle unexpected inputs. The decline of LISP machines and the failure of Japan’s Fifth Generation project contributed further to the slowdown. Many researchers distanced themselves from AI, calling their work informatics or machine learning to avoid the stigma.

AI’s resilience through winters

AI pushed on through the 1990s, slowly and painfully, and remained largely impractical. IBM Watson was supposed to revolutionise the way humans treat illness, yet its implementation in real-world medical practice encountered challenges at every turn: the system struggled to interpret doctors’ notes and to cater to the needs of local populations. In other words, AI fell short in delicate situations that demanded an equally delicate approach.

AI research and funding surged again in the early 2000s with advances in machine learning and big data. However, AI’s reputation, tainted by past failures, led many to rebrand the technology. Terms like blockchain, autonomous vehicles, and voice-command devices gained investor interest, only for most to fade when they failed to meet inflated expectations.

Lessons from past AI winters

Each AI winter follows a familiar sequence: high expectations create hype, which gives way to technical and financial disappointment. AI researchers retreat from the field and dedicate themselves to more narrowly focused projects.

These narrower projects, however, favour short-term results over long-term research and prompt a wider reassessment of AI’s potential. The damage is not limited to the technology itself: the workforce suffers too, as talented researchers drift away from a field that comes to look unsustainable, and some potentially transformative projects are abandoned along the way.

Yet these periods provide valuable lessons. They remind us to be realistic about AI’s capabilities, to focus on foundational research, and to communicate transparently with investors and the public.

Are we headed toward another AI winter?

After an explosive 2023, the pace of AI progress appears to have slowed; breakthroughs in generative AI are becoming less frequent. Investor calls have seen fewer mentions of AI, and companies struggle to realise the productivity gains initially promised by tools like ChatGPT.

The use of generative AI models is limited by difficulties such as hallucinations and a lack of true understanding. In real-world applications, the spread of AI-generated content and unresolved questions around data usage present further problems that may slow progress.

However, a full-blown AI winter may yet be avoided. Open-source models are catching up quickly with closed alternatives, and companies are finding applications for them across industries. Investment has not stopped either: Perplexity, for instance, appears to have carved out a niche in the search space despite general scepticism toward the company’s claims.

The future of AI and its impact on businesses

It is difficult to say with certainty what will happen with AI in the future. On the one hand, progress will likely continue and better AI systems will be developed, bringing improved productivity for the search marketing industry. On the other hand, if the technology cannot address the current issues — including the ethics surrounding AI, the safety of the data used, and the accuracy of the systems — falling confidence in AI may result in reduced investment and, consequently, a more substantial industry slowdown.

In either case, businesses will need authenticity, trust, and a strategic approach to adopting AI. Search marketers and AI professionals must stay well-informed and understand the limits of AI tools, applying them responsibly and experimenting with them cautiously in pursuit of productivity gains, while avoiding the trap of relying too heavily on an emerging technology.

(Photo by Filip Bunkens)

See also: OpenAI co-founder’s ‘Safe AI’ startup secures $1bn, hits $5bn valuation.

The post AI winter: A cycle of hype, disappointment, and recovery appeared first on AI News.

Primate Labs launches Geekbench AI benchmarking tool https://www.artificialintelligence-news.com/news/primate-labs-launches-geekbench-ai-benchmarking-tool/ Fri, 16 Aug 2024 09:13:49 +0000

Primate Labs has officially launched Geekbench AI, a benchmarking tool designed specifically for machine learning and AI-centric workloads.

The release of Geekbench AI 1.0 marks the culmination of years of development and collaboration with customers, partners, and the AI engineering community. The benchmark, previously known as Geekbench ML during its preview phase, has been rebranded to align with industry terminology and ensure clarity about its purpose.

Geekbench AI is now available for Windows, macOS, and Linux through the Primate Labs website, as well as on the Google Play Store and Apple App Store for mobile devices.

Primate Labs’ latest benchmarking tool aims to provide a standardised method for measuring and comparing AI capabilities across different platforms and architectures. The benchmark offers a unique approach by providing three overall scores, reflecting the complexity and heterogeneity of AI workloads.

“Measuring performance is, put simply, really hard,” explained Primate Labs. “That’s not because it’s hard to run an arbitrary test, but because it’s hard to determine which tests are the most important for the performance you want to measure – especially across different platforms, and particularly when everyone is doing things in subtly different ways.”

The three-score system accounts for the varied precision levels and hardware optimisations found in modern AI implementations. This multi-dimensional approach allows developers, hardware vendors, and enthusiasts to gain deeper insights into a device’s AI performance across different scenarios.

A notable addition to Geekbench AI is the inclusion of accuracy measurements for each test. This feature acknowledges that AI performance isn’t solely about speed but also about the quality of results. By combining speed and accuracy metrics, Geekbench AI provides a more holistic view of AI capabilities, helping users understand the trade-offs between performance and precision.

Geekbench AI 1.0 introduces support for a wide range of AI frameworks, including OpenVINO on Linux and Windows, and vendor-specific TensorFlow Lite delegates like Samsung ENN, ArmNN, and Qualcomm QNN on Android. This broad framework support ensures that the benchmark reflects the latest tools and methodologies used by AI developers.

The benchmark also utilises more extensive and diverse datasets, which not only enhance the accuracy evaluations but also better represent real-world AI use cases. All workloads in Geekbench AI 1.0 run for a minimum of one second, allowing devices to reach their maximum performance levels during testing while still reflecting the bursty nature of real-world applications.
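
As a rough illustration of how a harness can combine a minimum run time with both throughput and accuracy reporting, consider the generic sketch below. It is not Geekbench AI’s implementation: the stand-in ‘models’, the error metric, and the one-second loop are assumptions made purely for demonstration.

```python
# Illustrative only: a generic micro-benchmark that reports both throughput and
# accuracy, in the spirit of combining speed and quality metrics. It does not
# reproduce Geekbench AI's actual workloads, datasets, or scoring.
import time

def reference_inference(x: list[float]) -> list[float]:
    """Stand-in for a full-precision model: a fixed linear transform."""
    return [2.0 * v + 1.0 for v in x]

def quantised_inference(x: list[float]) -> list[float]:
    """Stand-in for a lower-precision variant: same transform, coarsely rounded."""
    return [round(2.0 * v + 1.0, 1) for v in x]

def benchmark(fn, inputs, min_seconds: float = 1.0):
    """Run fn over the dataset repeatedly for at least min_seconds.
    Returns (passes per second, mean absolute error vs the reference output)."""
    reference = [reference_inference(x) for x in inputs]
    passes, total_err, total_vals = 0, 0.0, 0
    start = time.perf_counter()
    while True:
        for x, ref in zip(inputs, reference):
            out = fn(x)
            total_err += sum(abs(o - r) for o, r in zip(out, ref))
            total_vals += len(ref)
        passes += 1
        elapsed = time.perf_counter() - start
        if elapsed >= min_seconds:
            break
    return passes / elapsed, total_err / total_vals

if __name__ == "__main__":
    data = [[i * 0.01 for i in range(256)] for _ in range(32)]
    speed, err = benchmark(quantised_inference, data)
    print(f"Throughput: {speed:.1f} passes/sec, mean abs error vs reference: {err:.4f}")
```

A real benchmark would swap in actual inference calls, representative datasets, and task-appropriate accuracy metrics, but the pattern is the same: time repeated passes over fixed inputs and compare the outputs against a reference.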

Primate Labs has published detailed technical descriptions of the workloads and models used in Geekbench AI 1.0, emphasising their commitment to transparency and industry-standard testing methodologies. The benchmark is integrated with the Geekbench Browser, facilitating easy cross-platform comparisons and result sharing.

The company anticipates regular updates to Geekbench AI to keep pace with market changes and emerging AI features. However, Primate Labs believes that Geekbench AI has already reached a level of reliability that makes it suitable for integration into professional workflows, with major tech companies like Samsung and Nvidia already utilising the benchmark.

(Image Credit: Primate Labs)

See also: xAI unveils Grok-2 to challenge the AI hierarchy

The post Primate Labs launches Geekbench AI benchmarking tool appeared first on AI News.
