openai Archives – AI News
https://www.artificialintelligence-news.com/news/tag/openai/

OpenAI’s latest LLM opens doors for China’s AI startups
Tue, 29 Apr 2025
https://www.artificialintelligence-news.com/news/openai-latest-llm-opens-doors-for-china-ai-startups/

At the Apsara Conference in Hangzhou, hosted by Alibaba Cloud, China’s AI startups emphasised their efforts to develop large language models.

The companies’ efforts follow the announcement of OpenAI’s latest LLMs, including o1, a generative pre-trained transformer model from the Microsoft-backed company. The model is designed to tackle difficult tasks, paving the way for advances in science, coding, and mathematics.

During the conference, Yang Zhilin, founder of Moonshot AI, underlined the importance of the o1 model, adding that it has the potential to reshape various industries and create new opportunities for AI startups.

Zhilin stated that reinforcement learning and scalability might be pivotal for AI development. He spoke of the scaling law, which states that larger models with more training data perform better.
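The scaling law Zhilin invokes is often expressed as a power law in parameter count and training tokens: predicted loss falls as both grow. The sketch below illustrates the shape of such a curve; the constants are hypothetical placeholders (loosely modelled on published fits), not values for any model mentioned in this article.

```python
# Illustrative Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta.
# All constants here are hypothetical placeholders, not fitted values for any
# model discussed in this article.

def predicted_loss(params: float, tokens: float,
                   E: float = 1.69, A: float = 406.4, alpha: float = 0.34,
                   B: float = 410.7, beta: float = 0.28) -> float:
    """Estimated pre-training loss for a model with `params` parameters
    trained on `tokens` tokens of data."""
    return E + A / params ** alpha + B / tokens ** beta

small = predicted_loss(params=1e9, tokens=2e10)     # ~1B params, 20B tokens
large = predicted_loss(params=7e10, tokens=1.4e12)  # ~70B params, 1.4T tokens

# A larger model trained on more data is predicted to reach lower loss,
# which is the sense in which scale "pushes the ceiling" of capability.
assert large < small
```

The irreducible term `E` captures the loss floor that no amount of scale removes, which is why the curve flattens rather than falling to zero.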

“This approach pushes the ceiling of AI capabilities,” Zhilin said, adding that OpenAI o1 has the potential to disrupt sectors and generate new opportunities for startups.

OpenAI has also stressed the model’s ability to solve complex problems, which it says the model approaches in a manner similar to human thinking. By refining its strategies and learning from mistakes, the model improves its problem-solving capabilities.

Zhilin said companies with enough computing power will be able to innovate not only in algorithms, but also in foundational AI models. He sees this as pivotal, as AI engineers rely increasingly on reinforcement learning to generate new data after exhausting available organic data sources.

StepFun CEO Jiang Daxin concurred with Zhilin but stated that computational power remains a big challenge for many start-ups, particularly due to US trade restrictions that hinder Chinese enterprises’ access to advanced semiconductors.

“The computational requirements are still substantial,” Jiang stated.

An insider at Baichuan AI has said that only a small group of Chinese AI startups — including Moonshot AI, Baichuan AI, Zhipu AI, and MiniMax — are in a position to make large-scale investments in reinforcement learning. These companies — collectively referred to as the “AI tigers” — are heavily involved in LLM development, driving the next generation of AI.

More from the Apsara Conference

Also at the conference, Alibaba Cloud made several announcements, including the release of its Qwen 2.5 model family, which features advances in coding and mathematics. The models range from 0.5 billion to 72 billion parameters and support approximately 29 languages, including Chinese, English, French, and Spanish.

Specialised models such as Qwen2.5-Coder and Qwen2.5-Math have already gained some traction, with over 40 million downloads across the Hugging Face and ModelScope platforms.

Alibaba Cloud also expanded its product portfolio, adding a text-to-video model to its image generator, Tongyi Wanxiang. The model can create videos in realistic and animated styles, with possible uses in advertising and filmmaking.

Alibaba Cloud unveiled Qwen 2-VL, the latest version of its vision language model. It handles videos longer than 20 minutes, supports video-based question-answering, and is optimised for mobile devices and robotics.

(Photo by: @Guy_AI_Wise via X)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Coalition opposes OpenAI shift from nonprofit roots
Thu, 24 Apr 2025
https://www.artificialintelligence-news.com/news/coalition-opposes-openai-shift-from-nonprofit-roots/

A coalition of experts, including former OpenAI employees, has voiced strong opposition to the company’s shift away from its nonprofit roots.

In an open letter addressed to the Attorneys General of California and Delaware, the group – which also includes legal experts, corporate governance specialists, AI researchers, and nonprofit representatives – argues that the proposed changes fundamentally threaten OpenAI’s original charitable mission.   

OpenAI was founded with a unique structure. Its core purpose, enshrined in its Articles of Incorporation, is “to ensure that artificial general intelligence benefits all of humanity” rather than serving “the private gain of any person.”

The letter’s signatories contend that the planned restructuring – transforming the current for-profit subsidiary (OpenAI-profit) controlled by the original nonprofit entity (OpenAI-nonprofit) into a Delaware public benefit corporation (PBC) – would dismantle crucial governance safeguards.

This shift, the signatories argue, would transfer ultimate control over the development and deployment of potentially transformative Artificial General Intelligence (AGI) from a charity focused on humanity’s benefit to a for-profit enterprise accountable to shareholders.

Original vision of OpenAI: Nonprofit control as a bulwark

OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work”. While acknowledging AGI’s potential to “elevate humanity,” OpenAI’s leadership has also warned of “serious risk of misuse, drastic accidents, and societal disruption.”

Co-founder Sam Altman and others have even signed statements equating mitigating AGI extinction risks with preventing pandemics and nuclear war.   

The company’s founders – including Altman, Elon Musk, and Greg Brockman – were initially concerned about AGI being developed by purely commercial entities like Google. They established OpenAI as a nonprofit specifically “unconstrained by a need to generate financial return”. As Altman stated in 2017, “The only people we want to be accountable to is humanity as a whole.”

Even when OpenAI introduced a “capped-profit” subsidiary in 2019 to attract necessary investment, it emphasised that the nonprofit parent would retain control and that the mission remained paramount. Key safeguards included:   

  • Nonprofit control: The for-profit subsidiary was explicitly “controlled by OpenAI Nonprofit’s board”.   
  • Capped profits: Investor returns were capped, with excess value flowing back to the nonprofit for humanity’s benefit.   
  • Independent board: A majority of nonprofit board members were required to be independent, holding no financial stake in the subsidiary.   
  • Fiduciary duty: The board’s legal duty was solely to the nonprofit’s mission, not to maximising investor profit.   
  • AGI ownership: AGI technologies were explicitly reserved for the nonprofit to govern.

Altman himself testified to Congress in 2023 that this “unusual structure” “ensures it remains focused on [its] long-term mission.”

A threat to the mission?

The critics argue the move to a PBC structure would jeopardise these safeguards:   

  • Subordination of mission: A PBC board – while able to consider public benefit – would also have duties to shareholders, potentially balancing profit against the mission rather than prioritising the mission above all else.   
  • Loss of enforceable duty: The current structure gives Attorneys General the power to enforce the nonprofit’s duty to the public. Under a PBC, this direct public accountability – enforceable by regulators – would likely vanish, leaving shareholder derivative suits as the primary enforcement mechanism.   
  • Uncapped profits?: Reports suggest the profit cap might be removed, potentially reallocating vast future wealth from the public benefit mission to private shareholders.   
  • Board independence uncertain: Commitments to a majority-independent board overseeing AI development could disappear.   
  • AGI control shifts: Ownership and control of AGI would likely default to the PBC and its investors, not the mission-focused nonprofit. Reports even suggest OpenAI and Microsoft have discussed removing contractual restrictions on Microsoft’s access to future AGI.   
  • Charter commitments at risk: Commitments like the “stop-and-assist” clause (pausing competition to help a safer, aligned AGI project) might not be honoured by a profit-driven entity.  

OpenAI has publicly cited competitive pressures (i.e. attracting investment and talent against rivals with conventional equity structures) as reasons for the change.

However, the letter counters that competitive advantage isn’t the charitable purpose of OpenAI and that its unique nonprofit structure was designed to impose certain competitive costs in favour of safety and public benefit. 

“Obtaining a competitive advantage by abandoning the very governance safeguards designed to ensure OpenAI remains true to its mission is unlikely to, on balance, advance the mission,” the letter states.   

The authors also question why OpenAI abandoning nonprofit control is necessary merely to simplify the capital structure, suggesting the core issue is the subordination of investor interests to the mission. They argue that while the nonprofit board can consider investor interests if it serves the mission, the restructuring appears aimed at allowing these interests to prevail at the expense of the mission.

Many of these arguments have also been pushed by Elon Musk in his legal action against OpenAI. Earlier this month, OpenAI counter-sued Musk for allegedly orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the company years ago and started rival AI firm xAI.

Call for intervention

The signatories of the open letter urge intervention, demanding answers from OpenAI about how the restructuring away from a nonprofit serves its mission and why safeguards previously deemed essential are now obstacles.

Furthermore, the signatories request a halt to the restructuring, preservation of nonprofit control and other safeguards, and measures to ensure the board’s independence and ability to oversee management effectively in line with the charitable purpose.

“The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritise shareholder returns,” the signatories conclude.

See also: How does AI judge? Anthropic studies the values of Claude

OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival
Thu, 10 Apr 2025
https://www.artificialintelligence-news.com/news/openai-counter-sues-elon-musk-attempts-take-down-ai-rival/

OpenAI has launched a legal counteroffensive against one of its co-founders, Elon Musk, and his competing AI venture, xAI.

In court documents filed yesterday, OpenAI accuses Musk of orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the organisation years ago.

The court filing, submitted to the US District Court for the Northern District of California, alleges Musk could not tolerate OpenAI’s success after he had “abandoned and declared [it] doomed.”

OpenAI is now seeking legal remedies, including an injunction to stop Musk’s alleged “unlawful and unfair action” and compensation for damages already caused.   

Origin story of OpenAI and the departure of Elon Musk

The legal documents recount OpenAI’s origins in 2015, stemming from an idea discussed by current CEO Sam Altman and President Greg Brockman to create an AI lab focused on developing artificial general intelligence (AGI) – AI capable of outperforming humans – for the “benefit of all humanity.”

Musk was involved in the launch, serving on the initial non-profit board and pledging $1 billion in donations.   

However, the relationship fractured. OpenAI claims that between 2017 and 2018, Musk’s demands for “absolute control” of the enterprise – or its potential absorption into Tesla – were rebuffed by Altman, Brockman, and then-Chief Scientist Ilya Sutskever. The filing quotes Sutskever warning Musk against creating an “AGI dictatorship.”

Following this disagreement, OpenAI alleges Elon Musk quit in February 2018, declaring the venture would fail without him and that he would pursue AGI development at Tesla instead. Critically, OpenAI contends the pledged $1 billion “was never satisfied—not even close”.   

Restructuring, success, and Musk’s alleged ‘malicious’ campaign

Facing escalating costs for computing power and talent retention, OpenAI restructured and created a “capped-profit” entity in 2019 to attract investment while remaining controlled by the non-profit board and bound by its mission. This structure, OpenAI states, was announced publicly and Musk was offered equity in the new entity but declined and raised no objection at the time.   

OpenAI highlights that its subsequent breakthroughs – including GPT-3, ChatGPT, and GPT-4 – achieved massive public adoption and critical acclaim. These successes, OpenAI emphasises, came after the departure of Elon Musk and allegedly spurred his antagonism.

The filing details a chronology of alleged actions by Elon Musk aimed at harming OpenAI:   

  • Founding xAI: Musk “quietly created” his competitor, xAI, in March 2023.   
  • Moratorium call: Days later, Musk supported a call for a development moratorium on AI more advanced than GPT-4, a move OpenAI claims was intended “to stall OpenAI while all others, most notably Musk, caught up”.   
  • Records demand: Musk allegedly made a “pretextual demand” for confidential OpenAI documents, feigning concern while secretly building xAI.   
  • Public attacks: Using his social media platform X (formerly Twitter), Musk allegedly broadcast “press attacks” and “malicious campaigns” to his vast following, labelling OpenAI a “lie,” “evil,” and a “total scam”.   
  • Legal actions: Musk filed lawsuits, first in state court (later withdrawn) and then the current federal action, based on what OpenAI dismisses as meritless claims of a “Founding Agreement” breach.   
  • Regulatory pressure: Musk allegedly urged state Attorneys General to investigate OpenAI and force an asset auction.   
  • “Sham bid”: In February 2025, a Musk-led consortium made a purported $97.375 billion offer for OpenAI, Inc.’s assets. OpenAI derides this as a “sham bid” and a “stunt” lacking evidence of financing and designed purely to disrupt OpenAI’s operations, potential restructuring, fundraising, and relationships with investors and employees, particularly as OpenAI considers evolving its capped-profit arm into a Public Benefit Corporation (PBC). One investor involved allegedly admitted the bid’s aim was to gain “discovery”.   

Based on these allegations, OpenAI asserts two primary counterclaims against both Elon Musk and xAI:

  • Unfair competition: Alleging the “sham bid” constitutes an unfair and fraudulent business practice under California law, intended to disrupt OpenAI and gain an unfair advantage for xAI.   
  • Tortious interference with prospective economic advantage: Claiming the sham bid intentionally disrupted OpenAI’s existing and potential relationships with investors, employees, and customers. 

OpenAI argues Musk’s actions have forced it to divert resources and expend funds, causing harm. They claim his campaign threatens “irreparable harm” to their mission, governance, and crucial business relationships. The filing also touches upon concerns regarding xAI’s own safety record, citing reports of its AI Grok generating harmful content and misinformation.

The counterclaims mark a dramatic escalation in the legal battle between the AI pioneer and its departed co-founder. While Elon Musk initially sued OpenAI alleging a betrayal of its founding non-profit, open-source principles, OpenAI now contends Musk’s actions are a self-serving attempt to undermine a competitor he couldn’t control.

With billions at stake and the future direction of AGI in the balance, this dispute is far from over.

See also: Deep Cogito open LLMs use IDA to outperform same size models

ChatGPT hits record usage after viral Ghibli feature—Here are four risks to know first
Tue, 08 Apr 2025
https://www.artificialintelligence-news.com/news/chatgpt-hits-record-usage-after-viral-ghibli-feature/

Following the release of ChatGPT’s new image-generation tool, user activity has surged; millions of people have been drawn to a trend in which uploaded photos are transformed into the distinctive visual style of Studio Ghibli.

The spike in interest contributed to record use levels for the chatbot and strained OpenAI’s infrastructure temporarily.

Social media platforms were soon flooded with AI-generated images styled after work by the renowned Japanese animation studio, known for titles like Spirited Away and My Neighbor Totoro. According to Similarweb, weekly active ChatGPT users passed 150 million for the first time this year.

OpenAI CEO Sam Altman said the chatbot gained one million users in a single hour in early April – a figure the original, text-only ChatGPT took five days to reach when it first launched.

SensorTower data shows the company also recorded a jump in app activity. Weekly active users, downloads, and in-app revenue all hit record levels last week, following the update to GPT-4o that enabled new image-generation features. Compared to late March, downloads rose by 11%, active users grew 5%, and revenue increased by 6%.

The new tool’s popularity caused service slowdowns and intermittent outages. OpenAI acknowledged the increased load, with Altman saying that users should expect delays in feature roll-outs and occasional service disruption while capacity issues are resolved.

Legal questions surface around ChatGPT’s Ghibli-style AI art

The viral use of Studio Ghibli-inspired AI imagery from OpenAI’s ChatGPT has raised concerns about copyright. Legal experts point out that while artistic styles themselves may not always be protected, closely mimicking a well-known look could fall into a legal grey area.

“The legal landscape of AI-generated images mimicking Studio Ghibli’s distinctive style is an uncertain terrain. Copyright law has generally protected only specific expressions rather than artistic styles themselves,” said Evan Brown, partner at law firm Neal & McDevitt.

Miyazaki’s past comments have also resurfaced. In 2016, the Studio Ghibli co-founder responded to early AI-generated artwork by saying, “I am utterly disgusted. I would never wish to incorporate this technology into my work at all.”

OpenAI has not commented on whether the model used for its image generation was trained on content similar to Ghibli’s animation.

Data privacy and personal risk

The trend has also drawn attention to user privacy and data security. Christoph C. Cemper, founder of AI prompt management firm AIPRM, cautioned that uploading a photo for artistic transformation may come with more risks than many users realise.

“When you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties – none of which you may be fully aware of unless you read the fine print,” Cemper said.

OpenAI’s privacy policy confirms that it collects both personal information and use data, including images and content submitted by users. Unless users opt out of training data collection or request deletion via their settings, content will be retained and used to improve future AI models.

Cemper said that once a facial image is uploaded, it becomes vulnerable to misuse. That data could be scraped, leaked, or used in identity theft, deepfake content, or other impersonation scams. He also pointed to prior incidents where private images were found in public AI datasets like LAION-5B, which are used to train various tools like Stable Diffusion.

Copyright and licensing considerations

There are also concerns that AI-generated content styled after recognisable artistic brands could cross into copyright infringement. While creating art in the style of Studio Ghibli, Disney, or Pixar might seem harmless, legal experts warn that such works may be considered derivative, especially if the mimicry is too close.

In 2022, several artists filed a class-action lawsuit against AI companies, claiming their models were trained on original artwork without consent. The cases reflect the broader conversation around how to balance innovation with creators’ rights as generative AI becomes more widely used.

Cemper also advised users to carefully review the terms of service on AI platforms. Many contain licensing clauses with language like “transferable rights,” “non-exclusive,” or “irrevocable licence,” which allow platforms to reproduce, modify, or distribute submitted content – even after the app is deleted.

“The rollout of ChatGPT’s 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk – the lines between creativity and copyright infringement are increasingly blurred,” Cemper said.

“The rapid pace of AI development also raises significant concerns about privacy and data security. There’s a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data.”

Search interest in “ChatGPT Studio Ghibli” has increased by more than 1,200% in the past week, but alongside the creativity and virality comes a wave of serious concerns about privacy, copyright, and data use. As AI image tools become more advanced and accessible, users may want to think twice before uploading personal images, especially if they are unsure where the data may ultimately end up.

(Image by YouTube Fireship)

See also: Midjourney V7: Faster AI image generation


Study claims OpenAI trains AI models on copyrighted data
Wed, 02 Apr 2025
https://www.artificialintelligence-news.com/news/study-claims-openai-trains-ai-models-copyrighted-data/

A new study from the AI Disclosures Project has raised questions about the data OpenAI uses to train its large language models (LLMs). The research indicates the GPT-4o model from OpenAI demonstrates a “strong recognition” of paywalled and copyrighted data from O’Reilly Media books.

The AI Disclosures Project, led by technologist Tim O’Reilly and economist Ilan Strauss, aims to address the potentially harmful societal impacts of AI’s commercialisation by advocating for improved corporate and technological transparency. The project’s working paper highlights the lack of disclosure in AI, drawing parallels with financial disclosure standards and their role in fostering robust securities markets.

The study used a legally obtained dataset of 34 copyrighted O’Reilly Media books to investigate whether LLMs from OpenAI were trained on copyrighted data without consent. The researchers applied the DE-COP membership inference attack method to determine whether the models could differentiate between human-authored O’Reilly texts and paraphrased LLM versions.
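DE-COP frames membership inference as a multiple-choice quiz: the model is shown a verbatim passage alongside paraphrases and asked to pick the original; accuracy well above chance across a book suggests the book was in the training data. Below is a minimal sketch of the quiz construction with invented passages; it is an assumed illustration of the idea, not the study’s actual implementation.

```python
import random

# Sketch of a DE-COP-style quiz (assumed structure, not the study's code):
# shuffle one verbatim passage in among its paraphrases and record which
# option index holds the original.

def build_quiz(verbatim: str, paraphrases: list[str], seed: int = 0):
    options = paraphrases + [verbatim]
    random.Random(seed).shuffle(options)          # deterministic for a seed
    answer = options.index(verbatim)
    prompt = "Which option reproduces the book's exact text?\n" + "\n".join(
        f"({i}) {text}" for i, text in enumerate(options))
    return prompt, answer

prompt, answer = build_quiz(
    verbatim="The quick brown fox jumps over the lazy dog.",
    paraphrases=["A fast brown fox leaps over a sleepy dog.",
                 "The speedy fox hops across the idle hound."])

# In the real attack, the LLM answers `prompt` for many passages per book;
# choosing `answer` far above the 1-in-len(options) chance rate is treated
# as evidence the passage was memorised during training.
```

Aggregating these per-passage outcomes into a single separability score is what the AUROC figures in the findings below summarise.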

Key findings from the report include:

  • GPT-4o shows “strong recognition” of paywalled O’Reilly book content, with an AUROC score of 82%. In contrast, OpenAI’s earlier model, GPT-3.5 Turbo, does not show the same level of recognition (AUROC score just above 50%)
  • GPT-4o exhibits stronger recognition of non-public O’Reilly book content compared to publicly accessible samples (82% vs 64% AUROC scores respectively)
  • GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples than non-public ones (64% vs 54% AUROC scores)
  • GPT-4o Mini, a smaller model, showed no knowledge of public or non-public O’Reilly Media content when tested (AUROC approximately 50%)
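The AUROC figures above measure how cleanly recognition scores separate suspected in-training (“member”) texts from unseen ones: 50% is coin-flip chance, 100% is perfect separation. A self-contained way to compute it, using made-up scores rather than the study’s data:

```python
# Brute-force AUROC: the probability that a randomly chosen "member" text
# scores higher than a randomly chosen "non-member" text (ties count 0.5).
# The scores and labels below are invented for illustration only.

def auroc(scores, labels):
    """Labels: 1 = suspected training-set member, 0 = non-member."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-book recognition scores from a membership-inference test.
scores = [0.91, 0.85, 0.78, 0.72, 0.60, 0.55, 0.40, 0.35]
labels = [1, 1, 1, 0, 1, 0, 0, 0]

print(f"AUROC = {auroc(scores, labels):.2f}")  # prints: AUROC = 0.94
```

An AUROC near 50%, as reported for GPT-4o Mini, means the scores are statistically indistinguishable from guessing, while the 82% figure for GPT-4o indicates substantial separation.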

The researchers suggest that access violations may have occurred via the LibGen database, as all of the O’Reilly books tested were found there. They also acknowledge that newer LLMs have an improved ability to distinguish between human-authored and machine-generated language, though they argue this does not undermine the method’s ability to classify data.

The study highlights the potential for “temporal bias” in the results, due to language changes over time. To account for this, the researchers tested two models (GPT-4o and GPT-4o Mini) trained on data from the same period.

The report notes that while the evidence is specific to OpenAI and O’Reilly Media books, it likely reflects a systemic issue around the use of copyrighted data. It argues that uncompensated training data usage could lead to a decline in the internet’s content quality and diversity, as revenue streams for professional content creation diminish.

The AI Disclosures Project emphasises the need for stronger accountability in AI companies’ model pre-training processes. They suggest that liability provisions that incentivise improved corporate transparency in disclosing data provenance may be an important step towards facilitating commercial markets for training data licensing and remuneration.

The EU AI Act’s disclosure requirements could help trigger a positive disclosure-standards cycle if properly specified and enforced. Ensuring that IP holders know when their work has been used in model training is seen as a crucial step towards establishing AI markets for content creator data.

Despite evidence that AI companies may be obtaining data illegally for model training, a market is emerging in which AI model developers pay for content through licensing deals. Companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.

The report concludes by stating that using 34 proprietary O’Reilly Media books, the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data.

(Image by Sergei Tokmakov)

See also: Anthropic provides insights into the ‘AI biology’ of Claude


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Study claims OpenAI trains AI models on copyrighted data appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/study-claims-openai-trains-ai-models-copyrighted-data/feed/ 0
OpenAI pulls free GPT-4o image generator after one day https://www.artificialintelligence-news.com/news/openai-pulls-free-gpt-4o-image-generator-after-one-day/ https://www.artificialintelligence-news.com/news/openai-pulls-free-gpt-4o-image-generator-after-one-day/#respond Thu, 27 Mar 2025 12:24:39 +0000 https://www.artificialintelligence-news.com/?p=105037 OpenAI has pulled its upgraded image generation feature, powered by the advanced GPT-4o reasoning model, from the free tier of ChatGPT. The decision comes just a day after the update was launched, following an unforeseen surge in users creating images in the distinctive style of renowned Japanese animation house, Studio Ghibli. The update, which promised […]

The post OpenAI pulls free GPT-4o image generator after one day appeared first on AI News.

]]>
OpenAI has pulled its upgraded image generation feature, powered by the advanced GPT-4o reasoning model, from the free tier of ChatGPT.

The decision comes just a day after the update was launched, following an unforeseen surge in users creating images in the distinctive style of renowned Japanese animation house, Studio Ghibli.

The update, which promised to deliver enhanced realism in both AI-generated images and text, was intended to showcase the capabilities of GPT-4o. 

This new model employs an “autoregressive approach” to image creation, building visuals from left to right and top to bottom, a method that contrasts with the simultaneous generation employed by older models. This technique is designed to improve the accuracy and lifelike quality of the imagery produced.
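As a rough illustration of the ordering difference (this is a toy sketch, not OpenAI’s unpublished implementation), an autoregressive generator produces each element in raster order, conditioning on what has already been generated:

```python
import random

def generate_raster(width, height, seed=0):
    """Toy autoregressive generator: cells are produced left-to-right,
    top-to-bottom, each conditioned on what came before (here, only the
    previous cell's value; a real model conditions on all prior tokens)."""
    random.seed(seed)
    grid, prev = [], 128
    for _ in range(height):
        row = []
        for _ in range(width):
            prev = (prev + random.randint(-8, 8)) % 256  # sample next "pixel"
            row.append(prev)
        grid.append(row)
    return grid

image = generate_raster(8, 4)
print(len(image), len(image[0]))  # rows, columns generated in order
```

Diffusion-style models, by contrast, refine the whole canvas simultaneously, which is why the two approaches trade off differently on text legibility and fine detail.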

Furthermore, the new model generates sharper and more coherent text within images, addressing a common shortcoming of previous AI models which often resulted in blurry or nonsensical text. 

OpenAI also conducted post-launch training, guided by human feedback, to identify and rectify common errors in both text and image outputs.

However, the public response to the image generation upgrade took an unexpected turn almost immediately after its release on ChatGPT. 

Users embraced the ability to create images in the iconic style of Studio Ghibli, sharing their imaginative creations across various social media platforms. These included reimagined scenes from classic films like “The Godfather” and “Star Wars,” as well as popular internet memes such as “distracted boyfriend” and “disaster girl,” all rendered with the aesthetic of the beloved animation studio.

Even OpenAI CEO Sam Altman joined in on the fun, changing his X profile picture to a Studio Ghibli-esque rendition of himself:

Screenshot of the profile of OpenAI CEO Sam Altman on Twitter

However, later that day, Altman posted on X announcing a temporary delay in the rollout of the image generator update for free ChatGPT users.

While paid subscribers to ChatGPT Plus, Pro, and Team continue to have access to the feature, Altman provided no specific timeframe for when the functionality would return to the free tier.

The virality of the Studio Ghibli-style images seemingly prompted OpenAI to reconsider its rollout strategy. While the company had attempted to address ethical and legal considerations surrounding AI image generation, the sheer volume and nature of the user-generated content appear to have caught them off-guard.

The intersection of AI-generated art and intellectual property rights is a complex and often debated area. Artistic style has not historically been protected by copyright law in the same way as specific works.

Despite this legal nuance, OpenAI’s swift decision to withdraw the GPT-4o image generation feature from its free tier suggests a cautious approach. The company appears to be taking a step back to evaluate the situation and determine its next course of action in light of the unexpected popularity of Ghibli-inspired AI art.

OpenAI’s decision to roll back the deployment of its latest image generation feature underscores the ongoing uncertainty around not just copyright law, but also the ethical implications of using AI to replicate human creativity.

(Photo by Kai Pilger)

See also: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI pulls free GPT-4o image generator after one day appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/openai-pulls-free-gpt-4o-image-generator-after-one-day/feed/ 0
ChatGPT gains agentic capability for complex research https://www.artificialintelligence-news.com/news/chatgpt-gains-agentic-capability-for-complex-research/ https://www.artificialintelligence-news.com/news/chatgpt-gains-agentic-capability-for-complex-research/#respond Mon, 03 Feb 2025 17:22:06 +0000 https://www.artificialintelligence-news.com/?p=104108 OpenAI is releasing a powerful agentic capability that enables ChatGPT to conduct complex, multi-step research tasks online. The feature, called Deep Research, reportedly achieves in tens of minutes what could take a human researcher hours or even days. OpenAI describes Deep Research as a significant milestone in its journey toward artificial general intelligence (AGI). “The […]

The post ChatGPT gains agentic capability for complex research appeared first on AI News.

]]>
OpenAI is releasing a powerful agentic capability that enables ChatGPT to conduct complex, multi-step research tasks online. The feature, called Deep Research, reportedly achieves in tens of minutes what could take a human researcher hours or even days.

OpenAI describes Deep Research as a significant milestone in its journey toward artificial general intelligence (AGI).

“The ability to synthesise knowledge is a prerequisite for creating new knowledge,” says OpenAI. “For this reason, Deep Research marks a significant step toward our broader goal of developing AGI.”

Agentic AI enables ChatGPT to assist with complex research

Deep Research empowers ChatGPT to find, analyse, and synthesise information from hundreds of online sources autonomously. With just a prompt from the user, the tool can deliver a comprehensive report, comparable to the output of a research analyst, according to OpenAI.

Drawing on a variant of OpenAI’s upcoming “o3” model, the tool aims to free users from time-consuming, labour-intensive information gathering. Whether it’s a competitive analysis of streaming platforms, an informed policy review, or even personalised recommendations for a new commuter bike, Deep Research promises precise and reliable results.

Importantly, every output includes full citations and transparent documentation—enabling users to verify the findings with ease.

The tool appears particularly adept at uncovering niche or non-intuitive insights, making it an invaluable asset across industries like finance, science, policymaking, and engineering. But OpenAI also envisions Deep Research being useful for the average user, such as shoppers looking for hyper-personalised recommendations or a specific product.

This latest agentic capability operates through the user interface of ChatGPT; users simply select the “Deep Research” option in the message composer and type their query. Supporting files or spreadsheets can also be uploaded for additional context.

Once initiated, the AI embarks on a rigorous multi-step process, which may take 5-30 minutes to complete. A sidebar provides updates on the actions taken and the sources consulted. Users can carry on with other tasks and will be notified when the final report is ready. 

The results are presented in the chat as detailed, well-documented reports. In the coming weeks, OpenAI plans to enhance these outputs further by embedding images, data visualisations, and graphs to deliver even greater clarity and context.
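The multi-step process described above — browse, adapt to new information, then compile a cited report — can be caricatured as a simple loop. The `search` function here is a hypothetical stand-in for web browsing, not anything OpenAI has published:

```python
def deep_research(query, search, max_steps=3):
    """Illustrative research loop: gather a source, refine the next
    query with what was found, then emit a report with citations."""
    findings, topic = [], query
    for _ in range(max_steps):
        source, snippet = search(topic)       # browse one source
        findings.append((source, snippet))
        topic = f"{query} ({snippet})"        # adaptively refine the search
    return "\n".join(f"- {text} [source: {src}]" for src, text in findings)

# a fake search backend for demonstration
fake_search = lambda t: ("example.org", f"note on '{t[:24]}'")
report = deep_research("commuter bikes", fake_search)
print(report.count("[source:"))  # one citation per finding
```

The real system adds backtracking, tool use (such as Python for graphs), and the progress sidebar, but the gather-refine-report shape is the same.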

Unlike GPT-4o – which excels in real-time, multimodal conversations – Deep Research prioritises depth and detail. Its ability to rigorously cite sources and provide comprehensive analysis sets it apart—shifting the focus from fast, summarised answers to well-documented, research-grade insights.

Built for real-world challenges

Deep Research leverages sophisticated training methodologies, grounded in real-world browsing and reasoning tasks across diverse domains. Its model was trained via reinforcement learning to autonomously plan and execute multi-step research processes, including backtracking and adaptively refining its approach as new information becomes available.

The tool can browse user-uploaded files, generate and iterate on graphs using Python, embed media such as generated images and web pages into responses, and cite exact sentences or passages from its sources. The result of this extensive training is a highly capable agent for tackling complex real-world problems.

OpenAI evaluated Deep Research on “Humanity’s Last Exam”, a broad expert-level benchmark. The test – comprising over 3,000 questions covering topics from rocket science and linguistics to ecology and classics – measures an AI’s competence in solving multifaceted problems.

The results were impressive, with the model achieving a record-breaking 26.6% accuracy across these domains:

  • GPT-4o: 3.3%
  • Grok-2: 3.8%
  • Claude 3.5 Sonnet: 4.3%
  • OpenAI o1: 9.1%
  • DeepSeek-R1: 9.4%
  • Deep Research: 26.6% (with browsing + Python tools)
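Using only the figures reported above, the size of the jump over the best prior result can be quantified with a quick script:

```python
# % accuracy on Humanity's Last Exam, as reported in this article
scores = {
    "GPT-4o": 3.3,
    "Grok-2": 3.8,
    "Claude 3.5 Sonnet": 4.3,
    "OpenAI o1": 9.1,
    "DeepSeek-R1": 9.4,
    "Deep Research": 26.6,
}
best_prior = max(v for k, v in scores.items() if k != "Deep Research")
ratio = scores["Deep Research"] / best_prior
print(f"Deep Research scores {ratio:.1f}x the best prior result ({best_prior}%)")
```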

Deep Research also reached a new state-of-the-art performance on the GAIA benchmark, which evaluates AI models on real-world questions requiring reasoning, multi-modal fluency, and tool-use proficiency. Deep Research topped the leaderboard with a score of 72.57%.

Limitations and challenges

While the Deep Research agentic AI capability in ChatGPT signifies a bold step forward, OpenAI acknowledges that the technology is still in its early stages and comes with limitations.

The system occasionally “hallucinates” facts or offers incorrect inferences, albeit at a notably reduced rate compared to existing GPT models, according to OpenAI. It also faces challenges in differentiating between authoritative sources and speculative content, and it struggles to calibrate its confidence levels—often displaying undue certainty for potentially uncertain findings.

Minor formatting errors in reports and citations, as well as delays in initiating tasks, could also frustrate initial users. OpenAI says these issues are expected to improve over time with more usage and iterative refinements.

OpenAI is rolling out the capability gradually, starting with Pro users, who will have access to up to 100 queries per month. Plus and Team tiers will follow suit, with Enterprise access arriving next. 

UK, Swiss, and European Economic Area residents are not yet able to access the feature, but OpenAI says it’s working on expanding its rollout to these regions.

In the weeks ahead, OpenAI will expand the feature to ChatGPT’s mobile and desktop platforms. The long-term vision includes enabling connections to subscription-based or proprietary data sources, further enhancing the robustness and personalisation of its outputs.

Looking further ahead, OpenAI envisions integrating Deep Research with “Operator,” an existing chatbot capability that takes real-world actions. This integration would allow ChatGPT to seamlessly handle tasks that require both asynchronous online research and real-world execution.

(Photo by John Schnobrich)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post ChatGPT gains agentic capability for complex research appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/chatgpt-gains-agentic-capability-for-complex-research/feed/ 0
Microsoft and OpenAI probe alleged data theft by DeepSeek https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/ https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/#respond Wed, 29 Jan 2025 15:28:41 +0000 https://www.artificialintelligence-news.com/?p=17009 Microsoft and OpenAI are investigating a potential breach of the AI firm’s system by a group allegedly linked to Chinese AI startup DeepSeek. According to Bloomberg, the investigation stems from suspicious data extraction activity detected in late 2024 via OpenAI’s application programming interface (API), sparking broader concerns over international AI competition. Microsoft, OpenAI’s largest financial […]

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

]]>
Microsoft and OpenAI are investigating a potential breach of the AI firm’s system by a group allegedly linked to Chinese AI startup DeepSeek.

According to Bloomberg, the investigation stems from suspicious data extraction activity detected in late 2024 via OpenAI’s application programming interface (API), sparking broader concerns over international AI competition.

Microsoft, OpenAI’s largest financial backer, first identified the large-scale data extraction and informed the ChatGPT maker of the incident. Sources believe the activity may have violated OpenAI’s terms of service, or that the group may have exploited loopholes to bypass restrictions limiting how much data they could collect.

DeepSeek has quickly risen to prominence in the competitive AI landscape, particularly with the release of its latest model, R1, on 20 January.

Billed as a rival to OpenAI’s ChatGPT in performance but developed at a significantly lower cost, R1 has shaken up the tech industry. Its release triggered a sharp decline in tech and AI stocks that wiped billions from US markets in a single week.

David Sacks, the White House’s newly appointed “crypto and AI czar,” alleged that DeepSeek may have employed questionable methods to achieve its AI’s capabilities. In an interview with Fox News, Sacks noted evidence suggesting that DeepSeek had used “distillation” to train its AI models using outputs from OpenAI’s systems.

“There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI’s models, and I don’t think OpenAI is very happy about this,” Sacks told the network.  

Model distillation involves training one AI system using data generated by another, potentially allowing a competitor to develop similar functionality. This method, when applied without proper authorisation, has stirred ethical and intellectual property debates as the global race for AI supremacy heats up.  
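In outline, distillation harvests a stronger model’s outputs as synthetic training data and trains a smaller model to reproduce them. A minimal sketch follows; the teacher and student here are trivial stand-ins, not real models:

```python
def teacher(prompt):
    # stands in for an API call to a large proprietary model
    return f"detailed answer to: {prompt}"

# Step 1: collect the teacher's responses as synthetic training data
prompts = ["explain transformers", "summarise copyright law"]
dataset = [(p, teacher(p)) for p in prompts]

# Step 2: "train" the student on those pairs (real distillation would
# fine-tune a smaller network on them rather than memorise verbatim)
student_memory = dict(dataset)

def student(prompt):
    return student_memory.get(prompt, "unknown")

print(student("explain transformers") == teacher("explain transformers"))
```

Because the student only ever sees the teacher’s outputs, not its weights, distillation can happen entirely through a public API — which is exactly why terms-of-service restrictions are the main line of defence against it.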

OpenAI declined to comment specifically on the accusations against DeepSeek but acknowledged the broader risk posed by model distillation, particularly by Chinese companies.  

“We know PRC-based companies — and others — are constantly trying to distill the models of leading US AI companies,” a spokesperson for OpenAI told Bloomberg.  

Geopolitical and security concerns  

Growing tensions around AI innovation now extend into national security. CNBC reported that the US Navy has banned its personnel from using DeepSeek’s products, citing fears that the Chinese government could exploit the platform to access sensitive information.

In an email dated 24 January, the Navy warned its staff against using DeepSeek AI “in any capacity” due to “potential security and ethical concerns associated with the model’s origin and usage.”

Critics have highlighted DeepSeek’s privacy policy, which permits the collection of data such as IP addresses, device information, and even keystroke patterns—a scope of data collection considered excessive by some experts.

Earlier this week, DeepSeek stated it was facing “large-scale malicious attacks” against its systems. A banner on its website informed users of a temporary sign-up restriction.

The growing competition between the US and China in particular in the AI sector has underscored wider concerns regarding technological ownership, ethical governance, and national security.  

Experts warn that as AI systems advance and become increasingly integral to global economic and strategic planning, disputes over data usage and intellectual property are only likely to intensify. Accusations such as those against DeepSeek amplify alarm over China’s rapid development in the field and its potential quest to bypass US-led safeguards through reverse engineering and other means.  

While OpenAI and Microsoft continue their investigation into the alleged misuse of OpenAI’s platform, businesses and governments alike are paying close attention. The case could set a precedent for how AI developers police model usage and enforce terms of service.

For now, the response from both US and Chinese stakeholders highlights how AI innovation has become not just a race for technological dominance, but a fraught geopolitical contest that is shaping 21st-century power dynamics.

(Image by Mohamed Hassan)

See also: Qwen 2.5-Max outperforms DeepSeek V3 in some benchmarks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/feed/ 0
ChatGPT Gov aims to modernise US government agencies https://www.artificialintelligence-news.com/news/chatgpt-gov-aims-modernise-us-government-agencies/ https://www.artificialintelligence-news.com/news/chatgpt-gov-aims-modernise-us-government-agencies/#respond Tue, 28 Jan 2025 16:21:26 +0000 https://www.artificialintelligence-news.com/?p=16999 OpenAI has launched ChatGPT Gov, a specially designed version of its AI chatbot tailored for use by US government agencies. ChatGPT Gov aims to harness the potential of AI to enhance efficiency, productivity, and service delivery while safeguarding sensitive data and complying with stringent security requirements. “We believe the US government’s adoption of artificial intelligence […]

The post ChatGPT Gov aims to modernise US government agencies appeared first on AI News.

]]>
OpenAI has launched ChatGPT Gov, a specially designed version of its AI chatbot tailored for use by US government agencies.

ChatGPT Gov aims to harness the potential of AI to enhance efficiency, productivity, and service delivery while safeguarding sensitive data and complying with stringent security requirements.

“We believe the US government’s adoption of artificial intelligence can boost efficiency and productivity and is crucial for maintaining and enhancing America’s global leadership in this technology,” explained OpenAI.

The company emphasised how its AI solutions present “enormous potential” for tackling complex challenges in the public sector, ranging from improving public health and infrastructure to bolstering national security.

By introducing ChatGPT Gov, OpenAI hopes to offer tools that “serve the national interest and the public good, aligned with democratic values,” while assisting policymakers in responsibly integrating AI to enhance services for the American people.

The role of ChatGPT Gov

Public sector organisations can deploy ChatGPT Gov within their own Microsoft Azure environments, either through Azure’s commercial cloud or the specialised Azure Government cloud.

This self-hosting capability ensures that agencies can meet strict security, privacy, and compliance standards, such as IL5, CJIS, ITAR, and FedRAMP High. 

OpenAI believes this infrastructure will not only help facilitate compliance with cybersecurity frameworks, but also speed up internal authorisation processes for handling non-public sensitive data.

The tailored version of ChatGPT incorporates many of the features found in the enterprise version, including:

  • The ability to save and share conversations within a secure government workspace.
  • Uploading text and image files for streamlined workflows.
  • Access to GPT-4o, OpenAI’s state-of-the-art model capable of advanced text interpretation, summarisation, coding, image analysis, and mathematics.
  • Customisable GPTs, which enable users to create and share specifically tailored models for their agency’s needs.
  • A built-in administrative console to help CIOs and IT departments manage users, groups, security protocols such as single sign-on (SSO), and more.

These features ensure that ChatGPT Gov is not merely a tool for innovation, but an infrastructure supportive of secure and efficient operations across US public-sector entities.

OpenAI says it’s actively working to achieve FedRAMP Moderate and High accreditations for its fully managed SaaS product, ChatGPT Enterprise, a step that would bolster trust in its AI offerings for government use.

Additionally, the company is exploring ways to expand ChatGPT Gov’s capabilities into Azure’s classified regions for even more secure environments.

“ChatGPT Gov reflects our commitment to helping US government agencies leverage OpenAI’s technology today,” the company said.

A better track record in government than most politicians

Since January 2024, ChatGPT has seen widespread adoption among US government agencies, with over 90,000 users across more than 3,500 federal, state, and local agencies having already sent over 18 million messages to support a variety of operational tasks.

Several notable agencies have highlighted how they are employing OpenAI’s AI tools for meaningful outcomes:

  • The Air Force Research Laboratory: The lab uses ChatGPT Enterprise for administrative purposes, including improving access to internal resources, basic coding assistance, and boosting AI education efforts.
  • Los Alamos National Laboratory: The laboratory leverages ChatGPT Enterprise for scientific research and innovation. This includes work within its Bioscience Division, which is evaluating ways GPT-4o can safely advance bioscientific research in laboratory settings.
  • State of Minnesota: Minnesota’s Enterprise Translations Office uses ChatGPT Team to provide faster, more accurate translation services to multilingual communities across the state. The integration has resulted in significant cost savings and reduced turnaround times.
  • Commonwealth of Pennsylvania: Employees in Pennsylvania’s pioneering AI pilot programme reported that ChatGPT Enterprise helped them reduce routine task times, such as analysing project requirements, by approximately 105 minutes per day on days they used the tool.

These early use cases demonstrate the transformative potential of AI applications across various levels of government.

Beyond delivering tangible improvements to government workflows, OpenAI seeks to foster public trust in artificial intelligence through collaboration and transparency. The company said it is committed to working closely with government agencies to align its tools with shared priorities and democratic values. 

“We look forward to collaborating with government agencies to enhance service delivery to the American people through AI,” OpenAI stated.

As other governments across the globe begin adopting similar technologies, America’s proactive approach may serve as a model for integrating AI into the public sector while safeguarding against risks.

Whether supporting administrative workflows, research initiatives, or language services, ChatGPT Gov stands as a testament to the growing role AI will play in shaping the future of effective governance.

(Photo by Dave Sherrill)

See also: Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post ChatGPT Gov aims to modernise US government agencies appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/chatgpt-gov-aims-modernise-us-government-agencies/feed/ 0
Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents https://www.artificialintelligence-news.com/news/yiannis-antoniou-lab49-openai-operator-era-browser-ai-agents/ https://www.artificialintelligence-news.com/news/yiannis-antoniou-lab49-openai-operator-era-browser-ai-agents/#respond Fri, 24 Jan 2025 14:03:14 +0000 https://www.artificialintelligence-news.com/?p=16963 OpenAI has unveiled Operator, a tool that integrates seamlessly with web browsers to perform tasks autonomously. From filling out forms to ordering groceries, Operator promises to simplify repetitive online activities by interacting directly with websites through clicks, typing, and scrolling. Designed around a new model called the Computer-Using Agent (CUA), Operator combines GPT-4o’s vision recognition […]

The post Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents appeared first on AI News.

]]>
OpenAI has unveiled Operator, a tool that integrates seamlessly with web browsers to perform tasks autonomously. From filling out forms to ordering groceries, Operator promises to simplify repetitive online activities by interacting directly with websites through clicks, typing, and scrolling.

Designed around a new model called the Computer-Using Agent (CUA), Operator combines GPT-4o’s vision recognition with advanced reasoning capabilities—allowing it to function as a virtual “human-in-the-browser.” Yet, for all its innovation, industry experts see room for refinement.

Yiannis Antoniou, Head of AI, Data, and Analytics at specialist consultancy Lab49, shared his insights on Operator’s significance and positioning in the competitive landscape of agent AI systems.

Agentic AI through a familiar interface

“OpenAI’s announcement of Operator, its latest foray into the agentic AI wars, is both fascinating and incomplete,” said Antoniou, who has over two decades of experience designing AI systems for financial services firms.


“Clearly influenced by Anthropic Claude’s Computer Use system, introduced back in October, Operator streamlines the experience by removing the need for complex infrastructure and focusing on a familiar interface: the browser.”

By designing Operator to operate within an environment users already understand, the web browser, OpenAI sidesteps the need for bespoke APIs or integrations.

“By leveraging the world’s most popular interface, OpenAI enhances the user experience and captures immediate interest from the general public. This browser-centric approach creates significant potential for widespread adoption, something Anthropic – despite its early-mover advantage – has struggled to achieve.”

Unlike some competing systems that may feel technical or niche in their application, Operator’s browser-focused framework lowers the barrier to entry and is a step forward in OpenAI’s efforts to democratise AI.

Unique take on usability and security

One of the hallmarks of Operator is its emphasis on adaptability and security, implemented through human-in-the-loop protocols. Antoniou acknowledged these thoughtful usability features but noted that more work is needed.

“Architecturally, Operator’s browser integration closely mirrors Claude’s system. Both involve taking screenshots of the user’s browser and sending them for analysis, as well as controlling the screen via virtual keystrokes and mouse movements. However, Operator introduces thoughtful usability touches. 

“Features like custom instructions for specific websites add a layer of personalisation, and the emphasis on human-in-the-loop safeguards against unauthorised actions – such as purchases, sending emails, or applying for jobs – demonstrate OpenAI’s awareness of potential security risks posed by malicious websites, but more work is clearly needed to make this system widely safe across a variety of scenarios.”

OpenAI has implemented a multi-layered safety framework for Operator, including takeover mode for secure inputs, user confirmations prior to significant actions, and monitoring systems to detect adversarial behavior. Furthermore, users can delete browsing data and manage privacy settings directly within the tool.
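The confirmation safeguard can be pictured as a simple gate in the agent’s action loop. The action names below are illustrative only, not OpenAI’s actual API:

```python
# consequential actions that must be approved by the user (hypothetical names)
SENSITIVE_ACTIONS = {"purchase", "send_email", "submit_application"}

def execute(action, confirm):
    """Run an agent action, routing consequential ones through a
    human confirmation callback before anything happens."""
    if action in SENSITIVE_ACTIONS and not confirm(action):
        return f"blocked: '{action}' awaits user approval"
    return f"executed: {action}"

deny_all = lambda action: False  # simulate a user who approves nothing
print(execute("scroll_page", deny_all))
print(execute("purchase", deny_all))
```

Routine browsing proceeds unimpeded while purchases, emails, and applications stall until the human approves — the “human-in-the-loop” pattern Antoniou highlights.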

However, Antoniou emphasised that these measures are still evolving—particularly as Operator encounters complex or sensitive tasks. 

OpenAI Operator further democratises AI

Antoniou also sees the release of Operator as a pivotal moment for the consumer AI landscape, albeit one that is still in its early stages. 

“Overall, this is an excellent first attempt at building an agentic system for everyday users, designed around how they naturally interact with technology. As the system develops – with added capabilities and more robust security controls – this limited rollout, priced at $200/month, will serve as a testing ground. 

“Once matured and extended to lower subscription tiers and the free version, Operator has the potential to usher in the era of consumer-facing agents, further democratising AI and embedding it into daily life.”

Designed initially for Pro users at a premium price point, Operator provides OpenAI with an opportunity to learn from early adopters and refine its capabilities.

Antoniou noted that while $200/month might not yet justify the system’s value for most users, investment in making Operator more powerful and accessible could lead to significant competitive advantages for OpenAI in the long run.

“Is it worth $200/month? Perhaps not yet. But as the system evolves, OpenAI’s moat will grow, making it harder for competitors to catch up. Now, the challenge shifts back to Anthropic and Google – both of whom have demonstrated similar capabilities in niche or engineering-focused products – to respond and stay in the game,” he concludes.

As OpenAI continues to fine-tune Operator, the potential to revolutionise how people interact with technology becomes apparent. From collaborations with companies like Instacart, DoorDash, and Uber to use cases in the public sector, Operator aims to balance innovation with trust and safety.

While early limitations and pricing may deter widespread adoption for now, these hurdles might only be temporary as OpenAI commits to enhancing usability and accessibility over time.

See also: OpenAI argues against ChatGPT data deletion in Indian court

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents appeared first on AI News.

OpenAI targets business sector with advanced AI tools

Fri, 24 Jan 2025
OpenAI, the powerhouse behind ChatGPT, is ramping up efforts to dominate the enterprise market with a suite of AI tools tailored for business users.

The company recently revealed its plans to introduce a series of enhancements designed to make AI integration seamless for companies of all sizes. This includes updates to its flagship AI agent technology, expected to transform workplace productivity by automating complex workflows, from financial analysis to customer service.

“Businesses are looking for solutions that go beyond surface-level assistance. Our agents are designed to provide in-depth, actionable insights,” said Sarah Friar, CFO of OpenAI. “This is particularly relevant as enterprises seek to streamline operations in today’s competitive landscape.”

OpenAI’s corporate strategy builds on its ongoing collaborations with tech leaders such as Microsoft, which has already integrated OpenAI’s technology into its Azure cloud platform. Analysts say these partnerships position OpenAI to rival established enterprise solutions providers like Salesforce and Oracle.

AI research assistant tools 

As part of its enterprise-focused initiatives, OpenAI is emphasising the development of AI research tools that cater to specific industries. 

For instance, its AI models are being trained on legal and medical data to create highly specialised assistants that could redefine research-intensive sectors. This focus aligns with the broader market demand for AI-driven solutions that enhance decision-making and efficiency.

Infrastructure for expansion 

OpenAI’s rapid growth strategy is supported by a robust infrastructure push. The company has committed to building state-of-the-art data centres in Europe and Asia, aiming to lower latency and improve service reliability for global users. These investments reflect OpenAI’s long-term vision of becoming a critical enabler in the AI-driven global economy.

Challenges and issues

However, challenges persist. The company faces mounting pressure from regulators concerned about data privacy and the ethical implications of deploying powerful AI tools. Critics also question the sustainability of OpenAI’s ambitious growth targets, given its significant operational costs and strong competition from other tech giants.

Despite these hurdles, OpenAI remains optimistic about its trajectory. With plans to unveil its expanded portfolio at the upcoming Global AI Summit, the company is well-positioned to strengthen its foothold in the burgeoning AI enterprise market.

(Editor’s note: This article is sponsored by AI Tools Network)

See also: OpenAI argues against ChatGPT data deletion in Indian court


The post OpenAI targets business sector with advanced AI tools appeared first on AI News.

OpenAI argues against ChatGPT data deletion in Indian court

Thu, 23 Jan 2025
OpenAI has argued in an Indian court that deleting the training data behind its ChatGPT service would clash with its legal obligations in the United States.

The statement was issued in response to a lawsuit filed by Indian news agency ANI, which accused the AI business of using its content without permission.

The Microsoft-backed AI giant stated that Indian courts lack jurisdiction in the case since OpenAI has no offices or operations in the country. In its January 10 filing to the Delhi High Court, OpenAI emphasised that it is already defending similar lawsuits in the US, where it is required to preserve its training data during ongoing litigation.

The case, filed by ANI in November, is one of India’s most closely watched lawsuits involving the use of AI. ANI alleges that OpenAI utilised its published content without authorisation to train ChatGPT and is demanding the deletion of its data from the company’s systems.

A global battle over copyright and AI

OpenAI is no stranger to such disputes, facing a wave of lawsuits from copyright holders worldwide. In the US, the New York Times filed a similar case against the company, accusing it of misusing its content. OpenAI has consistently denied such allegations, claiming its systems rely on the fair use of publicly available data.

During a November hearing in Delhi, OpenAI told the court it would no longer use ANI’s content. However, ANI argued that its previously published material remains stored in ChatGPT’s repositories and must be deleted.

In its rebuttal, OpenAI highlighted that it is legally obligated under US law to retain training data while related cases are pending. “The company is under a legal obligation, under the laws of the United States, to preserve, and not delete, the said training data,” OpenAI stated in its filing.

Jurisdiction dispute

OpenAI also argued that the relief ANI is seeking falls outside the jurisdiction of Indian courts. It pointed out that the company has “no office or permanent establishment in India,” and its servers, which store ChatGPT’s training data, are located outside the country.

ANI, which is partially owned by Reuters, countered the claim, saying the Delhi court has the authority to hear the case and that it will file a detailed response.

A Reuters spokesperson declined to comment on the proceedings, but said the agency has no involvement in ANI’s business operations.

Concerns over competition

ANI has also expressed concern about unfair competition, citing OpenAI’s partnerships with major news organisations such as Time magazine, The Financial Times, and France’s Le Monde. ANI says these agreements give OpenAI an unfair competitive advantage.

The agency further claimed that ChatGPT reproduces verbatim or similar excerpts of its works in response to user prompts. OpenAI, on the other hand, claimed that ANI deliberately used its own articles as prompts to “manipulate ChatGPT” to file the lawsuit.

The case is scheduled to be heard by the Delhi High Court on January 28. Meanwhile, OpenAI is transitioning from a non-profit to a for-profit company, having raised $6.6 billion last year.

In recent months, OpenAI has secured high-profile deals with media outlets from around the world, highlighting its efforts to strengthen its commercial partnerships while managing regulatory concerns worldwide.

(Photo by Unsplash)

See also: DeepSeek-R1 reasoning models rival OpenAI in performance 


The post OpenAI argues against ChatGPT data deletion in Indian court appeared first on AI News.
