agi Archives - AI News

Coalition opposes OpenAI shift from nonprofit roots (Thu, 24 Apr 2025)

A coalition of experts, including former OpenAI employees, has voiced strong opposition to the company’s shift away from its nonprofit roots.

In an open letter addressed to the Attorneys General of California and Delaware, the group – which also includes legal experts, corporate governance specialists, AI researchers, and nonprofit representatives – argues that the proposed changes fundamentally threaten OpenAI’s original charitable mission.   

OpenAI was founded with a unique structure. Its core purpose, enshrined in its Articles of Incorporation, is “to ensure that artificial general intelligence benefits all of humanity” rather than serving “the private gain of any person.”

The letter’s signatories contend that the planned restructuring – transforming the current for-profit subsidiary (OpenAI-profit) controlled by the original nonprofit entity (OpenAI-nonprofit) into a Delaware public benefit corporation (PBC) – would dismantle crucial governance safeguards.

This shift, the signatories argue, would transfer ultimate control over the development and deployment of potentially transformative Artificial General Intelligence (AGI) from a charity focused on humanity’s benefit to a for-profit enterprise accountable to shareholders.

Original vision of OpenAI: Nonprofit control as a bulwark

OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work”. While acknowledging AGI’s potential to “elevate humanity,” OpenAI’s leadership has also warned of “serious risk of misuse, drastic accidents, and societal disruption.”

Co-founder Sam Altman and others have even signed statements equating mitigating AGI extinction risks with preventing pandemics and nuclear war.   

The company’s founders – including Altman, Elon Musk, and Greg Brockman – were initially concerned about AGI being developed by purely commercial entities like Google. They established OpenAI as a nonprofit specifically “unconstrained by a need to generate financial return”. As Altman stated in 2017, “The only people we want to be accountable to is humanity as a whole.”

Even when OpenAI introduced a “capped-profit” subsidiary in 2019 to attract necessary investment, it emphasised that the nonprofit parent would retain control and that the mission remained paramount. Key safeguards included:   

  • Nonprofit control: The for-profit subsidiary was explicitly “controlled by OpenAI Nonprofit’s board”.   
  • Capped profits: Investor returns were capped, with excess value flowing back to the nonprofit for humanity’s benefit.   
  • Independent board: A majority of nonprofit board members were required to be independent, holding no financial stake in the subsidiary.   
  • Fiduciary duty: The board’s legal duty was solely to the nonprofit’s mission, not to maximising investor profit.   
  • AGI ownership: AGI technologies were explicitly reserved for the nonprofit to govern.

Altman himself testified to Congress in 2023 that this “unusual structure” “ensures it remains focused on [its] long-term mission.”

A threat to the mission?

The critics argue the move to a PBC structure would jeopardise these safeguards:   

  • Subordination of mission: A PBC board – while able to consider public benefit – would also have duties to shareholders, potentially balancing profit against the mission rather than prioritising the mission above all else.   
  • Loss of enforceable duty: The current structure gives Attorneys General the power to enforce the nonprofit’s duty to the public. Under a PBC, this direct public accountability – enforceable by regulators – would likely vanish, leaving shareholder derivative suits as the primary enforcement mechanism.   
  • Uncapped profits?: Reports suggest the profit cap might be removed, potentially reallocating vast future wealth from the public benefit mission to private shareholders.   
  • Board independence uncertain: Commitments to a majority-independent board overseeing AI development could disappear.   
  • AGI control shifts: Ownership and control of AGI would likely default to the PBC and its investors, not the mission-focused nonprofit. Reports even suggest OpenAI and Microsoft have discussed removing contractual restrictions on Microsoft’s access to future AGI.   
  • Charter commitments at risk: Commitments like the “stop-and-assist” clause (pausing competition to help a safer, aligned AGI project) might not be honoured by a profit-driven entity.  

OpenAI has publicly cited competitive pressures (i.e. attracting investment and talent against rivals with conventional equity structures) as reasons for the change.

However, the letter counters that competitive advantage isn’t the charitable purpose of OpenAI and that its unique nonprofit structure was designed to impose certain competitive costs in favour of safety and public benefit. 

“Obtaining a competitive advantage by abandoning the very governance safeguards designed to ensure OpenAI remains true to its mission is unlikely to, on balance, advance the mission,” the letter states.   

The authors also question why OpenAI abandoning nonprofit control is necessary merely to simplify the capital structure, suggesting the core issue is the subordination of investor interests to the mission. They argue that while the nonprofit board can consider investor interests if it serves the mission, the restructuring appears aimed at allowing these interests to prevail at the expense of the mission.

Many of these arguments have also been pushed by Elon Musk in his legal action against OpenAI. Earlier this month, OpenAI counter-sued Musk for allegedly orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the company years ago and started rival AI firm xAI.

Call for intervention

The signatories of the open letter urge intervention, demanding answers from OpenAI about how the restructuring away from a nonprofit serves its mission and why safeguards previously deemed essential are now obstacles.

Furthermore, the signatories request a halt to the restructuring, preservation of nonprofit control and other safeguards, and measures to ensure the board’s independence and ability to oversee management effectively, in line with the charitable purpose.

“The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritise shareholder returns,” the signatories conclude.

See also: How does AI judge? Anthropic studies the values of Claude

OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival (Thu, 10 Apr 2025)

OpenAI has launched a legal counteroffensive against one of its co-founders, Elon Musk, and his competing AI venture, xAI.

In court documents filed yesterday, OpenAI accuses Musk of orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the organisation years ago.

The court filing, submitted to the US District Court for the Northern District of California, alleges Musk could not tolerate OpenAI’s success after he had “abandoned and declared [it] doomed.”

OpenAI is now seeking legal remedies, including an injunction to stop Musk’s alleged “unlawful and unfair action” and compensation for damages already caused.   

Origin story of OpenAI and the departure of Elon Musk

The legal documents recount OpenAI’s origins in 2015, stemming from an idea discussed by current CEO Sam Altman and President Greg Brockman to create an AI lab focused on developing artificial general intelligence (AGI) – AI capable of outperforming humans – for the “benefit of all humanity.”

Musk was involved in the launch, serving on the initial non-profit board and pledging $1 billion in donations.   

However, the relationship fractured. OpenAI claims that between 2017 and 2018, Musk’s demands for “absolute control” of the enterprise – or its potential absorption into Tesla – were rebuffed by Altman, Brockman, and then-Chief Scientist Ilya Sutskever. The filing quotes Sutskever warning Musk against creating an “AGI dictatorship.”

Following this disagreement, OpenAI alleges Elon Musk quit in February 2018, declaring the venture would fail without him and that he would pursue AGI development at Tesla instead. Critically, OpenAI contends the pledged $1 billion “was never satisfied—not even close”.   

Restructuring, success, and Musk’s alleged ‘malicious’ campaign

Facing escalating costs for computing power and talent retention, OpenAI restructured and created a “capped-profit” entity in 2019 to attract investment while remaining controlled by the non-profit board and bound by its mission. This structure, OpenAI states, was announced publicly and Musk was offered equity in the new entity but declined and raised no objection at the time.   

OpenAI highlights that its subsequent breakthroughs – including GPT-3, ChatGPT, and GPT-4 – achieved massive public adoption and critical acclaim. These successes, OpenAI emphasises, came after the departure of Elon Musk and allegedly spurred his antagonism.

The filing details a chronology of alleged actions by Elon Musk aimed at harming OpenAI:   

  • Founding xAI: Musk “quietly created” his competitor, xAI, in March 2023.   
  • Moratorium call: Days later, Musk supported a call for a development moratorium on AI more advanced than GPT-4, a move OpenAI claims was intended “to stall OpenAI while all others, most notably Musk, caught up”.   
  • Records demand: Musk allegedly made a “pretextual demand” for confidential OpenAI documents, feigning concern while secretly building xAI.   
  • Public attacks: Using his social media platform X (formerly Twitter), Musk allegedly broadcast “press attacks” and “malicious campaigns” to his vast following, labelling OpenAI a “lie,” “evil,” and a “total scam”.   
  • Legal actions: Musk filed lawsuits, first in state court (later withdrawn) and then the current federal action, based on what OpenAI dismisses as meritless claims of a “Founding Agreement” breach.   
  • Regulatory pressure: Musk allegedly urged state Attorneys General to investigate OpenAI and force an asset auction.   
  • “Sham bid”: In February 2025, a Musk-led consortium made a purported $97.375 billion offer for OpenAI, Inc.’s assets. OpenAI derides this as a “sham bid” and a “stunt” lacking evidence of financing and designed purely to disrupt OpenAI’s operations, potential restructuring, fundraising, and relationships with investors and employees, particularly as OpenAI considers evolving its capped-profit arm into a Public Benefit Corporation (PBC). One investor involved allegedly admitted the bid’s aim was to gain “discovery”.   

Based on these allegations, OpenAI asserts two primary counterclaims against both Elon Musk and xAI:

  • Unfair competition: Alleging the “sham bid” constitutes an unfair and fraudulent business practice under California law, intended to disrupt OpenAI and gain an unfair advantage for xAI.   
  • Tortious interference with prospective economic advantage: Claiming the sham bid intentionally disrupted OpenAI’s existing and potential relationships with investors, employees, and customers. 

OpenAI argues Musk’s actions have forced it to divert resources and expend funds, causing harm. They claim his campaign threatens “irreparable harm” to their mission, governance, and crucial business relationships. The filing also touches upon concerns regarding xAI’s own safety record, citing reports of its AI Grok generating harmful content and misinformation.

The counterclaims mark a dramatic escalation in the legal battle between the AI pioneer and its departed co-founder. While Elon Musk initially sued OpenAI alleging a betrayal of its founding non-profit, open-source principles, OpenAI now contends Musk’s actions are a self-serving attempt to undermine a competitor he couldn’t control.

With billions at stake and the future direction of AGI in the balance, this dispute is far from over.

See also: Deep Cogito open LLMs use IDA to outperform same size models

ARC Prize launches its toughest AI benchmark yet: ARC-AGI-2 (Tue, 25 Mar 2025)

ARC Prize has launched the hardcore ARC-AGI-2 benchmark, accompanied by the announcement of their 2025 competition with $1 million in prizes.

As AI progresses from performing narrow tasks to demonstrating general, adaptive intelligence, the ARC-AGI-2 challenges aim to uncover capability gaps and actively guide innovation.

“Good AGI benchmarks act as useful progress indicators. Better AGI benchmarks clearly discern capabilities. The best AGI benchmarks do all this and actively inspire research and guide innovation,” the ARC Prize team states.

ARC-AGI-2 is setting out to achieve the “best” category.

Beyond memorisation

Since the original ARC benchmark debuted in 2019, it has served as a “North Star” for researchers striving toward AGI, providing an enduring measure of progress.

Benchmarks like ARC-AGI-1 leaned into measuring fluid intelligence (i.e., the ability to adapt learning to new, unseen tasks), a clear departure from datasets that reward memorisation alone.

ARC Prize’s mission is also forward-thinking, aiming to accelerate timelines for scientific breakthroughs. Its benchmarks are designed not just to measure progress but to inspire new ideas.

Researchers observed a critical shift with the debut of OpenAI’s o3 in late 2024, evaluated using ARC-AGI-1. Combining deep learning-based large language models (LLMs) with reasoning synthesis engines, o3 marked a breakthrough where AI transitioned beyond rote memorisation.

Yet, despite progress, systems like o3 remain inefficient and require significant human oversight during training processes. To challenge these systems for true adaptability and efficiency, ARC Prize introduced ARC-AGI-2.

ARC-AGI-2: Closing the human-machine gap

The ARC-AGI-2 benchmark is tougher for AI yet retains its accessibility for humans. While frontier AI reasoning systems continue to score in single-digit percentages on ARC-AGI-2, humans can solve every task in under two attempts.

So, what sets ARC-AGI apart? Its design philosophy chooses tasks that are “relatively easy for humans, yet hard, or impossible, for AI.”

The benchmark includes datasets with varying visibility and the following characteristics:

  • Symbolic interpretation: AI struggles to assign semantic significance to symbols, instead focusing on shallow comparisons like symmetry checks.
  • Compositional reasoning: AI falters when it needs to apply multiple interacting rules simultaneously.
  • Contextual rule application: Systems fail to apply rules differently based on complex contexts, often fixating on surface-level patterns.

Most existing benchmarks focus on superhuman capabilities, testing advanced, specialised skills at scales unattainable for most individuals. 

ARC-AGI flips the script and highlights what AI can’t yet do: specifically, the adaptability that defines human intelligence. When the gap between tasks that are easy for humans but difficult for AI eventually reaches zero, AGI can be declared achieved.
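To give a flavour of what a characteristic like “compositional reasoning” means in practice, consider the toy example below. It is our own illustration rather than an actual ARC-AGI-2 task: the hidden rule composes two simple grid transformations, and a solver must infer the composition rather than pattern-match either step alone.

```python
def mirror(grid):
    """Reflect a grid horizontally (reverse each row)."""
    return [row[::-1] for row in grid]

def swap_colours(grid, a=1, b=2):
    """Swap two colours, leaving all other cells untouched."""
    return [[b if c == a else a if c == b else c for c in row] for row in grid]

def hidden_rule(grid):
    # The (hypothetical) task rule: mirror first, then recolour.
    return swap_colours(mirror(grid))

example_input = [[1, 0, 0],
                 [0, 2, 1]]
print(hidden_rule(example_input))  # [[0, 0, 2], [2, 1, 0]]
```

A human can usually infer such a composed rule from one or two input-output pairs; a system that fixates on surface-level patterns tends to capture each transformation only in isolation.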

However, achieving AGI isn’t limited to the ability to solve tasks; efficiency – the cost and resources required to find solutions – is emerging as a crucial defining factor.

The role of efficiency

Measuring performance by cost per task is essential to gauge intelligence as not just problem-solving capability but the ability to do so efficiently.

Real-world examples are already showing efficiency gaps between humans and frontier AI systems:

  • Human panel: Passes ARC-AGI-2 tasks with 100% accuracy at $17 per task.
  • OpenAI o3: Early estimates suggest a 4% success rate at an eye-watering $200 per task.

These metrics underline disparities in adaptability and resource consumption between humans and AI. ARC Prize has committed to reporting on efficiency alongside scores across future leaderboards.

The focus on efficiency prevents brute-force solutions from being considered “true intelligence.”

Intelligence, according to ARC Prize, encompasses finding solutions with minimal resources—a quality distinctly human but still elusive for AI.
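To make the efficiency argument concrete, here is a back-of-the-envelope comparison in Python using the figures above. The sketch is purely illustrative: the assumption that cost per solved task equals cost per attempt divided by success rate is ours, not ARC Prize’s official scoring method.

```python
def cost_per_solved_task(success_rate: float, cost_per_attempt: float) -> float:
    """Expected spend to obtain one correct solution, assuming
    independent attempts at a fixed per-attempt cost."""
    if success_rate <= 0:
        raise ValueError("success_rate must be positive")
    return cost_per_attempt / success_rate

# Figures reported above (early estimates):
human_panel = cost_per_solved_task(success_rate=1.00, cost_per_attempt=17.0)
openai_o3 = cost_per_solved_task(success_rate=0.04, cost_per_attempt=200.0)

print(f"Human panel: ${human_panel:,.0f} per solved task")  # $17
print(f"OpenAI o3:   ${openai_o3:,.0f} per solved task")    # $5,000
```

On these numbers, a single correct solution from o3 would cost around $5,000 against $17 for a human, which is precisely why brute-force spending is not treated as intelligence on the leaderboard.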

ARC Prize 2025

ARC Prize 2025 launches on Kaggle this week, promising $1 million in total prizes and showcasing a live leaderboard for open-source breakthroughs. The contest aims to drive progress toward systems that can efficiently tackle ARC-AGI-2 challenges. 

Among the prize categories, which have increased from 2024 totals, are:

  • Grand prize: $700,000 for reaching 85% success within Kaggle efficiency limits.
  • Top score prize: $75,000 for the highest-scoring submission.
  • Paper prize: $50,000 for transformative ideas contributing to solving ARC-AGI tasks.
  • Additional prizes: $175,000, with details pending announcements during the competition.

These incentives ensure fair and meaningful progress while fostering collaboration among researchers, labs, and independent teams.

Last year, ARC Prize 2024 attracted 1,500 competing teams and produced 40 papers of recognised industry influence. This year’s increased stakes aim to nurture even greater success.

ARC Prize believes progress hinges on novel ideas rather than merely scaling existing systems. The next breakthrough in efficient general systems might not originate from current tech giants but from bold, creative researchers embracing complexity and curious experimentation.

(Image credit: ARC Prize)

See also: DeepSeek V3-0324 tops non-reasoning AI models in open-source first

DeepSeek to open-source AGI research amid privacy concerns (Fri, 21 Feb 2025)

DeepSeek, a Chinese AI startup aiming for artificial general intelligence (AGI), announced plans to open-source five repositories starting next week as part of its commitment to transparency and community-driven innovation.

However, this development comes against the backdrop of mounting controversies that have drawn parallels to the TikTok saga.

Today, DeepSeek shared its intentions in a tweet that outlined its vision of open collaboration: “We’re a tiny team at DeepSeek exploring AGI. Starting next week, we’ll be open-sourcing five repos, sharing our small but sincere progress with full transparency.”

The repositories – which the company describes as “documented, deployed, and battle-tested in production” – include fundamental building blocks of DeepSeek’s online service.

By open-sourcing its tools, DeepSeek hopes to contribute to the broader AI research community.

“As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey. No ivory towers – just pure garage-energy and community-driven innovation,” the company said.

This philosophy has drawn praise for fostering collaboration in a field that often suffers from secrecy, but DeepSeek’s rapid rise has also raised eyebrows.

Despite being a small team with a mission rooted in transparency, the company has been under intense scrutiny amid allegations of data misuse and geopolitical entanglements.

Rising fast, under fire

Practically unknown until recently, DeepSeek burst onto the scene with a business model that stood in stark contrast to more established players like OpenAI and Google.

Offering its advanced AI capabilities for free, DeepSeek quickly gained global acclaim for its cutting-edge performance. However, its exponential rise has also sparked debates about the trade-offs between innovation and privacy.

US lawmakers are now pushing for a ban on DeepSeek after security researchers found the app transferring user data to a banned state-owned company.

A probe has also been launched by Microsoft and OpenAI over a breach of the latter’s systems by a group allegedly linked to DeepSeek.

Concerns about data collection and potential misuse have triggered comparisons to the controversies surrounding TikTok, another Chinese tech success story grappling with regulatory pushback in the West.

DeepSeek continues AGI innovation amid controversy

DeepSeek’s commitment to open-source its technology appears timed to deflect criticism and reassure sceptics about its intentions.

Open-sourcing has long been heralded as a way to democratise technology and increase transparency, and DeepSeek’s “daily unlocks”, which are set to begin soon, could offer the community reassuring insight into its operations.

Nevertheless, questions remain over how much of the technology will be open for scrutiny and whether the move is an attempt to shift the narrative amid growing political and regulatory pressure.

It’s unclear whether this balancing act will be enough to satisfy lawmakers or deter critics, but one thing is certain: DeepSeek’s open-source leap marks another turn in its dramatic rise.

While the company’s motto of “garage-energy and community-driven innovation” resonates with developers eager for open collaboration, its future may rest as much on its ability to address security concerns as on its technical prowess.

(Photo by Solen Feyissa)

See also: DeepSeek’s AI dominance expands from EVs to e-scooters in China

Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence (Mon, 06 Jan 2025)

Sam Altman, CEO and co-founder of OpenAI, has shared candid reflections on the company’s journey as it aims to achieve superintelligence.

With ChatGPT recently marking its second anniversary, Altman outlines OpenAI’s achievements, ongoing challenges, and vision for the future of AI.

“The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning,” Altman reflects.

A bold mission to achieve AGI and superintelligence

OpenAI was founded in 2015 with a clear, albeit bold, mission: to develop AGI and ensure it benefits all of humanity.

Altman and the founding team believed AGI could become “the most impactful technology in human history.” Yet, he recalls, the world wasn’t particularly interested in their quest back then.

“At the time, very few people cared, and if they did, it was mostly because they thought we had no chance of success,” Altman explains.

Fast forward to 2022, OpenAI was still a relatively quiet research facility testing what was then referred to as ‘Chat With GPT-3.5.’ Developers had been exploring the capabilities of its API, and the excitement sparked the idea of launching a user-ready demo.

This demo led to the creation of ChatGPT, which Altman acknowledges benefited from “mercifully” better branding than its initial name. When it launched on 30 November 2022, ChatGPT proved to be a tipping point.

“The launch of ChatGPT kicked off a growth curve like nothing we have ever seen—in our company, our industry, and the world broadly,” he says.

OpenAI has since witnessed an evolution marked by staggering interest, not just in its tools but in the broader possibilities of AI.

Building at breakneck speed  

Altman admits that scaling OpenAI into a global tech powerhouse came with significant challenges.

“In the last two years, we had to build an entire company, almost from scratch, around this new technology,” he notes, adding, “There is no way to train people for this except by doing it.”

Operating in uncharted waters, the OpenAI team often faced ambiguity—making decisions on the fly and dealing with the inevitable missteps.

“Building up a company at such high velocity with so little training is a messy process,” Altman explains. “It’s often two steps forward, one step back (and sometimes, one step forward and two steps back).”

Yet, despite the chaos, Altman credits the team’s resilience and ability to adapt.

OpenAI now boasts over 300 million weekly active users, a sharp increase from the 100 million reported just a year ago. Much of this success lies in the organisation’s ethos of learning by doing, combined with a commitment to putting “technology out into the world that people genuinely seem to love and that solves real problems.”

‘A big failure of governance’

Of course, the journey so far hasn’t been without turmoil. Altman recounts a particularly difficult chapter from November 2023 when he was suddenly ousted as CEO, briefly recruited by Microsoft, only to be reinstated by OpenAI days later amid industry backlash and staff protests.

Speaking openly, Altman highlights the need for better governance structures in organisations tackling critical technologies like AI.  

“The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included,” he admits. “Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago.”

The episode served as a stark reminder of the complexity of managing rapid growth and the stakes involved in AI development. It also drove OpenAI to forge new governance structures “that enable us to pursue our mission of ensuring that AGI benefits all of humanity.”

Altman expressed deep gratitude for the support OpenAI received during the crisis from employees, partners, and customers. “My biggest takeaway is how much I have to be thankful for and how many people I owe gratitude towards,” he emphasises.

Pivoting towards superintelligence  

Looking forward, Altman says OpenAI is beginning to aim beyond AGI towards the development of “superintelligence”—AI systems that far surpass human cognitive capabilities.

“We are now confident we know how to build AGI as we have traditionally understood it,” Altman shares. OpenAI predicts that AI agents will begin to “join the workforce” by the end of this year, revolutionising industries with smarter automation and companion systems.

Achieving superintelligence would be especially transformative for society, with the potential to accelerate scientific discoveries, but it would also pose the most significant dangers.

“We believe in the importance of being world leaders on safety and alignment research … OpenAI cannot be a normal company,” he notes, underscoring the need to approach innovation responsibly.

OpenAI’s strategy includes gradually introducing breakthroughs into the world, allowing for society to adapt alongside AI’s rapid evolution. “Iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes,” Altman argues.

Reflecting on the organisation’s trajectory, Altman admits OpenAI’s path has been defined by both extraordinary breakthroughs and significant challenges—from scaling teams to navigating public scrutiny. 

“Nine years ago, we really had no idea what we were eventually going to become; even now, we only sort of know,” he says.

What remains clear is his unwavering commitment to OpenAI’s vision. “Our vision won’t change; our tactics will continue to evolve,” Altman claims, attributing the company’s remarkable progress to the team’s willingness to rethink processes and embrace challenges.

As AI continues to reshape industries and daily life, Altman’s central message is evident: While the journey has been anything but smooth, OpenAI is steadfast in its mission to unlock the benefits of AI for all.

“How lucky and humbling it is to be able to play a role in this work,” Altman concludes.

See also: OpenAI funds $1 million study on AI and morality at Duke University

ASI Alliance launches AIRIS that ‘learns’ in Minecraft (Wed, 06 Nov 2024)

The ASI Alliance has introduced AIRIS (Autonomous Intelligent Reinforcement Inferred Symbolism) that “learns” within the popular game, Minecraft.

AIRIS represents the first proto-AGI (Artificial General Intelligence) to harness a comprehensive tech stack across the alliance.

Developed under SingularityNET – founded by renowned AI researcher Dr Ben Goertzel – AIRIS uses agent technology from Fetch.ai, incorporates Ocean Data for long-term memory capabilities, and is soon expected to integrate CUDOS Compute infrastructure for scalable processing power.

“AIRIS is a significant step in the direction of practical, scalable neural-symbolic learning, and – alongside its already powerful and valuable functionality – it illustrates several general points about neural-symbolic systems, such as their ability to learn precise generalisable conclusions from small amounts of data,” explains Goertzel.

According to the company, this alliance-driven approach propels AIRIS towards AGI, making it one of the first intelligent systems with autonomous, adaptive learning and practical applications for real-world scenarios.

AIRIS’ learning mechanisms

AIRIS is crafted to enhance its understanding by interacting directly with its environment, venturing beyond the traditional AI limitations that depend on predefined rules or vast datasets. Instead, AIRIS evolves through observation, experimentation, and continual refinement of its unique “rule set.”
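As a rough illustration of what observation-driven rule refinement can look like, here is a minimal sketch in Python. It is entirely hypothetical: the RuleSetAgent class, the one-dimensional world, and the update logic are invented for this example and are not AIRIS’s actual implementation.

```python
import random

class RuleSetAgent:
    """Toy agent that learns (state, action) -> next-state rules
    by observing the outcomes of its own actions."""

    def __init__(self, actions):
        self.actions = actions
        self.rules = {}  # (state, action) -> predicted next state

    def act(self, state):
        # Experiment: prefer actions whose outcome in this state is unknown.
        untried = [a for a in self.actions if (state, a) not in self.rules]
        return random.choice(untried) if untried else random.choice(self.actions)

    def refine(self, state, action, observed_next):
        # Create or correct a rule whenever observation contradicts prediction.
        if self.rules.get((state, action)) != observed_next:
            self.rules[(state, action)] = observed_next

# Hypothetical world: positions 0..4 on a line, with walls at both ends.
def step(position, action):
    return max(0, min(4, position + (1 if action == "right" else -1)))

agent = RuleSetAgent(actions=["left", "right"])
position = 2
for _ in range(50):
    action = agent.act(position)
    new_position = step(position, action)
    agent.refine(position, action, new_position)  # observe, compare, patch
    position = new_position

print(f"Learned {len(agent.rules)} rules")  # at most 10: 5 states x 2 actions
```

The loop structure – act, observe, compare the prediction with the outcome, and patch the rule set – also hints at the transparency its creators emphasise later in this piece: unlike a black-box policy, every learned rule can be inspected directly.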

This system facilitates a profound level of problem-solving and contextual comprehension, with its implementation in Minecraft setting a new benchmark for AI interaction with both digital and tangible landscapes.

Shifting from a controlled 2D grid to the sophisticated 3D world of Minecraft, AIRIS faced numerous challenges—including terrain navigation and adaptive problem-solving in a dynamic environment. This transition underscores AIRIS’ autonomy in navigation, exploration, and learning.

The AIRIS Minecraft Agent distinguishes itself from other AI entities through several key features:

  • Dynamic navigation: AIRIS initially evaluates its milieu to formulate movement strategies, adapting to new environments in real-time. Its capabilities include manoeuvring around obstacles, jumping over barriers, and anticipating reactions to varied terrains.
  • Obstacle adaptation: It learns to navigate around impediments like cliffs and forested areas, refining its rule set with every new challenge to avoid redundant errors and minimise needless trial-and-error efforts.
  • Efficient pathfinding: Via continuous optimisation, AIRIS advances from initially complex navigation paths to streamlined, direct routes as it “comprehends” Minecraft dynamics.
  • Real-time environmental adaptation: Contrasting with conventional reinforcement learning systems that demand extensive retraining for new environments, AIRIS adapts immediately to unfamiliar regions, crafting new rules based on partial observations dynamically.

AIRIS’ adeptness in dealing with fluctuating terrains, including water bodies and cave systems, introduces sophisticated rule refinement founded on hands-on experience. Additionally, AIRIS boasts optimised computational efficiency—enabling real-time management of complex rules without performance compromises.

Future applications

Minecraft serves as an excellent launchpad for AIRIS’ prospective applications, establishing a solid foundation for expansive implementations:

  • Enhanced object interaction: Forthcoming stages will empower AIRIS to engage more profoundly with its surroundings, improving capabilities in object manipulation, construction, and even crafting. This development will necessitate AIRIS to develop a more refined decision-making framework for contextual tasks.
  • Social AI collaboration: Plans are underway to incorporate AIRIS in multi-agent scenarios, where agents learn, interact, and fulfil shared objectives, simulating real-world social dynamics and problem-solving collaboratively.
  • Abstract and strategic reasoning: Expanded developments will enhance AIRIS’s reasoning, enabling it to tackle complex goals such as resource management and prioritisation, moving beyond basic navigation towards strategic gameplay.

The transition of AIRIS to 3D environments signifies a pivotal advancement in the ASI Alliance’s mission to cultivate AGI. Through AIRIS’s achievements in navigating and learning within Minecraft, the ASI Alliance aspires to expedite its deployment in the real world, pioneering applications for autonomous robots, intelligent home assistants, and other systems requiring adaptive learning and problem-solving capacities.

Berick Cook, AI Developer at SingularityNET and creator of AIRIS, said: “AIRIS is a whole new way of approaching the problem of machine learning. We are only just beginning to explore its capabilities. We are excited to see how we can apply it to problems that have posed a significant challenge for traditional reinforcement learning.

“The most important aspect of AIRIS to me is its transparency and explainability. Moving away from ‘Black Box’ AI represents a significant leap forward in the pursuit of safe, ethical, and beneficial AI.”

The innovative approach to AI evident in AIRIS – emphasising self-directed learning and continuous rule refinement – lays the foundation for AI systems capable of independent functioning in unpredictable real-world environments. Minecraft’s intricate ecosystem enables the system to hone its skills within a controlled yet expansive virtual setting, effectively bridging the divide between simulation and reality.

The AIRIS Minecraft Agent represents the inaugural tangible step towards an AI that learns from, adapts to and makes autonomous decisions about its environment. This accomplishment illustrates the potential of such technology to re-envision AI’s role across various industries.

(Image by SkyeWeste)

See also: SingularityNET bets on supercomputer network to deliver AGI

SingularityNET bets on supercomputer network to deliver AGI (Tue, 13 Aug 2024)

SingularityNET is betting on a network of powerful supercomputers to get us to Artificial General Intelligence (AGI), with the first one set to whir into action this September.

While today’s AI excels in specific areas – think GPT-4 composing poetry or DeepMind’s AlphaFold predicting protein structures – it’s still miles away from genuine human-like intelligence. 

“While the novel neural-symbolic AI approaches developed by the SingularityNET AI team decrease the need for data, processing and energy somewhat relative to standard deep neural nets, we still need significant supercomputing facilities,” SingularityNET CEO Ben Goertzel explained to LiveScience in a recent written statement.

Enter SingularityNET’s ambitious plan: a “multi-level cognitive computing network” designed to host and train the incredibly complex AI architectures required for AGI. Imagine deep neural networks that mimic the human brain, vast language models (LLMs) trained on colossal datasets, and systems that seamlessly weave together human behaviours like speech and movement with multimedia outputs.

But this level of sophistication doesn’t come cheap. The first supercomputer, slated for completion by early 2025, will be a Frankensteinian beast of cutting-edge hardware: Nvidia GPUs, AMD processors, Tenstorrent server racks – you name it, it’s in there.

This, Goertzel believes, is more than just a technological leap; it’s a philosophical one: “Before our eyes, a paradigmatic shift is taking place towards continuous learning, seamless generalisation, and reflexive AI self-modification.”

To manage this distributed network and its precious data, SingularityNET has developed OpenCog Hyperon, an open-source software framework specifically designed for AI systems. Think of it as the conductor trying to make sense of a symphony played across multiple concert halls. 

But SingularityNET isn’t keeping all this brainpower to itself. In a model reminiscent of arcade tokens, users will purchase access to the supercomputer network with the AGIX token on blockchains like Ethereum and Cardano, and contribute data to the collective pool—fuelling further AGI development.

With experts like DeepMind’s Shane Legg predicting human-level AI by 2028, the race is on. Only time will tell if this global network of silicon brains will birth the next great leap in artificial intelligence.

(Photo by Anshita Nair)

See also: The merging of AI and blockchain was inevitable – but what will it mean?

OpenResearch reveals potential impacts of universal basic income (Tue, 23 Jul 2024)

A study conducted by OpenResearch has shed light on the transformative potential of universal basic income (UBI). The research aimed to “learn from participants’ experiences and better understand both the potential and the limitations of unconditional cash transfers.”

The study – which provided participants with an extra $1,000 per month – revealed significant impacts across various aspects of recipients’ lives, including health, spending habits, employment, personal agency, and housing mobility.

In healthcare, the analysis showed increased utilisation of medical services, particularly in dental and specialist care.

One participant noted, “I got myself braces…I feel like people underestimate the importance of having nice teeth because it affects more than just your own sense of self, it affects how people look at you.”

While no immediate measurable effects on physical health were observed, researchers suggest that increased medical care utilisation could lead to long-term health benefits.

The study also uncovered interesting spending patterns among UBI recipients.

On average, participants increased their overall monthly spending by $310, with significant allocations towards basic needs such as food, transportation, and rent. Notably, there was a 26% increase in financial support provided to others, highlighting the ripple effect of UBI on communities.

In terms of employment, the study revealed nuanced outcomes.

While there was a slight decrease in overall employment rates and work hours among recipients, the study found that UBI provided individuals with greater flexibility in making employment decisions aligned with their circumstances and goals.

One participant explained, “Because of that money and being able to build up my savings, I’m in a position for once to be picky…I don’t have to take a crappy job just because I need income right now.”

The research also uncovered significant improvements in personal agency and future planning. 

UBI recipients were 14% more likely to pursue education or job training and 5% more likely to have a budget compared to the control group. Black recipients in the third year of the programme were 26% more likely to report starting or helping to start a business.

Lastly, the study’s analysis revealed increased housing mobility among UBI recipients. Participants were 11% more likely to move neighbourhoods and 23% more likely to actively search for new housing compared to the control group.

The study provides valuable insights into the potential impacts of UBI, offering policymakers and researchers a data-driven foundation for future decisions on social welfare programmes. This major societal conversation may become necessary if worst-case scenarios around AI-induced job displacement come to fruition.

(Photo by Freddie Collins on Unsplash)

See also: AI could unleash £119 billion in UK productivity

SoftBank chief: Forget AGI, ASI will be here within 10 years (Mon, 24 Jun 2024)

SoftBank founder and CEO Masayoshi Son has claimed that artificial super intelligence (ASI) could be a reality within the next decade.

Speaking at SoftBank’s annual meeting in Tokyo on June 21, Son painted a picture of a future where AI far surpasses human intelligence, potentially revolutionising life as we know it. Son asserted that by 2030, AI could be “one to 10 times smarter than humans,” and by 2035, it might reach a staggering “10,000 times smarter” than human intelligence.

SoftBank’s CEO made a clear distinction between artificial general intelligence (AGI) and ASI. According to Son, AGI would be equivalent to a human “genius,” potentially up to 10 times more capable than an average person. ASI, however, would be in a league of its own, with capabilities 10,000 times beyond human potential.

Son’s predictions align with the goals of Safe Superintelligence Inc. (SSI), founded by Ilya Sutskever, former chief scientist at OpenAI, along with Daniel Levy and Daniel Gross. SSI’s mission, as stated on their website, is to “approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.”

The timing of these announcements underscores the growing focus on superintelligent AI within the tech industry. While SoftBank appears to be prioritising the development of ASI, SSI is emphasising the importance of safety in this pursuit. As stated by SSI’s founders, “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

It’s worth noting that the scientific community has yet to reach a consensus on the feasibility or capabilities of AGI or ASI. Current AI systems, while impressive in specific domains, are still far from achieving human-level reasoning across all areas.

Son’s speech took an unexpectedly personal turn when he linked the development of ASI to his own sense of purpose and mortality. “SoftBank was founded for what purpose? For what purpose was Masayoshi Son born? It may sound strange, but I think I was born to realise ASI. I am super serious about it,” he declared.

Son’s predictions and SoftBank’s apparent pivot towards ASI development, coupled with the formation of SSI, raise important questions about the future of AI and its potential impact on society. While the promise of superintelligent AI is enticing, it also brings concerns about job displacement, ethical considerations, and the potential risks associated with creating an intelligence that far surpasses our own.

Whether Son’s vision of ASI within a decade proves prescient or overly optimistic remains to be seen, but one thing is certain: the race towards superintelligent AI is heating up, with major players positioning themselves at the forefront.

See also: Anthropic’s Claude 3.5 Sonnet beats GPT-4o in most benchmarks

OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’ (Thu, 20 Jun 2024)

Ilya Sutskever, former chief scientist at OpenAI, has revealed his next major project after departing the AI research company he co-founded in May.

Alongside fellow OpenAI alumnus Daniel Levy and Apple’s former AI lead Daniel Gross, the trio has formed Safe Superintelligence Inc. (SSI), a startup solely focused on building safe superintelligent systems.

The formation of SSI follows the brief November 2023 ousting of OpenAI’s CEO Sam Altman, in which Sutskever played a central role before later expressing regret over the situation.

In a message on SSI’s website, the founders state:

“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. 

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Sutskever’s work at SSI represents a continuation of his efforts at OpenAI, where he was part of the superalignment team tasked with designing control methods for powerful new AI systems. However, that group was disbanded following Sutskever’s high-profile departure.

According to SSI, it will pursue safe superintelligence in “a straight shot, with one focus, one goal, and one product.” This singular focus stands in contrast to the diversification seen at major AI labs like OpenAI, DeepMind, and Anthropic over recent years.

Only time will tell if Sutskever’s team can make substantive progress toward their lofty goal of safe superintelligent AI. Critics argue the challenge represents a matter of philosophy as much as engineering. However, the pedigree of SSI’s founders means their efforts will be followed with great interest.

In the meantime, expect to see a resurgence of the “What did Ilya see?” meme.

See also: Meta unveils five AI models for multi-modal processing, music generation, and more

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’ appeared first on AI News.

Elon Musk sues OpenAI over alleged breach of nonprofit agreement (Fri, 01 Mar 2024)
https://www.artificialintelligence-news.com/news/elon-musk-sues-openai-alleged-breach-nonprofit-agreement/
Elon Musk has filed a lawsuit against OpenAI and its CEO, Sam Altman, citing a violation of their nonprofit agreement.

The legal battle, unfolding in the Superior Court of California for the County of San Francisco, revolves around OpenAI’s departure from its foundational mission of advancing open-source artificial general intelligence (AGI) for the betterment of humanity.

Musk was a co-founder and early backer of OpenAI. According to Musk, Altman and Greg Brockman (another co-founder and current president of OpenAI) convinced him to bankroll the startup in 2015 on promises that it would remain a nonprofit.

In his legal challenge, Musk accuses OpenAI of straying from its principles through a collaboration with Microsoft—alleging that the partnership prioritises proprietary technology over the original ethos of open-source advancement.

Musk’s grievances include claims of contract breach, violation of fiduciary duty, and unfair business practices. He calls upon OpenAI to realign with its nonprofit objectives and seeks an injunction to halt the commercial exploitation of AGI technology.

At the heart of the dispute is OpenAI’s recent launch of GPT-4 in March 2023. Musk contends that unlike its predecessors, GPT-4 represents a shift towards closed-source models—a move he believes favours Microsoft’s financial interests at the expense of OpenAI’s altruistic mission.

Founded in 2015 as a nonprofit AI research lab, OpenAI introduced a “capped-profit” subsidiary in 2019 and has since adopted a profit-driven approach, with revenues reportedly surpassing $2 billion annually.

Musk, who has long voiced concerns about the risks posed by AI, has called for robust government regulation and responsible AI development. He questions the technical expertise of OpenAI’s current board and highlights the removal and subsequent reinstatement of Altman in November 2023 as evidence of a profit-oriented agenda aligned with Microsoft’s interests.

See also: Mistral AI unveils LLM rivalling major players

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Elon Musk sues OpenAI over alleged breach of nonprofit agreement appeared first on AI News.

DeepMind framework offers breakthrough in LLMs’ reasoning (Thu, 08 Feb 2024)
https://www.artificialintelligence-news.com/news/deepmind-framework-offers-breakthrough-llm-reasoning/
A breakthrough approach to enhancing the reasoning abilities of large language models (LLMs) has been unveiled by researchers from Google DeepMind and the University of Southern California.

Their new ‘SELF-DISCOVER’ prompting framework – published this week on arXiv and Hugging Face – represents a significant leap beyond existing techniques, potentially revolutionising the performance of leading models such as OpenAI’s GPT-4 and Google’s PaLM 2.

The framework promises substantial gains on challenging reasoning tasks, demonstrating up to a 32% performance increase over traditional methods such as Chain of Thought (CoT) prompting. The approach revolves around LLMs autonomously uncovering task-intrinsic reasoning structures to navigate complex problems.

At its core, the framework empowers LLMs to self-discover and utilise various atomic reasoning modules – such as critical thinking and step-by-step analysis – to construct explicit reasoning structures.

By mimicking human problem-solving strategies, the framework operates in two stages, illustrated in the code sketch below:

  • Stage one involves composing a coherent reasoning structure intrinsic to the task, leveraging a set of atomic reasoning modules and task examples.
  • During decoding, LLMs then follow this self-discovered structure to arrive at the final solution.
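
To make the two stages concrete, here is a minimal Python sketch of the flow. The llm() helper stands in for any text-generation model call, and the module list and prompt wording are illustrative assumptions rather than the paper’s exact modules or prompts.

```python
# Minimal sketch of the two-stage SELF-DISCOVER flow (illustrative only).
# Assumptions: llm() is a placeholder for any text-generation model call;
# the module names and prompt wording below are not the paper's exact ones.

ATOMIC_MODULES = [
    "critical thinking",
    "step-by-step analysis",
    "break the problem into smaller sub-problems",
]


def llm(prompt: str) -> str:
    """Stand-in for a call to whichever model you use (assumed interface)."""
    raise NotImplementedError("plug in a model call here")


def compose_reasoning_structure(task_examples: list[str]) -> str:
    """Stage one: self-discover a reasoning structure intrinsic to the task."""
    prompt = (
        "Here are example tasks:\n"
        + "\n".join(task_examples)
        + "\n\nFrom the following atomic reasoning modules, select the most "
        + "relevant ones, adapt them to these tasks, and write an explicit "
        + "step-by-step reasoning structure for solving tasks of this kind:\n"
        + ", ".join(ATOMIC_MODULES)
    )
    return llm(prompt)


def solve(task: str, structure: str) -> str:
    """Stage two: follow the self-discovered structure during decoding."""
    prompt = (
        f"Task: {task}\n\n"
        "Follow this reasoning structure step by step, then state the "
        f"final answer:\n{structure}"
    )
    return llm(prompt)
```

Because the structure is composed once per task and then reused across instances at decoding time, stage one adds only a small, amortised overhead.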

In extensive testing across various reasoning tasks – including Big-Bench Hard, Thinking for Doing, and Math – the self-discover approach consistently outperformed traditional methods. Notably, it achieved an accuracy of 81%, 85%, and 73% across the three tasks with GPT-4, surpassing chain-of-thought and plan-and-solve techniques.

However, the implications of this research extend far beyond mere performance gains.

By equipping LLMs with enhanced reasoning capabilities, the framework paves the way for tackling more challenging problems and brings AI closer to achieving general intelligence. Transferability studies conducted by the researchers further highlight the universal applicability of the composed reasoning structures, aligning with human reasoning patterns.

As the landscape evolves, breakthroughs like the SELF-DISCOVER prompting framework represent crucial milestones in advancing the capabilities of language models and offering a glimpse into the future of AI.

(Photo by Victor on Unsplash)

See also: The UK is outpacing the US for AI hiring

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepMind framework offers breakthrough in LLMs’ reasoning appeared first on AI News.
