xai Archives - AI News

OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival (10 April 2025)

OpenAI has launched a legal counteroffensive against one of its co-founders, Elon Musk, and his competing AI venture, xAI.

In court documents filed yesterday, OpenAI accuses Musk of orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the organisation years ago.

The court filing, submitted to the US District Court for the Northern District of California, alleges Musk could not tolerate OpenAI’s success after he had “abandoned and declared [it] doomed.”

OpenAI is now seeking legal remedies, including an injunction to stop Musk’s alleged “unlawful and unfair action” and compensation for damages already caused.   

Origin story of OpenAI and the departure of Elon Musk

The legal documents recount OpenAI’s origins in 2015, stemming from an idea discussed by current CEO Sam Altman and President Greg Brockman to create an AI lab focused on developing artificial general intelligence (AGI) – AI capable of outperforming humans – for the “benefit of all humanity.”

Musk was involved in the launch, serving on the initial non-profit board and pledging $1 billion in donations.   

However, the relationship fractured. OpenAI claims that between 2017 and 2018, Musk’s demands for “absolute control” of the enterprise – or its potential absorption into Tesla – were rebuffed by Altman, Brockman, and then-Chief Scientist Ilya Sutskever. The filing quotes Sutskever warning Musk against creating an “AGI dictatorship.”

Following this disagreement, OpenAI alleges Elon Musk quit in February 2018, declaring the venture would fail without him and that he would pursue AGI development at Tesla instead. Critically, OpenAI contends the pledged $1 billion “was never satisfied—not even close”.   

Restructuring, success, and Musk’s alleged ‘malicious’ campaign

Facing escalating costs for computing power and talent retention, OpenAI restructured and created a “capped-profit” entity in 2019 to attract investment while remaining controlled by the non-profit board and bound by its mission. This structure, OpenAI states, was announced publicly and Musk was offered equity in the new entity but declined and raised no objection at the time.   

OpenAI highlights that its subsequent breakthroughs – including GPT-3, ChatGPT, and GPT-4 – achieved massive public adoption and critical acclaim. These successes, OpenAI emphasises, came after the departure of Elon Musk and allegedly spurred his antagonism.

The filing details a chronology of alleged actions by Elon Musk aimed at harming OpenAI:   

  • Founding xAI: Musk “quietly created” his competitor, xAI, in March 2023.   
  • Moratorium call: Days later, Musk supported a call for a development moratorium on AI more advanced than GPT-4, a move OpenAI claims was intended “to stall OpenAI while all others, most notably Musk, caught up”.   
  • Records demand: Musk allegedly made a “pretextual demand” for confidential OpenAI documents, feigning concern while secretly building xAI.   
  • Public attacks: Using his social media platform X (formerly Twitter), Musk allegedly broadcast “press attacks” and “malicious campaigns” to his vast following, labelling OpenAI a “lie,” “evil,” and a “total scam”.   
  • Legal actions: Musk filed lawsuits, first in state court (later withdrawn) and then the current federal action, based on what OpenAI dismisses as meritless claims of a “Founding Agreement” breach.   
  • Regulatory pressure: Musk allegedly urged state Attorneys General to investigate OpenAI and force an asset auction.   
  • “Sham bid”: In February 2025, a Musk-led consortium made a purported $97.375 billion offer for OpenAI, Inc.’s assets. OpenAI derides this as a “sham bid” and a “stunt” lacking evidence of financing and designed purely to disrupt OpenAI’s operations, potential restructuring, fundraising, and relationships with investors and employees, particularly as OpenAI considers evolving its capped-profit arm into a Public Benefit Corporation (PBC). One investor involved allegedly admitted the bid’s aim was to gain “discovery”.   

Based on these allegations, OpenAI asserts two primary counterclaims against both Elon Musk and xAI:

  • Unfair competition: Alleging the “sham bid” constitutes an unfair and fraudulent business practice under California law, intended to disrupt OpenAI and gain an unfair advantage for xAI.   
  • Tortious interference with prospective economic advantage: Claiming the sham bid intentionally disrupted OpenAI’s existing and potential relationships with investors, employees, and customers. 

OpenAI argues Musk’s actions have forced it to divert resources and expend funds, causing harm. The company claims his campaign threatens “irreparable harm” to its mission, governance, and crucial business relationships. The filing also touches upon concerns regarding xAI’s own safety record, citing reports of its Grok AI generating harmful content and misinformation.

The counterclaims mark a dramatic escalation in the legal battle between the AI pioneer and its departed co-founder. While Elon Musk initially sued OpenAI alleging a betrayal of its founding non-profit, open-source principles, OpenAI now contends Musk’s actions are a self-serving attempt to undermine a competitor he couldn’t control.

With billions at stake and the future direction of AGI in the balance, this dispute is far from over.

See also: Deep Cogito open LLMs use IDA to outperform same size models

Grok 3: The next-gen ‘truth-seeking’ AI model (18 February 2025)

xAI unveiled its Grok 3 AI model on Monday, alongside new capabilities such as image analysis and refined question answering.

The company harnessed an immense data centre equipped with approximately 200,000 GPUs to develop Grok 3. According to xAI owner Elon Musk, this project utilised “10x” more computing power than its predecessor, Grok 2, with an expanded dataset that reportedly includes information from legal case filings.

Musk claimed that Grok 3 is a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically-correct.”

The Grok 3 rollout includes a family of models designed for different needs. Grok 3 mini, for example, prioritises faster response times over absolute accuracy. However, particularly noteworthy are the new reasoning-focused Grok 3 models.

Dubbed Grok 3 Reasoning and Grok 3 mini Reasoning, these variants aim to emulate human-like cognitive processes by “thinking through” problems. Comparable to models like OpenAI’s o3-mini and DeepSeek’s R1, these reasoning systems attempt to fact-check their responses—reducing the likelihood of errors or missteps.

Grok 3: The benchmark results

xAI asserts that Grok 3 surpasses OpenAI’s GPT-4o in certain benchmarks, including AIME and GPQA, which assess the model’s proficiency in tackling complex problems across mathematics, physics, biology, and chemistry.

The early version of Grok 3 is also currently leading on Chatbot Arena, a crowdsourced evaluation platform where users pit AI models against one another and rank their outputs. It is the first model to surpass a score of 1400 on the Arena leaderboard.

According to xAI, Grok 3 Reasoning outperforms its rivals on a variety of prominent benchmarks:

[Figure: Reasoning benchmark results of the Grok 3 AI model from xAI compared to other leading AI models from Google, DeepSeek, and OpenAI.]

These reasoning models are already integrated into features available via the Grok app. Users can select commands like “Think” or activate the more computationally-intensive “Big Brain” mode for tackling particularly challenging questions.

xAI has positioned the reasoning models as ideal tools for STEM applications, such as mathematics, science, and coding challenges.

Guarding against AI distillation

Interestingly, not all of Grok 3’s internal processes are laid bare to users. Musk explained that some of the reasoning models’ “thoughts” are intentionally obscured to prevent distillation—a controversial practice where competing AI developers extract knowledge from proprietary models.

The practice was thrust into the spotlight in recent weeks after Chinese AI firm DeepSeek faced allegations of distilling OpenAI’s models to develop its latest model, R1.
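
Distillation itself is a standard, well-documented training technique rather than anything unique to the companies named here. The snippet below is a minimal, generic sketch in PyTorch of the core idea: a smaller ‘student’ model is trained to match a larger ‘teacher’ model’s output distribution. The model sizes, temperature, and loss weighting are illustrative assumptions and do not describe xAI’s, OpenAI’s, or DeepSeek’s actual systems.

```python
# Minimal, generic knowledge-distillation sketch in PyTorch.
# The temperature and loss weighting are illustrative choices only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher) with ordinary cross-entropy (labels)."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student distributions,
    # scaled by T^2 as in the classic soft-label formulation.
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage with random tensors standing in for real model outputs.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

In practice, a rival distilling from a proprietary chat model accessed only through an API would typically train on its sampled text outputs rather than its logits, which is partly why providers obscure intermediate reasoning traces.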

xAI’s new reasoning models serve as the foundation for a new Grok app feature called DeepSearch. The feature uses Grok models to scan the internet and Musk’s social platform, X, for relevant information before synthesising a detailed abstract in answer to user queries.

Accessing Grok 3 and committing to open-source

Access to the latest Grok model is currently tied to X’s subscription tiers. Premium+ subscribers, who pay $50 (~£41) per month, will receive priority access to the latest functionalities. 

xAI is also introducing a SuperGrok subscription plan, reportedly priced at either $30 per month or $300 annually. SuperGrok subscribers will benefit from enhanced reasoning capabilities, more DeepSearch queries, and unlimited image generation features.

The company also teased upcoming features. Within a week, the Grok app is expected to introduce a voice mode—enabling users to interact with the AI through a synthesised voice similar to Gemini Live.

Musk further revealed plans to release Grok 3 models via an enterprise-ready API in the coming weeks, with DeepSearch functionality included.

Although Grok 3 is still fresh, xAI intends to open-source its predecessor in the coming months. Musk says the company will continue its practice of open-sourcing the previous version of Grok.

“When Grok 3 is mature and stable, which is probably within a few months, then we’ll open-source Grok 2,” explains Musk.

The ‘anti-woke’ AI model

Grok has long been marketed as unfiltered, bold, and willing to engage with queries that competitors might avoid. Musk previously described the AI as “anti-woke,” presenting it as a model unafraid to touch on controversial topics. 

True to its promise, early models like Grok and Grok 2 embraced politically-charged queries, even veering into colourful language when prompted. Yet, these versions also revealed some biases when delving deep into political discourse.

“We’re working to shift Grok closer to politically-neutral,” said Musk.

However, whether Grok 3 achieves this goal remains to be seen. With such changes at play, analysts are already highlighting the potential societal impacts of introducing increasingly “truth-seeking” yet politically-sensitive AI systems.

With Grok 3, Musk and xAI have made a bold statement, pushing their technology forward while potentially fuelling debates around bias, transparency, and the ethics of AI deployment.

As competitors like OpenAI, Google, and DeepSeek refine their offerings, Grok 3’s success will hinge on its ability to balance accuracy, user demand, and societal responsibility.

See also: AI in 2025: Purpose-driven models, human integration, and more

xAI breaks records with ‘Colossus’ AI training system (3 September 2024)

Elon Musk’s xAI has unveiled its record-breaking AI training system, dubbed ‘Colossus’.

Musk revealed that the xAI team had successfully brought the Colossus 100k H100 training cluster online after a 122-day process. Not content with its existing capabilities, Musk stated, “over the next couple of months, it will double in size, bringing it to 200k (50k H200s).”

The scale of Colossus is unprecedented, surpassing every other cluster to date. For context, Google uses 90,000 GPUs while OpenAI utilises 80,000 GPUs—both of which have been surpassed by xAI’s creation, even prior to Colossus’ doubling in size over the coming months.

Developed in partnership with Nvidia, Colossus leverages some of the most advanced GPU technology on the market. The system initially employs Nvidia’s H100 chips, with plans to incorporate the newer H200 model in its expansion. This vast array of processing power positions Colossus as the most formidable AI training system currently available.

The H200, while recently superseded by Nvidia’s Blackwell chip unveiled in March 2024, remains a highly sought-after component in the AI industry. It boasts impressive specifications, including 141 GB of HBM3E memory and 4.8 TB/sec of bandwidth. However, the Blackwell chip raises the bar even further, with top-end capacity 36.2% higher than the H200 and a 66.7% increase in total bandwidth.
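
For readers who want to sanity-check those figures, the short calculation below works backwards from the H200 numbers quoted above. The resulting Blackwell values are simply what the stated percentage gains imply; they are not specifications given in this article.

```python
# Back-of-the-envelope check of the percentage comparisons quoted above.
# The H200 figures (141 GB, 4.8 TB/s) come from the article; the Blackwell
# values below are only what those percentage gains imply.
h200_memory_gb = 141
h200_bandwidth_tbs = 4.8

blackwell_memory_gb = h200_memory_gb * (1 + 0.362)          # ~192 GB implied
blackwell_bandwidth_tbs = h200_bandwidth_tbs * (1 + 0.667)  # ~8.0 TB/s implied

print(f"Implied Blackwell memory:    {blackwell_memory_gb:.0f} GB")
print(f"Implied Blackwell bandwidth: {blackwell_bandwidth_tbs:.1f} TB/s")
```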

Nvidia’s response to the Colossus unveiling was one of enthusiasm and support. The company congratulated Musk and the xAI team on their achievement, highlighting that Colossus will not only be the most powerful system of its kind but will also deliver “exceptional gains” in energy efficiency.

Colossus’ processing power could potentially accelerate breakthroughs in various AI applications, from natural language processing to complex problem-solving algorithms. However, the unveiling of Colossus also reignites discussions about the concentration of AI power among a handful of tech giants and well-funded startups.

As companies like xAI push the boundaries of what’s possible in AI training, concerns about the accessibility of such advanced technologies to smaller organisations and researchers may come to the forefront.

As the AI arms race continues to heat up, all eyes will be on xAI and its competitors to see how they leverage these increasingly powerful systems. With Colossus, Musk and his team have thrown down the gauntlet and issued a challenge to rivals to match or exceed their efforts.

See also: Amazon partners with Anthropic to enhance Alexa

xAI unveils Grok-2 to challenge the AI hierarchy (14 August 2024)

xAI has announced the release of Grok-2, a major upgrade that boasts improved capabilities in chat, coding, and reasoning.

Alongside Grok-2, xAI has introduced Grok-2 mini, a smaller but capable version of the main model. Both are currently in beta on X and will be made available through xAI’s enterprise API later this month.

An early version of Grok-2 was tested on the LMSYS leaderboard under the pseudonym “sus-column-r”. 

At the time of the announcement, xAI claimed the model was outperforming both Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-4 Turbo. However, it’s worth noting that GPT-4o currently holds the top spot as the best AI assistant in terms of overall capabilities, followed by Google’s Gemini 1.5.

xAI’s internal evaluation process employs AI Tutors to assess the models across various real-world tasks. The company states that “Grok-2 has shown significant improvements in reasoning with retrieved content and in its tool use capabilities, such as correctly identifying missing information, reasoning through sequences of events, and discarding irrelevant posts”.

Benchmark results shared by xAI indicate that both Grok-2 and Grok-2 mini demonstrate substantial improvements over Grok-1.5. The models show competitive performance in areas such as graduate-level science knowledge, general knowledge, and maths competition problems. Notably, Grok-2 excels in vision-based tasks, delivering state-of-the-art performance in visual maths reasoning and document-based question answering.

The new Grok experience on X features a redesigned interface and new features. Premium and Premium+ subscribers will have access to both Grok-2 and Grok-2 mini. xAI describes Grok-2 as “more intuitive, steerable, and versatile across a wide range of tasks, whether you’re seeking answers, collaborating on writing, or solving coding tasks”.

xAI is also collaborating with Black Forest Labs to experiment with their FLUX.1 model to expand Grok’s capabilities on X.

For developers, xAI is launching an enterprise API platform later this month. The company promises enhanced security features, rich traffic statistics, and advanced billing analytics. A management API will also be available for integrating team, user, and billing management into existing tools and services.

Looking ahead, xAI plans to roll out multimodal understanding as a core part of the Grok experience on both X and the API. The company’s rapid progress since announcing Grok-1 in November 2023 is attributed to “a small team with the highest talent density”.

xAI’s focus remains on advancing core reasoning capabilities with its new compute cluster, as it aims to maintain its position at the forefront of AI development. However, the company recently agreed to halt the use of certain EU data for training its models.

While the release of Grok-2 marks a significant milestone for xAI, it’s clear that the AI landscape remains highly competitive. With GPT-4o and Google’s Gemini 1.5 leading the pack, and other major players like Anthropic continuing to make advancements, the race for AI supremacy is far from over.

See also: SingularityNET bets on supercomputer network to deliver AGI

Elon Musk’s xAI open-sources Grok (18 March 2024)

Elon Musk’s startup xAI has made its large language model Grok available as open source software. The 314 billion parameter model can now be freely accessed, modified, and distributed by anyone under an Apache 2.0 license.

The release fulfils Musk’s promise to open source Grok in an effort to accelerate AI development and adoption.

xAI announced the move in a blog post, stating: “We are releasing the base model weights and network architecture of Grok-1, our large language model. Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI.”
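
The quote above is as much architectural detail as xAI gives here, but the Mixture-of-Experts idea it mentions can be sketched generically: a small gating network routes each token to a handful of expert feed-forward networks, so only a fraction of the 314 billion parameters is active for any given token. The toy PyTorch layer below illustrates that routing pattern only; the dimensions, expert count, and top-k value are invented for the example and are not Grok-1’s actual configuration.

```python
# Toy Mixture-of-Experts layer in PyTorch, illustrating the routing idea.
# Dimensions, expert count, and top-k are invented; this is not xAI's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)   # the router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                             # x: (num_tokens, d_model)
        scores = self.gate(x)                         # (num_tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # normalise over chosen experts
        out = torch.zeros_like(x)
        # Each token is processed only by its top-k experts, and the results
        # are combined according to the router's weights.
        for slot in range(self.top_k):
            for idx, expert in enumerate(self.experts):
                mask = chosen[:, slot] == idx
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(ToyMoE()(tokens).shape)                         # torch.Size([10, 64])
```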

Grok had previously only been available through Musk’s social network X as part of the paid X Premium+ subscription. By open sourcing it, xAI has empowered developers, companies, and enthusiasts worldwide to leverage the advanced language model’s capabilities.

The model’s release includes its weights, which represent the strength of connections between its artificial neurons, as well as documentation and code. However, it omits the original training data and access to real-time data streams that gave the proprietary version an advantage.

Named after a term coined by Robert A. Heinlein in Stranger in a Strange Land, meaning to understand something deeply and intuitively, Grok has been positioned as a more open and humorous alternative to OpenAI’s ChatGPT. The move aligns with Musk’s campaign against censorship and what he describes as the “woke” ideology of models like Gemini, as well as his recent lawsuit claiming OpenAI violated its non-profit principles.

While xAI’s open source release earned praise from open source advocates, some critics raised concerns about potential misuse facilitated by unrestricted access to powerful AI capabilities.

You can find Grok-1 on GitHub here.

(Image Credit: xAI)

See also: Anthropic says Claude 3 Haiku is the fastest model in its class

OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’ (12 March 2024)

OpenAI has hit back at Elon Musk’s lawsuit, saying his claims rest on “convoluted — often incoherent — factual premises.”

Musk’s lawsuit accuses OpenAI of breaching its non-profit status and reneging on a founding agreement to keep the organisation non-profit and release its AI technology publicly. However, OpenAI has refuted these allegations, stating that there is no such agreement with Musk and branding it as a mere “fiction.”

According to court filings, OpenAI asserts that there is no existing agreement with Musk, contradicting his assertions in the lawsuit.

The organisation further alleges that Musk had actually supported the idea of transitioning OpenAI into a for-profit entity under his control. It is claimed that Musk advocated for full control of the company as CEO, majority equity ownership, and even suggested tethering it to Tesla for financial backing. However, negotiations between Musk and OpenAI did not culminate in an agreement, leading to Musk’s withdrawal from the project.

OpenAI’s rebuttal highlights purported emails exchanged between Musk and the organisation, indicating his prior knowledge and support for its transition to a for-profit model. The company suggests that Musk’s lawsuit is driven by his desire to claim credit for OpenAI’s successes after he disengaged from the project.

In response to Musk’s legal action, OpenAI has portrayed his motives as self-serving rather than altruistic, asserting that his lawsuit is a bid to further his own commercial interests under the guise of championing humanity’s cause.

Meanwhile, Musk’s own foray into the realm of artificial intelligence with his company xAI has drawn attention.

Musk announced xAI’s intention to open-source its Grok chatbot shortly after OpenAI published emails purportedly demonstrating that Musk had been aware the organisation did not intend to open-source its technology. While this move could be interpreted as a retaliatory gesture against OpenAI, it also presents an opportunity for xAI to garner feedback from developers and enhance its technology.

The legal clash between Musk and OpenAI underscores the complexities surrounding the development and governance of AI technologies, as well as the competing interests within the tech industry.

(Photo by Tim Mossholder on Unsplash)

See also: OpenAI announces new board lineup and governance structure

Justin Swansburg, DataRobot: On combining human and machine intelligence (4 October 2022)

Advancements in AI are providing transformational benefits to enterprises, but keeping risks in check and improving consumer sentiment is paramount.

Explainable AI (XAI) is the idea that an AI should always provide reasoning for its decisions in a way that makes it easy for humans to comprehend. XAI helps to build trust and ensures that issues can be more quickly identified before they cause wider damage.
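
DataRobot’s explainability tooling is proprietary, but the general idea behind XAI can be illustrated with open-source libraries such as SHAP, which attribute a model’s prediction to its input features. The sketch below is a generic example using scikit-learn and SHAP; it is not DataRobot code, and the dataset is chosen purely for illustration.

```python
# Generic explainable-AI sketch using scikit-learn and SHAP.
# Illustrates the concept only; this is not DataRobot's API.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])   # shape: (5, n_features)

# For each of the five samples, show which feature pushed the prediction
# up or down the most relative to the model's average output.
for i, attributions in enumerate(shap_values):
    name, value = max(zip(X.columns, attributions), key=lambda kv: abs(kv[1]))
    print(f"sample {i}: most influential feature = {name} ({value:+.2f})")
```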

AI News caught up with Justin Swansburg, VP of Americas Data Science Practice at DataRobot, to discuss how the company is driving AI adoption using concepts like XAI to combine the strengths of human and machine intelligence.

AI News: Can you give us a brief overview of DataRobot’s core solutions?

Justin Swansburg: DataRobot’s AI Cloud platform is uniquely built to democratise and accelerate the use of AI while delivering critical insights that drive clear business results. 

DataRobot helps organisations across industries harness the transformational power of AI, from restoring supply chain resiliency to accelerating the treatment and prevention of disease and enhancing patient care to combating the climate crisis.

As one of the most widely deployed and proven AI platforms in the market today, DataRobot AI Cloud brings together a broad range of data, giving businesses comprehensive insights to drive revenue growth, manage operations, and reduce risk.

DataRobot has delivered over 1.4 trillion predictions for customers around the world, including the U.S. Army, CBS Interactive, and CVS.

AN: What is “augmented intelligence” and how does it differ from artificial intelligence?

JS: Artificial intelligence and augmented intelligence share the same objective but have different ways of accomplishing it.

Augmented intelligence brings together qualities of human intuition and experience with the efficiency and power of machine learning, whereas artificial intelligence is often used as a replacement or substitute for human processes and decision-making.

AN: Do you need machine learning or programming experience to build predictive analytics with DataRobot?  

JS: DataRobot is a unified platform designed to democratise and accelerate the use of AI. This means that anyone in an organisation – with or without specialist knowledge of AI – can use DataRobot to build, deploy, and manage AI applications to transform their products, services, and operations.

AN: How does DataRobot support the idea of explainable AI and why is that important?

JS: DataRobot Explainable AI helps organisations understand the behaviour of models and gain confidence in their results. When AI is not transparent, it can be difficult to trust the system and translate insights and predictions into business outcomes.

With Explainable AI, users can easily understand the model inputs while bridging the gap between development and actionable results.

AN: DataRobot recently earned a coveted spot among Forrester’s leading AI/ML platforms – what makes you stand out from rivals?

JS: We’re very proud of this achievement. We believe that our innovative platform and customer loyalty sets us apart from competitors.

Over the last year, we’ve focused on improving our AI platform through new tooling and functionality, as well as several acquisitions.

Our main goal is to provide customers with the best possible technology to help solve their business problems and we’ve heard that our platform’s ease of use, model documentation, and explainability have been appreciated by customers. 

AN: Your report, AI and the Power of Perception, found that 72 percent of businesses are positively impacted by AI but consumer scepticism remains – how do you think that can be addressed?

JS: That’s a great question. While there is significant scepticism, we believe that this can be addressed with some form of increased regulatory guidance and education on the benefits of AI for both businesses and consumers.

We believe that increased training for businesses would help to demonstrate to consumers a commitment to higher standards. It would also give consumers more confidence that responsible data practices were being followed.

Other consumer concerns, like the potential of AI to replace jobs, will take longer to address. But, it is too early to make a call on the extent to which these concerns are warranted, overblown, or somewhere in between.

We’re interested to see how perceptions change over time and are hopeful that more and more people will start to realise the great benefits AI has to offer. 

Justin Swansburg and the DataRobot team will be sharing their invaluable insights at this year’s AI & Big Data Expo North America. You can find out more about Justin’s sessions here and be sure to swing by DataRobot’s booth at stand #176.

Democrats renew push for ‘algorithmic accountability’ (4 February 2022)

Democrats have reintroduced their Algorithmic Accountability Act that seeks to hold tech firms accountable for bias in their algorithms.

The bill is an updated version of one first introduced by Senator Ron Wyden (D-OR) in 2019 that never passed the House or Senate. The updated bill was introduced this week by Wyden alongside Senator Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY).

Concern about bias in algorithms is increasing as they become used for ever more critical decisions. Bias would lead to inequalities being automated—with some people being given more opportunities than others.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalised communities,” said Booker.

A human can always be held accountable for a decision to, say, reject a mortgage/loan application. There’s currently little-to-no accountability for algorithmic decisions.

Representative Yvette Clarke explained:

“When algorithms determine who goes to college, who gets healthcare, who gets a home, and even who goes to prison, algorithmic discrimination must be treated as the highly significant issue that it is.

These large and impactful decisions, which have become increasingly void of human input, are forming the foundation of our American society that generations to come will build upon. And yet, they are subject to a wide range of flaws from programming bias to faulty datasets that can reinforce broader societal discrimination, particularly against women and people of colour.

It is long past time Congress act to hold companies and software developers accountable for their discrimination by automation.

With our renewed Algorithmic Accountability Act, large companies will no longer be able to turn a blind eye towards the deleterious impact of their automated systems, intended or not. We must ensure that our 21st Century technologies become tools of empowerment, rather than marginalisation and seclusion.”

The bill would force audits of AI systems, with findings reported to the Federal Trade Commission. A public database would be created so decisions can be reviewed, giving confidence to consumers.
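
The bill does not prescribe any particular metric, but the kind of check an algorithmic audit might include can be illustrated with a minimal sketch: comparing a model’s approval rates across demographic groups. The data, column names, and threshold below are invented for the example.

```python
# Illustrative sketch of one check a bias audit might include:
# comparing approval rates across demographic groups.
# The data, column names, and threshold are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [ 1,   1,   0,   0,   0,   1,   0,   1 ],
})

rates = decisions.groupby("group")["approved"].mean()
disparity = rates.max() - rates.min()

print(rates.to_dict())                  # approval rate per group
print(f"approval-rate gap: {disparity:.2f}")
if disparity > 0.2:                     # illustrative threshold only
    print("flag for review: large gap between groups")
```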

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” commented Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

In our predictions for the AI industry in 2022, we predicted an increased focus on Explainable AI (XAI). XAI is artificial intelligence in which the results of the solution can be understood by humans and is seen as a partial solution to algorithmic bias.

“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Senator Tammy Baldwin (D-Wis), who is co-sponsoring the bill.

“It is long past time for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”

Joining Baldwin in co-sponsoring the Algorithmic Accountability Act are Senators Brian Schatz (D-Hawaii), Mazie Hirono (D-Hawaii), Ben Ray Luján (D-NM), Bob Casey (D-Pa), and Martin Heinrich (D-NM).

A copy of the full bill is available here (PDF)

(Photo by Darren Halstead on Unsplash)

Editorial: Our predictions for the AI industry in 2022 (23 December 2021)

The AI industry continued to thrive this year as companies sought ways to support business continuity through rapidly-changing situations. For those already invested, many are now doubling-down after reaping the benefits.

As we wrap up the year, it’s time to look ahead at what to expect from the AI industry in 2022.

Tackling bias

Our ‘Ethics & Society’ category got more use than most others this year, and with good reason. AI cannot thrive when it’s not trusted.

Biases are present in algorithms that are already causing harm. They’ve been the subject of many headlines, including a number of ours, and must be addressed for the public to have confidence in wider adoption.

Explainable AI (XAI) is a partial solution to the problem. XAI is artificial intelligence in which the results of the solution can be understood by humans.

Robert Penman, Associate Analyst at GlobalData, comments:

“2022 will see the further rollout of XAI, enabling companies to identify potential discrimination in their systems’ algorithms. It is essential that companies correct their models to mitigate bias in data. Organisations that drag their feet will face increasing scrutiny as AI continues to permeate our society, and people demand greater transparency. For example, in the Netherlands, the government’s use of AI to identify welfare fraud was found to violate European human rights.

Reducing human bias present in training datasets is a huge challenge in XAI implementation. Even tech giant Amazon had to scrap its in-development hiring tool because it was claimed to be biased against women.

Further, companies will be desperate to improve their XAI capabilities—the potential to avoid a PR disaster is reason enough.”

To that end, expect a large number of acquisitions of startups specialising in synthetic training data in 2022.

Smoother integration

Many companies don’t know how to get started on their AI journeys. Around 30 percent of enterprises plan to incorporate AI into their company within the next few years, but 91 percent foresee significant barriers and roadblocks.

If the confusion and anxiety that surrounds AI can be tackled, it will lead to much greater adoption.

Dr Max Versace, PhD, CEO and Co-Founder of Neurala, explains:

“Similar to what happened with the introduction of WordPress for websites in early 2000, platforms that resemble a ‘WordPress for AI’ will simplify building and maintaining AI models. 

In manufacturing for example, AI platforms will provide integration hooks, hardware flexibility, ease of use by non-experts, the ability to work with little data, and, crucially, a low-cost entry point to make this technology viable for a broad set of customers.”

AutoML platforms will thrive in 2022 and beyond.

From the cloud to the edge

The migration of AI from the cloud to the edge will accelerate in 2022.

Edge processing has a plethora of benefits over relying on cloud servers including speed, reliability, privacy, and lower costs.

Versace commented:

“Increasingly, companies are realising that the way to build a truly efficient AI algorithm is to train it on their own unique data, which might vary substantially over time. To do that effectively, the intelligence needs to directly interface with the sensors producing the data. 

From there, AI should run at a compute edge, and interface with cloud infrastructure only occasionally for backups and/or increased functionality. No critical process – for example,  in a manufacturing plant – should exclusively rely on cloud AI, exposing the manufacturing floor to connectivity/latency issues that could disrupt production.”

Expect more companies to realise the benefits of migrating from cloud to edge AI in 2022.

Doing more with less

Among the early concerns about the AI industry was that it would be dominated by “big tech” due to the gargantuan amounts of data those companies have collected.

However, innovative methods are now allowing algorithms to be trained with less information. Training using smaller but more unique datasets for each deployment could prove to be more effective.

We predict more startups will prove the world doesn’t have to rely on big tech in 2022.

Human-powered AI

While XAI systems will provide results which can be understood by humans, the decisions made by AIs will be more useful because they’ll be human-powered.

Varun Ganapathi, PhD, Co-Founder and CTO at AKASA, said:

“For AI to truly be useful and effective, a human has to be present to help push the work to the finish line. Without guidance, AI can’t be expected to succeed and achieve optimal productivity. This is a trend that will only continue to increase.

Ultimately, people will have machines report to them. In this world, humans will be the managers of staff – both other humans and AIs – that will need to be taught and trained to be able to do the tasks they’re needed to do.

Just like people, AI needs to constantly be learning to improve performance.”

Greater human input also helps to build wider trust in AI. Involving humans helps to counter narratives about AI replacing jobs and concerns that decisions about people’s lives could be made without human qualities such as empathy and compassion.

Expect human input to lead to more useful AI decisions in 2022.

Avoiding captivity

The telecoms industry is currently pursuing an innovation called Open RAN, which aims to help operators avoid being locked to specific vendors and help smaller competitors disrupt the relative monopoly held by a small number of companies.

Enterprises are looking to avoid being held in captivity by any AI vendor.

Doug Gilbert, CIO and Chief Digital Officer at Sutherland, explains:

“Early adopters of rudimentary enterprise AI embedded in ERP / CRM platforms are starting to feel trapped. In 2022, we’ll see organisations take steps to avoid AI lock-in. And for good reason. AI is extraordinarily complex.

When embedded in, say, an ERP system, control, transparency, and innovation is handed over to the vendor not the enterprise. AI shouldn’t be treated as a product or feature: it’s a set of capabilities. AI is also evolving rapidly, with new AI capabilities and continuously improved methods of training algorithms.

To get the most powerful results from AI, more enterprises will move toward a model of combining different AI capabilities to solve unique problems or achieve an outcome. That means they’ll be looking to spin up more advanced and customizable options and either deprioritising AI features in their enterprise platforms or winding down those expensive but basic AI features altogether.”

In 2022 and beyond, we predict enterprises will favour AI solutions that avoid lock-in.

Chatbots get smart

Hands up if you’ve ever screamed (internally or externally) that you just want to speak to a human when dealing with a chatbot—I certainly have, more often than I’d care to admit.

“Today’s chatbots have proven beneficial but have very limited capabilities. Natural language processing will start to be overtaken by neural voice software that provides near real time natural language understanding (NLU),” commented Gilbert.

“With the ability to achieve comprehensive understanding of more complex sentence structures, even emotional states, break down conversations into meaningful content, quickly perform keyword detection and named entity recognition, NLU will dramatically improve the accuracy and the experience of conversational AI.”

In theory, this will have two results:

  • Augmenting human assistance in real-time, such as suggesting responses based on behaviour or skill level.
  • Changing how a customer or client perceives they are being treated, with NLU delivering a more natural and positive experience.

In 2022, chatbots will get much closer to offering a human-like experience.
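
Named entity recognition, one of the NLU building blocks Gilbert mentions, is already well supported by open-source tooling. The sketch below uses spaCy as a generic illustration of extracting entities from a customer message; it is not tied to any particular vendor’s conversational AI stack, and the small English model it loads must be downloaded separately.

```python
# Generic named-entity-recognition sketch with spaCy. Requires:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

message = ("Hi, I'm Dana Smith. My order from London was due on 3 January "
           "but the tracking still says it is with DHL in Amsterdam.")

doc = nlp(message)
for ent in doc.ents:
    # ent.label_ is the entity type (PERSON, GPE, DATE, ORG, ...)
    print(f"{ent.text:>12}  ->  {ent.label_}")
```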

It’s not about size, it’s about the quality

A robust AI system requires two things: a functioning model and underlying data to train that model. Collecting huge amounts of data is a waste of time if it’s not of high quality and labelled correctly.

Gabriel Straub, Chief Data Scientist at Ocado Technology, said:

“Andrew Ng has been speaking about data-centric AI, about how improving the quality of your data can often lead to better outcomes than improving your algorithms (at least for the same amount of effort.)

So, how do you do this in practice? How do you make sure that you manage the quality of data at least as carefully as the quantity of data you collect?

There are two things that will make a big difference: 1) making sure that data consumers are always at the heart of your data thinking and 2) ensuring that data governance is a function that enables you to unlock the value in your data, safely, rather than one that focuses on locking down data.”
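
Taking data quality as seriously as data quantity often starts with unglamorous checks. The snippet below is a minimal, generic example of the sort of audit Straub is describing, using pandas on a hypothetical labelled dataset; the data and column names are invented for illustration.

```python
# Minimal data-quality audit sketch with pandas. The dataframe and column
# names ("text", "label") are hypothetical, purely for illustration.
import pandas as pd

df = pd.DataFrame({
    "text":  ["order late", "order late", "great service", None, "refund please"],
    "label": ["complaint",  "praise",     "praise",        "praise", "complaint"],
})

report = {
    "rows": len(df),
    "missing_values": int(df.isna().sum().sum()),
    "duplicate_rows": int(df.duplicated().sum()),
    # Identical inputs given different labels are a common annotation issue.
    "conflicting_labels": int(
        df.dropna().groupby("text")["label"].nunique().gt(1).sum()
    ),
}
print(report)
```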

Expect the AI industry to make the quality of data a priority in 2022.

(Photo by Michael Dziedzic on Unsplash)
