robotics Archives - AI News

Meta FAIR advances human-like AI with five major releases (17 April 2025)

The Fundamental AI Research (FAIR) team at Meta has announced five projects advancing the company’s pursuit of advanced machine intelligence (AMI).

The latest releases from Meta focus heavily on enhancing AI perception – the ability for machines to process and interpret sensory information – alongside advancements in language modelling, robotics, and collaborative AI agents.

Meta stated its goal involves creating machines “that are able to acquire, process, and interpret sensory information about the world around us and are able to use this information to make decisions with human-like intelligence and speed.”

The five new releases represent diverse but interconnected efforts towards achieving this ambitious goal.

Perception Encoder: Meta sharpens the ‘vision’ of AI

Central to the new releases is the Perception Encoder, described as a large-scale vision encoder designed to excel across various image and video tasks.

Vision encoders function as the “eyes” for AI systems, allowing them to understand visual data.

Meta highlights the increasing challenge of building encoders that meet the demands of advanced AI, requiring capabilities that bridge vision and language, handle both images and videos effectively, and remain robust under challenging conditions, including potential adversarial attacks.

The ideal encoder, according to Meta, should recognise a wide array of concepts while distinguishing subtle details—citing examples like spotting “a stingray burrowed under the sea floor, identifying a tiny goldfinch in the background of an image, or catching a scampering agouti on a night vision wildlife camera.”

Meta claims the Perception Encoder achieves “exceptional performance on image and video zero-shot classification and retrieval, surpassing all existing open source and proprietary models for such tasks.”
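
For readers unfamiliar with the task, zero-shot classification with a shared image–text embedding space can be sketched in a few lines. The stub encoders below are hypothetical stand-ins for a paired vision and text encoder such as the Perception Encoder – not Meta’s actual API – and only the nearest-label logic is the real technique.

```python
import zlib
import numpy as np

def _embed(key: str) -> np.ndarray:
    # Stub encoder: a deterministic pseudo-embedding derived from the input.
    rng = np.random.default_rng(zlib.crc32(key.encode()))
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

def encode_image(image_path: str) -> np.ndarray:
    return _embed(image_path)   # stand-in for the vision encoder

def encode_text(label: str) -> np.ndarray:
    return _embed(label)        # stand-in for the paired text encoder

def zero_shot_classify(image_path: str, labels: list[str]) -> str:
    # No task-specific training: pick the label whose text embedding is
    # most similar (dot product of unit vectors) to the image embedding.
    img = encode_image(image_path)
    return max(labels, key=lambda label: float(img @ encode_text(label)))

print(zero_shot_classify("photo.jpg", ["goldfinch", "stingray", "agouti"]))
```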

Furthermore, its perceptual strengths reportedly translate well to language tasks. 

When aligned with a large language model (LLM), the encoder is said to outperform other vision encoders in areas like visual question answering (VQA), captioning, document understanding, and grounding (linking text to specific image regions). It also reportedly boosts performance on tasks traditionally difficult for LLMs, such as understanding spatial relationships (e.g., “if one object is behind another”) or camera movement relative to an object.

“As Perception Encoder begins to be integrated into new applications, we’re excited to see how its advanced vision capabilities will enable even more capable AI systems,” Meta said.

Perception Language Model (PLM): Open research in vision-language

Complementing the encoder is the Perception Language Model (PLM), an open and reproducible vision-language model aimed at complex visual recognition tasks. 

PLM was trained using large-scale synthetic data combined with open vision-language datasets, explicitly without distilling knowledge from external proprietary models.

Recognising gaps in existing video understanding data, the FAIR team collected 2.5 million new, human-labelled samples focused on fine-grained video question answering and spatio-temporal captioning. Meta claims this forms the “largest dataset of its kind to date.”

PLM is offered in 1, 3, and 8 billion parameter versions, catering to academic research that requires transparent, reproducible models.

Alongside the models, Meta is releasing PLM-VideoBench, a new benchmark specifically designed to test capabilities often missed by existing benchmarks, namely “fine-grained activity understanding and spatiotemporally grounded reasoning.”

Meta hopes the combination of open models, the large dataset, and the challenging benchmark will empower the open-source community.

Meta Locate 3D: Giving robots situational awareness

Bridging the gap between language commands and physical action is Meta Locate 3D. This end-to-end model aims to allow robots to accurately localise objects in a 3D environment based on open-vocabulary natural language queries.

Meta Locate 3D processes 3D point clouds directly from RGB-D sensors (like those found on some robots or depth-sensing cameras). Given a textual prompt, such as “flower vase near TV console,” the system considers spatial relationships and context to pinpoint the correct object instance, distinguishing it from, say, a “vase on the table.”

The system comprises three main parts: a preprocessing step converting 2D features to 3D featurised point clouds; the 3D-JEPA encoder (a pretrained model creating a contextualised 3D world representation); and the Locate 3D decoder, which takes the 3D representation and the language query to output bounding boxes and masks for the specified objects.
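
Read as a data flow, those three stages can be sketched in Python. Every function below is a hypothetical stub standing in for the real component – this is not Meta’s released API – so only the staging mirrors the description above.

```python
def lift_to_3d(rgbd_frames):
    """Stage 1: turn 2D image features into a featurised 3D point cloud."""
    return [{"xyz": (0.0, 0.0, 0.0), "feature": [0.1, 0.2]} for _ in rgbd_frames]

def jepa_encode(points):
    """Stage 2 (3D-JEPA): build a contextualised representation of the scene."""
    return {"points": points, "context": "scene-level summary"}

def locate_decode(world, query):
    """Stage 3: emit bounding boxes and masks for objects matching the query."""
    return [{"query": query,
             "box": (0.0, 0.0, 0.0, 1.0, 1.0, 1.0),  # (x, y, z, w, h, d)
             "mask": [True] * len(world["points"])}]

def locate_object(rgbd_frames, query):
    # End-to-end: RGB-D frames plus an open-vocabulary query in, instances out.
    return locate_decode(jepa_encode(lift_to_3d(rgbd_frames)), query)

print(locate_object([b"frame-0", b"frame-1"], "flower vase near TV console"))
```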

Alongside the model, Meta is releasing a substantial new dataset for object localisation based on referring expressions. It includes 130,000 language annotations across 1,346 scenes from the ARKitScenes, ScanNet, and ScanNet++ datasets, effectively doubling existing annotated data in this area.

Meta sees this technology as crucial for developing more capable robotic systems, including its own PARTNR robot project, enabling more natural human-robot interaction and collaboration.

Dynamic Byte Latent Transformer: Efficient and robust language modelling

Following research published in late 2024, Meta is now releasing the model weights for its 8-billion parameter Dynamic Byte Latent Transformer.

This architecture represents a shift away from traditional tokenisation-based language models, operating instead at the byte level. Meta claims this approach achieves comparable performance at scale while offering significant improvements in inference efficiency and robustness.

Traditional LLMs break text into ‘tokens’, which can struggle with misspellings, novel words, or adversarial inputs. Byte-level models process raw bytes, potentially offering greater resilience.
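
A toy comparison illustrates the difference. The byte-level view below degrades gracefully under a misspelling, while a naive word-level tokeniser loses the entire word to an unknown token; this illustrates the general byte-versus-token trade-off rather than the Dynamic Byte Latent Transformer itself.

```python
text, typo = "language model", "lanugage model"

# Byte-level view: the misspelling differs only in two transposed bytes.
print(list(text.encode("utf-8")))
print(list(typo.encode("utf-8")))

# Naive word-level tokeniser with a fixed vocabulary: the misspelled
# word falls entirely out of vocabulary and becomes an unknown token.
vocab = {"language": 0, "model": 1}
print([vocab.get(word, "<unk>") for word in typo.split()])  # ['<unk>', 1]
```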

Meta reports that the Dynamic Byte Latent Transformer “outperforms tokeniser-based models across various tasks, with an average robustness advantage of +7 points (on perturbed HellaSwag), and reaching as high as +55 points on tasks from the CUTE token-understanding benchmark.”

By releasing the weights alongside the previously shared codebase, Meta encourages the research community to explore this alternative approach to language modelling.

Collaborative Reasoner: Meta advances socially-intelligent AI agents

The final release, Collaborative Reasoner, tackles the complex challenge of creating AI agents that can effectively collaborate with humans or other AIs.

Meta notes that human collaboration often yields superior results, and aims to imbue AI with similar capabilities for tasks like helping with homework or job interview preparation.

Such collaboration requires not just problem-solving but also social skills like communication, empathy, providing feedback, and understanding others’ mental states (theory-of-mind), often unfolding over multiple conversational turns.

Current LLM training and evaluation methods often neglect these social and collaborative aspects. Furthermore, collecting relevant conversational data is expensive and difficult.

Collaborative Reasoner provides a framework to evaluate and enhance these skills. It includes goal-oriented tasks requiring multi-step reasoning achieved through conversation between two agents. The framework tests abilities like disagreeing constructively, persuading a partner, and reaching a shared best solution.

Meta’s evaluations revealed that current models struggle to consistently leverage collaboration for better outcomes. To address this, they propose a self-improvement technique using synthetic interaction data where an LLM agent collaborates with itself.
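
The self-improvement recipe can be sketched as a loop in which one model plays both sides of a goal-oriented conversation and the transcript is kept as synthetic training data. The `generate` function below is a hypothetical placeholder for any LLM call; this is the shape of the idea, not Meta’s released pipeline.

```python
def generate(role: str, transcript: list[str]) -> str:
    # Placeholder: a real system would prompt an LLM with the transcript here.
    return f"{role}: proposal after {len(transcript)} prior turns"

def self_collaborate(task: str, turns: int = 4) -> list[str]:
    # One model alternates between two personas, disagreeing, persuading,
    # and (ideally) converging on a shared answer over multiple turns.
    transcript = [f"task: {task}"]
    for i in range(turns):
        role = "agent_a" if i % 2 == 0 else "agent_b"
        transcript.append(generate(role, transcript))
    return transcript

# Transcripts whose final answer passes verification become training data.
for turn in self_collaborate("plan a revision schedule for an exam"):
    print(turn)
```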

Generating this data at scale is enabled by a new high-performance model serving engine called Matrix. Using this approach on maths, scientific, and social reasoning tasks reportedly yielded improvements of up to 29.4% compared to the standard ‘chain-of-thought’ performance of a single LLM.

By open-sourcing the data generation and modelling pipeline, Meta aims to foster further research into creating truly “social agents that can partner with humans and other agents.”

These five releases collectively underscore Meta’s continued heavy investment in fundamental AI research, particularly focusing on building blocks for machines that can perceive, understand, and interact with the world in more human-like ways. 

See also: Meta will train AI models using EU user data

NVIDIA advances AI frontiers with CES 2025 announcements (7 January 2025)

NVIDIA CEO and founder Jensen Huang took the stage for a keynote at CES 2025 to outline the company’s vision for the future of AI in gaming, autonomous vehicles (AVs), robotics, and more.

“AI has been advancing at an incredible pace,” Huang said. “It started with perception AI — understanding images, words, and sounds. Then generative AI — creating text, images, and sound. Now, we’re entering the era of ‘physical AI,’ AI that can perceive, reason, plan, and act.”

With NVIDIA’s platforms and GPUs at the core, Huang explained how the company continues to fuel breakthroughs across multiple industries while unveiling innovations such as the Cosmos platform, next-gen GeForce RTX 50 Series GPUs, and compact AI supercomputer Project DIGITS. 

RTX 50 series: “The GPU is a beast”

One of the most significant announcements during CES 2025 was the introduction of the GeForce RTX 50 Series, powered by NVIDIA Blackwell architecture. Huang debuted the flagship RTX 5090 GPU, boasting 92 billion transistors and achieving an impressive 3,352 trillion AI operations per second (TOPS).

“GeForce enabled AI to reach the masses, and now AI is coming home to GeForce,” said Huang.

Holding the blacked-out GPU, Huang called it “a beast,” highlighting its advanced features, including dual cooling fans and its ability to leverage AI for revolutionary real-time graphics.

Set for a staggered release in early 2025, the RTX 50 Series includes the flagship RTX 5090 and RTX 5080 (available 30 January), followed by the RTX 5070 Ti and RTX 5070 (February). Laptop GPUs join the lineup in March.

In addition, NVIDIA introduced DLSS 4 – featuring ‘Multi-Frame Generation’ technology – which generates three additional frames for every frame rendered, quadrupling frame output on its own; combined with DLSS upscaling, NVIDIA claims performance gains of up to eightfold.

Other advancements, such as RTX Neural Shaders and RTX Mega Geometry, promise heightened realism in video games, including precise face and hair rendering using generative AI.

Cosmos: Ushering in physical AI

NVIDIA took another step forward with the Cosmos platform at CES 2025, which Huang described as a “game-changer” for robotics, industrial AI, and AVs. Much like the impact of large language models on generative AI, Cosmos represents a new frontier for AI applications in robotics and autonomous systems.

“The ChatGPT moment for general robotics is just around the corner,” Huang declared.

Cosmos integrates generative models, tokenisers, and video processing frameworks to enable robots and vehicles to simulate potential outcomes and predict optimal actions. By ingesting text, image, and video prompts, Cosmos can generate “virtual world states,” tailored for complex robotics and AV use cases involving real-world environments and lighting.
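
In other words, a world model lets a planner try actions virtually before committing to one. The sketch below shows that generic simulate-and-score loop; every function and value in it is an invented stub to illustrate the idea, not NVIDIA’s Cosmos API.

```python
def predict_next_state(state: dict, action: str) -> dict:
    # Stub world model: a real system would generate a future world state here.
    return {"history": state["history"] + [action]}

def score(state: dict) -> float:
    # Stub task reward: penalise predicted risky outcomes (toy values).
    risk = {"brake": 0.1, "steer_left": 0.4, "continue": 0.9}
    return -sum(risk.get(a, 0.5) for a in state["history"])

def best_action(state: dict, candidates: list[str]) -> str:
    # Simulate each candidate's outcome in the world model, keep the best.
    return max(candidates, key=lambda a: score(predict_next_state(state, a)))

print(best_action({"history": []}, ["brake", "steer_left", "continue"]))
```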

Top robotics and automotive leaders – including XPENG, Hyundai Motor Group, and Uber – are among the first to adopt Cosmos, which is available on GitHub via an open licence.

Pras Velagapudi, CTO at Agility, comments: “Data scarcity and variability are key challenges to successful learning in robot environments. Cosmos’ text-, image- and video-to-world capabilities allow us to generate and augment photorealistic scenarios for a variety of tasks that we can use to train models without needing as much expensive, real-world data capture.”

Empowering developers with AI models

NVIDIA also unveiled new AI foundation models for RTX PCs, which aim to supercharge content creation, productivity, and enterprise applications. These models, packaged as NIM (NVIDIA Inference Microservices), are designed to integrate with the RTX 50 Series hardware.

Huang emphasised the accessibility of these tools: “These AI models run in every single cloud because NVIDIA GPUs are now available in every cloud.”

NVIDIA is doubling down on its push to equip developers with advanced tools for building AI-driven solutions. The company introduced AI Blueprints: pre-configured tools for crafting agents tailored to specific enterprise needs, such as content generation, fraud detection, and video management.

“They are completely open source, so you could take it and modify the blueprints,” explained Huang.

Huang also announced the release of Llama Nemotron, designed for developers to build and deploy powerful AI agents.

Ahmad Al-Dahle, VP and Head of GenAI at Meta, said: “Agentic AI is the next frontier of AI development, and delivering on this opportunity requires full-stack optimisation across a system of LLMs to deliver efficient, accurate AI agents.

“Through our collaboration with NVIDIA and our shared commitment to open models, the NVIDIA Llama Nemotron family built on Llama can help enterprises quickly create their own custom AI agents.”

Philipp Herzig, Chief AI Officer at SAP, added: “AI agents that collaborate to solve complex tasks across multiple lines of the business will unlock a whole new level of enterprise productivity beyond today’s generative AI scenarios.

“Through SAP’s Joule, hundreds of millions of enterprise users will interact with these agents to accomplish their goals faster than ever before. NVIDIA’s new open Llama Nemotron model family will foster the development of multiple specialised AI agents to transform business processes.”

Safer and smarter autonomous vehicles

NVIDIA’s announcements extended to the automotive industry, where its DRIVE Hyperion AV platform is fostering a safer and smarter future for AVs. Built on the new NVIDIA AGX Thor system-on-a-chip (SoC), the platform allows vehicles to achieve next-level functional safety and autonomous capabilities using generative AI models.

“The autonomous vehicle revolution is here,” Huang said. “Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test-drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car.”

Huang explained that synthetic data is critical for AV development, as it dramatically enhances real-world datasets. NVIDIA’s AI data factories – powered by Omniverse and Cosmos platforms – generate synthetic driving scenarios, increasing the effectiveness of training data exponentially.

Toyota, the world’s largest automaker, is committed to using NVIDIA DRIVE AGX Orin and the safety-certified NVIDIA DriveOS to develop its next-generation vehicles. Heavyweights such as JLR, Mercedes-Benz, and Volvo Cars have also adopted DRIVE Hyperion.

Project DIGITS: Compact AI supercomputer

Huang concluded his NVIDIA keynote at CES 2025 with a final “one more thing” announcement: Project DIGITS, NVIDIA’s smallest yet most powerful AI supercomputer, powered by the cutting-edge GB10 Grace Blackwell Superchip.

“This is NVIDIA’s latest AI supercomputer,” Huang declared, revealing its compact size, claiming it’s portable enough to “practically fit in a pocket.”

Project DIGITS enables developers and engineers to train and deploy AI models directly from their desks, providing the full power of NVIDIA’s AI stack in a compact form.

(Image: Project DIGITS, NVIDIA’s compact AI supercomputer, on a desk at CES 2025)

Set to launch in May, Project DIGITS represents NVIDIA’s push to make AI supercomputing accessible to individuals as well as organisations.

Vision for tomorrow

Reflecting on NVIDIA’s journey since inventing the programmable GPU in 1999, Huang described the past 12 years of AI-driven change as transformative.

“Every single layer of the technology stack has been fundamentally transformed,” he said.

With advancements spanning gaming, AI-driven agents, robotics, and autonomous vehicles, Huang foresees an exciting future.

“All of the enabling technologies I’ve talked about today will lead to surprising breakthroughs in general robotics and AI over the coming years,” Huang concluded.

(Image Credit: NVIDIA)

See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence

MIT breakthrough could transform robot training (28 October 2024)

MIT researchers have developed a robot training method that reduces time and cost while improving adaptability to new tasks and environments.

The approach – called Heterogeneous Pretrained Transformers (HPT) – combines vast amounts of diverse data from multiple sources into a unified system, effectively creating a shared language that generative AI models can process. This method marks a significant departure from traditional robot training, where engineers typically collect specific data for individual robots and tasks in controlled environments.

Lead researcher Lirui Wang – an electrical engineering and computer science graduate student at MIT – believes that while many cite insufficient training data as a key challenge in robotics, a bigger issue lies in the vast array of different domains, modalities, and robot hardware. Their work demonstrates how to effectively combine and utilise all these diverse elements.

The research team developed an architecture that unifies various data types, including camera images, language instructions, and depth maps. HPT utilises a transformer model, similar to those powering advanced language models, to process visual and proprioceptive inputs.
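
The core idea – route each modality through its own small “stem” into a shared token space, then let a single transformer trunk process the combined sequence – can be sketched in a few lines of PyTorch. The layer sizes, stems, and 7-DoF action head below are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

D = 256  # shared embedding width

class HPTStyleModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-modality "stems" map heterogeneous inputs into the shared space.
        self.vision_stem = nn.Linear(512, D)    # e.g. pooled camera features
        self.proprio_stem = nn.Linear(14, D)    # e.g. joint angles/velocities
        self.text_stem = nn.Embedding(1000, D)  # toy instruction vocabulary
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)  # shared trunk
        self.action_head = nn.Linear(D, 7)      # e.g. a 7-DoF action output

    def forward(self, img_feats, proprio, instr_ids):
        # Concatenate all modalities into one token sequence for the trunk.
        tokens = torch.cat([
            self.vision_stem(img_feats),
            self.proprio_stem(proprio),
            self.text_stem(instr_ids),
        ], dim=1)
        return self.action_head(self.trunk(tokens)[:, -1])

model = HPTStyleModel()
action = model(torch.randn(1, 4, 512),          # 4 vision tokens
               torch.randn(1, 1, 14),           # 1 proprioception token
               torch.randint(0, 1000, (1, 6)))  # 6 instruction tokens
print(action.shape)  # torch.Size([1, 7])
```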

In practical tests, the system demonstrated remarkable results—outperforming traditional training methods by more than 20 per cent in both simulated and real-world scenarios. This improvement held true even when robots encountered tasks significantly different from their training data.

The researchers assembled an impressive dataset for pretraining, comprising 52 datasets with over 200,000 robot trajectories across four categories. This approach allows robots to learn from a wealth of experiences, including human demonstrations and simulations.

One of the system’s key innovations lies in its handling of proprioception (the robot’s awareness of its position and movement). The team designed the architecture to place equal importance on proprioception and vision, enabling more sophisticated dexterous motions.

Looking ahead, the team aims to enhance HPT’s capabilities to process unlabelled data, similar to advanced language models. Their ultimate vision involves creating a universal robot brain that could be downloaded and used for any robot without additional training.

While acknowledging they are in the early stages, the team remains optimistic that scaling could lead to breakthrough developments in robotic policies, similar to the advances seen in large language models.

You can find a copy of the researchers’ paper here (PDF).

(Photo by Possessed Photography)

See also: Jailbreaking AI robots: Researchers sound alarm over security flaws

AI-powered underwater vehicle transforms offshore wind inspections (24 September 2024)

Beam has deployed the world’s first AI-driven autonomous underwater vehicle for offshore wind farm inspections. The technology has already proved its mettle by inspecting jacket structures at Scotland’s largest offshore wind farm, Seagreen—a joint venture between SSE Renewables, TotalEnergies, and PTTEP.

The AI-powered vehicle represents a significant leap forward in marine technology and underwater robotics. Capable of conducting complex underwater inspections without human intervention, it promises to dramatically enhance efficiency and slash costs associated with underwater surveys and inspections.

Traditionally, offshore wind site inspections have been manual, labour-intensive processes. Beam’s autonomous solution offers a radical departure from this approach, enabling data to be streamed directly back to shore. This shift allows offshore workers to concentrate on more intricate tasks while reducing inspection timelines by up to 50%, resulting in substantial operational cost savings.

Brian Allen, CEO of Beam, said: “We are very proud to have succeeded in deploying the world’s first autonomous underwater vehicle driven by AI. Automation can revolutionise how we carry out inspection and maintenance of offshore wind farms, helping to reduce both costs and timelines.”

Beyond improved efficiency, Beam’s technology elevates the quality of inspection data and facilitates the creation of 3D reconstructions of assets alongside visual data. This deployment marks a crucial step in Beam’s roadmap for autonomous technology, with plans to extend this AI-driven solution across its fleet of DP2 vessels, ROVs, and autonomous underwater vehicles (AUVs) throughout 2025 and 2026.

“Looking ahead to the future, the potential of this technology is huge for the industry, and success in these initial projects is vital for us to progress and realise this vision. This wouldn’t be possible without forward-thinking customers like SSE Renewables who are willing to go on the journey with us,” explained Allen.

The Seagreen wind farm, operational since October 2023, is the world’s deepest fixed-bottom offshore wind farm. Beam’s project at Seagreen has provided crucial insights into the potential of autonomous technology for large offshore wind superstructures. The data collected by the AI-driven vehicle will support ongoing operational reliability at the site, offering valuable information on areas such as marine growth and potential erosion at the foundations.

Matthew Henderson, Technical Asset Manager – Substructure and Asset Lifecycle at SSE Renewables, commented: “At SSE, we have a mantra that ‘if it’s not safe, we don’t do it.’ Beam’s technology demonstrates that autonomous inspections can reduce the personnel we need to send offshore for planned inspections, while speeding up planned works and collecting rich data-sets to inform asset integrity planning.

“As we move further offshore, and into deeper waters, the ability to collect high-quality inspection data in a low-risk manner is imperative to us delivering our Net Zero Acceleration Programme.”

As Beam prepares to roll out its AI-driven inspection technology across its fleet in 2025 and 2026, this deployment aligns with the company’s mission to revolutionise offshore wind operations by making them more efficient and cost-effective—further supporting the global energy transition.

The success of this AI-powered underwater vehicle at Seagreen wind farm not only demonstrates the potential of autonomous technology in offshore wind inspections but also sets a new standard for safety, efficiency, and data quality in the industry. Such innovations will play a crucial role in ensuring the sustainability and cost-effectiveness of offshore wind energy.

See also: Hugging Face is launching an open robotics project

Hugging Face is launching an open robotics project (8 March 2024)

Hugging Face, the startup behind the popular open source machine learning codebase and ChatGPT rival Hugging Chat, is venturing into new territory with the launch of an open robotics project.

The ambitious expansion was announced by former Tesla staff scientist Remi Cadene in a post on X.

In keeping with Hugging Face’s ethos of open source, Cadene stated the robot project would be “open-source, not as in Open AI” in reference to OpenAI’s legal battle with Cadene’s former boss, Elon Musk.

Cadene – who will be leading the robotics initiative – revealed that Hugging Face is hiring robotics engineers in Paris, France.

A job listing for an “Embodied Robotics Engineer” sheds light on the project’s goals, which include “designing, building, and maintaining open-source and low cost robotic systems that integrate AI technologies, specifically in deep learning and embodied AI.”

The role involves collaborating with ML engineers, researchers, and product teams to develop innovative robotics solutions that “push the boundaries of what’s possible in robotics and AI.” Key responsibilities range from building low-cost robots using off-the-shelf components and 3D-printed parts to integrating deep learning and embodied AI technologies into robotic systems.

Until now, Hugging Face has primarily focused on software offerings like its machine learning codebase and open-source chatbot. The robotics project marks a significant departure into the hardware realm as the startup aims to bring AI into the physical world through open and affordable robotic platforms.

(Photo by Possessed Photography on Unsplash)

See also: Google engineer stole AI tech for Chinese firms

AUKUS trial advances AI for military operations (5 February 2024)

The UK armed forces and Defence Science and Technology Laboratory (Dstl) recently collaborated with the militaries of Australia and the US as part of the AUKUS partnership in a landmark trial focused on AI and autonomous systems. 

The trial, called Trusted Operation of Robotic Vehicles in Contested Environments (TORVICE), was held in Australia under the AUKUS partnership formed in 2021 between the three countries. It aimed to test robotic vehicles and sensors against electronic attacks, GPS disruption, and other threats to evaluate the resilience of autonomous systems expected to play a major role in future military operations.

Understanding how to ensure these AI systems can operate reliably in the face of modern electronic warfare and cyber threats will be critical before the technology can be more widely adopted.  

The TORVICE trial featured US and British autonomous vehicles carrying out reconnaissance missions while Australian units simulated battlefield electronic attacks on their systems. Analysis of the performance data will help strengthen the protections and safeguards needed to prevent system failures or disruptions.

Guy Powell, Dstl’s technical authority for the trial, said: “The TORVICE trial aims to understand the capabilities of robotic and autonomous systems to operate in contested environments. We need to understand how robust these systems are when subject to attack.

“Robotic and autonomous systems are a transformational capability that we are introducing to armies across all three nations.” 

This builds on the first AUKUS autonomous systems trial, held in the UK in April 2023. It also represents a step forward following the AUKUS defence ministers’ December announcement that Resilient and Autonomous Artificial Intelligence Technologies (RAAIT) would be integrated into the three countries’ military forces beginning in 2024.

Dstl military advisor Lt Col Russ Atherton said that successfully harnessing AI and autonomy promises to “be an absolute game-changer” that reduces the risk to soldiers. The technology could carry out key tasks like sensor operation and logistics over wider areas.

“The ability to deploy different payloads such as sensors and logistics across a larger battlespace will give commanders greater options than currently exist,” explained Lt Col Atherton.

By collaborating, the AUKUS allies aim to accelerate development in this crucial new area of warfare, improving interoperability between their forces, maximising their expertise, and strengthening deterrence in the Indo-Pacific region.

As AUKUS continues to deepen cooperation on cutting-edge military technologies, this collaborative effort will significantly enhance military capabilities while reducing risks for warfighters.

(Image Credit: Dstl)

See also: Experts from 30 nations will contribute to global AI safety report

Open X-Embodiment dataset and RT-X model aim to revolutionise robotics (4 October 2023)

In a collaboration between 33 academic labs worldwide, a consortium of researchers has unveiled a revolutionary approach to robotics.

Traditionally, robots have excelled in specific tasks but struggled with versatility, requiring individual training for each unique job. However, this limitation might soon be a thing of the past.

Open X-Embodiment: The gateway to generalist robots

At the heart of this transformation lies the Open X-Embodiment dataset, a monumental effort pooling data from 22 distinct robot types.

With the contributions of over 20 research institutions, this dataset comprises over 500 skills, encompassing a staggering 150,000 tasks across more than a million episodes.

This treasure trove of diverse robotic demonstrations represents a significant leap towards training a universal robotic model capable of multifaceted tasks.

RT-1-X: A general-purpose robotics model

Accompanying this dataset is RT-1-X, produced by training RT-1 – a real-world robotic control model – on the new cross-embodiment data; a counterpart model, RT-2-X, applies the same recipe to RT-2, a vision-language-action model. The result is exceptional skills transferability across various robot embodiments.

In rigorous testing across five research labs, RT-1-X outperformed its counterparts by an average of 50 percent.

The success of RT-1-X signifies a paradigm shift, demonstrating that training a single model with diverse, cross-embodiment data dramatically enhances its performance on various robots.

Emergent skills: Leaping into the future

The experimentation did not stop there. Researchers explored emergent skills, delving into uncharted territories of robotic capabilities.

RT-2-X, an advanced version of the vision-language-action model, exhibited remarkable spatial understanding and problem-solving abilities. By incorporating data from different robots, RT-2-X demonstrated an expanded repertoire of tasks, showcasing the potential of shared learning in the robotic realm.

A responsible approach

Crucially, this research emphasises a responsible approach to the advancement of robotics. 

By openly sharing data and models, the global community can collectively elevate the field—transcending individual limitations and fostering an environment of shared knowledge and progress.

The future of robotics lies in mutual learning, where robots teach each other, and researchers learn from one another. The momentous achievement unveiled this week paves the way for a future where robots seamlessly adapt to diverse tasks, heralding a new era of innovation and efficiency.

(Photo by Brett Jordan on Unsplash)

See also: Amazon invests $4B in Anthropic to boost AI capabilities

UK commits £13M to cutting-edge AI healthcare research (10 August 2023)

The UK has announced a £13 million investment in cutting-edge AI research within the healthcare sector.

The announcement, made by Technology Secretary Michelle Donelan, marks a major step forward in harnessing the potential of AI in revolutionising healthcare. The investment will empower 22 winning projects across universities and NHS trusts, from Edinburgh to Surrey, to drive innovation and transform patient care.

Dr Antonio Espingardeiro, IEEE member and software and robotics expert, comments:

“As it becomes more sophisticated, AI can efficiently conduct tasks traditionally undertaken by humans. The potential for the technology within the medical field is huge—it can analyse vast quantities of information and, when coupled with machine learning, search through records and infer patterns or anomalies in data that would otherwise take decades for humans to analyse.

We are just starting to see the beginning of a new era where machine learning could bring substantial value and transform the traditional role of the doctor. The true capabilities of this technology as an aid to the healthcare sector are yet to be fully realised. In the future, we may even be able to solve some of the biggest challenges and issues of our time.

One of the standout projects receiving funding is University College London’s Centre for Interventional and Surgical Sciences. With a grant exceeding £500,000, researchers aim to develop a semi-autonomous surgical robotics platform designed to enhance the removal of brain tumours. This pioneering technology promises to improve surgical outcomes, minimise complications, and expedite patient recovery times.

“With the increased adoption of AI and robotics, we will soon be able to deliver the scalability that the healthcare sector needs and establish more proactive care delivery,” added Espingardeiro.

The University of Sheffield’s project, backed by £463,000, focuses on a crucial aspect of healthcare: chronic nerve pain. The team’s approach aims to widen and improve treatments for the condition, which affects one in ten adults over 30.

The University of Oxford’s project, bolstered by £640,000, seeks to expedite research into a foundational AI model for clinical risk prediction. By analysing an individual’s existing health conditions, this AI model could accurately forecast the likelihood of future health problems and revolutionise early intervention strategies.

Meanwhile, Heriot-Watt University in Edinburgh has secured £644,000 to develop a groundbreaking system that offers real-time feedback to trainee surgeons practising laparoscopy procedures, also known as keyhole surgeries. This technology promises to enhance the proficiency of aspiring surgeons and elevate the overall quality of healthcare.

Finally, the University of Surrey’s project – backed by £456,000 – will collaborate closely with radiologists to develop AI capable of enhancing mammogram analysis. By streamlining and improving this critical diagnostic process, AI could contribute to earlier cancer detection.

Ayesha Iqbal, IEEE senior member and engineering trainer at the Advanced Manufacturing Training Centre, said:

“The emergence of AI in healthcare has completely reshaped the way we diagnose, treat, and monitor patients.

Applications of AI in healthcare include finding new links between genetic codes, performing robot-assisted surgeries, improving medical imaging methods, automating administrative tasks, personalising treatment options, producing more accurate diagnoses and treatment plans, enhancing preventive care and quality of life, predicting and tracking the spread of infectious diseases, and helping combat epidemics and pandemics.”

With the UK healthcare sector already witnessing AI applications in improving stroke diagnosis, heart attack risk assessment, and more, the £13 million investment is poised to further accelerate transformative healthcare breakthroughs.

Health and Social Care Secretary Steve Barclay commented:

“AI can help the NHS improve outcomes for patients, with breakthroughs leading to earlier diagnosis, more effective treatments, and faster recovery. It’s already being used in the NHS in a number of areas, from improving diagnosis and treatment for stroke patients to identifying those most at risk of a heart attack.

This funding is yet another boost to help the UK lead the way in healthcare research. It comes on top of the £21 million we recently announced for trusts to roll out the latest AI diagnostic tools and £123 million invested in 86 promising technologies through our AI in Health and Care Awards.”

However, the announcement was made the same week that NHS waiting lists hit a record high. Prime Minister Rishi Sunak made reducing waiting lists one of his five key priorities for 2023, inviting the public to hold him “to account directly for whether it is delivered.” Hope is being pinned on technologies like AI to help tackle waiting lists.

This pivotal move is accompanied by the nation’s preparations to host the world’s first major international summit on AI safety, underscoring its commitment to responsible AI development.

Scheduled for later this year, the AI safety summit will provide a platform for international stakeholders to collaboratively address AI’s risks and opportunities.

As Europe’s AI leader and the third-ranked globally behind the USA and China, the UK is well-positioned to lead these discussions and champion the responsible advancement of AI technology.

(Photo by National Cancer Institute on Unsplash)

See also: BSI publishes guidance to boost trust in AI for healthcare

Tesla’s AI supercomputer tripped the power grid (3 October 2022)

Tesla’s purpose-built AI supercomputer ‘Dojo’ is so powerful that it tripped the power grid.

Dojo was unveiled at Tesla’s annual AI Day last year but the project was still in its infancy. At AI Day 2022, Tesla unveiled the progress it has made with Dojo over the course of the year.

The supercomputer has transitioned from just a chip and training tiles into a full cabinet. Tesla claims that it can replace six GPU boxes with a single Dojo tile, which it says is cheaper than one GPU box.

Per tray, there are six Dojo tiles. Tesla claims that each tray is equivalent to “three to four fully-loaded supercomputer racks”. Two trays can fit in a single Dojo cabinet with a host assembly.
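
Taken together with the tile claim above, a single cabinet therefore holds 12 tiles (two trays of six), which would stand in for roughly 72 GPU boxes – an extrapolation from Tesla’s own per-tile figure rather than a number the company quoted.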

Such a supercomputer naturally has a large power draw. Dojo requires so much power that it managed to trip the grid in Palo Alto.

“Earlier this year, we started load testing our power and cooling infrastructure. We were able to push it over 2 MW before we tripped our substation and got a call from the city,” said Bill Chang, Tesla’s Principal System Engineer for Dojo.

In order to function, Tesla had to build custom infrastructure for Dojo with its own high-powered cooling and power system.

An ‘ExaPOD’ (consisting of 10 Dojo cabinets) has the following specs:

  • 1.1 EFLOPS of compute
  • 1.3 TB of SRAM
  • 13 TB of DRAM

Seven ExaPODs are currently planned to be housed in Palo Alto.

Dojo is purpose-built for AI and will greatly improve Tesla’s ability to train neural nets using video data from its vehicles. These neural nets will be critical for Tesla’s self-driving efforts and its humanoid robot ‘Optimus’, which also made an appearance during this year’s event.

Optimus

Optimus was also first unveiled last year and was even more in its infancy than Dojo. In fact, all it was at the time was a person in a spandex suit and some PowerPoint slides.

While it’s clear that Optimus still has a long way to go before it can do the shopping and carry out dangerous manual labour tasks, as Tesla envisions, we at least saw a working prototype of the robot at AI Day 2022.

“I do want to set some expectations with respect to our Optimus robot,” said Tesla CEO Elon Musk. “As you know, last year it was just a person in a robot suit. But, we’ve come a long way, and compared to that it’s going to be very impressive.”

Optimus can now walk around and, when attached to apparatus from the ceiling, perform some basic tasks like watering plants.

The prototype of Optimus was reportedly developed in the past six months and Tesla is hoping to get a working design within the “next few months… or years”. The price tag is “probably less than $20,000”.

All the details of Optimus are still vague at the moment, but at least there’s more certainty around the Dojo supercomputer.

Chess robot breaks child’s finger after premature move (25 July 2022)

A robot went rogue at a Moscow chess tournament and broke a kid’s finger after he made a move prematurely. 

The robot, which uses AI to play three chess games at once, grabbed and pinched the child’s finger and, despite several people rushing to help, broke it.

According to Moscow Chess Federation VP Sergey Smagin, the robot has been used for 15 years and this is the first time such an incident has occurred.

Reports suggest the robot expects its human rival to leave a set amount of time after it makes its play. The child played too quickly and the robot didn’t know how to handle the situation.

“There are certain safety rules and the child, apparently, violated them. When he made his move, he did not realise he first had to wait,” Smagin said. “This is an extremely rare case, the first I can recall.”

It doesn’t paint Russia’s robotics scene in the best light, and it’s quite surprising the story even made it past the country’s notorious censorship.

Fortunately, the child’s finger has been put in a cast and he is expected to make a quick and complete recovery. There doesn’t appear to be any lasting mental trauma either as he played again the next day.

A study in 2015 found that one person is killed each year by an industrial robot in the US alone. As robots become ever more prevalent in our work and personal lives, that number is likely to increase.

Most injuries and fatalities with robots are from human error, so it’s always worth being cautious.

(Photo by GR Stocks on Unsplash)

National Robotarium pioneers AI and telepresence robotic tech for remote health consultations (20 September 2021)

The National Robotarium, hosted by Heriot-Watt University in Edinburgh, has unveiled an AI-powered telepresence robotic solution for remote health consultations.

Using the solution, health practitioners would be able to assess a person’s physical and cognitive health from anywhere in the world. Patients could access specialists no matter whether they’re based in the UK, India, the US, or anywhere else.

Iain Stewart, UK Government Minister for Scotland, said:

“It was fascinating to visit the National Robotarium and see first-hand how virtual teleportation technology could revolutionise healthcare and assisted living.

Backed by £21 million UK Government City Region Deal funding, this cutting-edge research centre is a world leader for robotics and AI, bringing jobs and investment to the area.”

The project is part of the National Robotarium’s assisted living lab which explores how to improve the lives of people living with various conditions.

Dr Mario Parra Rodriguez, an expert in cognitive assessment from the University of Strathclyde, is working on the project and believes the solution will enable more regular monitoring and health assessments that are critical for people living with conditions like Alzheimer’s disease and other cognitive impairments.

“The experience of inhabiting a distant robot through which I can remotely guide, assess, and support vulnerable adults affected by devastating conditions such as Alzheimer’s disease, grants me confidence that challenges we are currently experiencing to mitigate the impact of such diseases will soon be overcome through revolutionary technologies,” commented Rodriguez.

“The collaboration with the National Robotarium, hosted by Heriot-Watt University is combining experience from various disciplines to deliver technologies that can address the ever-changing needs of people affected by dementia.”

Dr Mauro Dragone is leading the research and explains how AI was vital for the project:

“Our prototype makes use of machine learning and artificial intelligence techniques to monitor smart home sensors to detect and analyse daily activities. We are programming the system to use this information to carry out a thorough, non-intrusive assessment of an older person’s cognitive abilities, as well as their ability to live independently.

Combining the system with a telepresence robot brings two major advances: Firstly, robots can be equipped with powerful sensors and can also operate in a semi-autonomous mode, enriching the capability of the system to deliver quality data, 24 hours a day, seven days a week. 

Secondly, telepresence robots keep clinicians and carers in the loop. These professionals can benefit from the data provided by the project’s intelligent sensing system, but they can also control the robot directly, over the Internet, to interact with the individual under their care. They can see through the eyes of the robot, move around the room or between rooms and operate its arms and hands to carry out more complex assessment protocols. They can also respond to emergencies and provide assistance when needed.”

Earlier this month, the UK government announced tax rises to fund social care, give people the dignity they deserve, and help the NHS recover from the pandemic.

However, some believe further rises are on the horizon. Innovative technologies could help to reduce costs while maintaining or improving care.

“Blackwood is always looking for solutions that help our customers to live more independently whilst promoting choice and control for the individual. Robotics has the potential to improve independent living, provide new levels of support, and integrate with our digital housing and care system CleverCogs,” said Mr Colin Foskett, Head of Innovation at Blackwood Homes and Care.

“Our partnership with the National Robotarium and the design of the assisted living lab ensures that our customers are involved in the co-design and co-creation of new products and services, increasing our investment in innovation and in the future leading to new solutions that will aid independent living and improve outcomes for our customers.”

Our sister publication, IoT News, reported on the construction of the £22.4 million National Robotarium earlier this year—including some of the facilities, equipment, and innovative projects that it hosts.

AI Day: Elon Musk unveils ‘friendly’ humanoid robot Tesla Bot (20 August 2021)

During Tesla’s AI Day event, CEO Elon Musk unveiled a robot that is “intended to be friendly”.

Musk has been one of the most prominent figures to warn that AI is a “danger to the public” and potentially the “biggest risk we face as a civilisation”. In 2017, he even said there was just a “five to 10 percent chance of success [of making AI safe]”.

Speaking about London-based DeepMind in a New York Times interview last year, Musk said: “Just the nature of the AI that they’re building is one that crushes all humans at all games. I mean, it’s basically the plotline in ‘War Games’”.

Unveiling a 5ft 8in AI-powered humanoid robot may seem to contradict Musk’s concerns. However, rather than leave development to parties who he believes would be less responsible, Musk believes Tesla can lead in building ethical AI and robotics.

Musk has form in this area after co-founding OpenAI. The company’s mission statement is: “To build safe Artificial General Intelligence (AGI), and ensure AGI’s benefits are as widely and evenly distributed as possible.”

Of course, it all feels a little like building nuclear weapons to deter them—it’s an argument that’s sure to have some rather passionate views on either side.

During the unveiling of Tesla Bot, Musk was sure to point out that you could easily outrun and overpower it.

Tesla Bot is designed to “navigate through a world built for humans” and carry out tasks that are dangerous, repetitive, or boring. One example task is for the robot to be told to go to the store and get specific groceries.

Of course, all we’ve seen of Tesla Bot at this point is a series of PowerPoint slides (if you forget about the weird dance by a performer dressed as a Tesla Bot … which we’re all trying our hardest to.)

The unveiling of the robot followed a 90-minute presentation about some of the AI upgrades coming to Tesla’s electric vehicles. Tesla Bot is essentially a robot version of the company’s vehicles.

“Our cars are basically semi-sentient robots on wheels,” Musk said. “It makes sense to put that into humanoid form.”

AI Day was used to hype Tesla’s advancements in a bid to recruit new talent to the company. 

On its recruitment page, Tesla wrote: “Develop the next generation of automation, including a general purpose, bi-pedal, humanoid robot capable of performing tasks that are unsafe, repetitive or boring.

“We’re seeking mechanical, electrical, controls and software engineers to help us leverage our AI expertise beyond our vehicle fleet.”

A prototype of Tesla Bot is expected next year, although Musk has a history of delays and showing products well before they’re ready across his many ventures. Musk says that it’s important the new machine is not “super expensive”.
