Scale Provides Annotation for nuScenes Dataset (Led by nuTonomy, an Aptiv Company) – the Largest Open Source Multi-Sensor Self-Driving Dataset Available to the Public


SAN FRANCISCO, Sept. 14, 2018 (GLOBE NEWSWIRE) -- Scale today announced the release of the largest open source multi-sensor (LIDAR, RADAR, and camera) self-driving dataset, published by nuTonomy (acquired by Aptiv in 2017) with annotations by Scale. Academic researchers and autonomous vehicle innovators can access the open-sourced dataset.

Scale’s API, which leverages machine learning, statistical modeling, and human labeling to process LIDAR, RADAR, and camera sensor data into impeccable ground truth data, played a critical role in the creation of this new standard. The nuScenes open source dataset is based on LIDAR point cloud, camera sensor, and RADAR data sourced from nuTonomy and then labeled through Scale’s sophisticated and thorough processing to deliver data ideal for training autonomous vehicle perception algorithms. The open source dataset made available by nuTonomy and Aptiv surpasses the public KITTI dataset, Baidu ApolloScape dataset, Udacity self-driving dataset, and even the more recent Berkeley DeepDrive dataset that have until now served as the standard for academic and even industry use. nuScenes provides significantly greater data volume, accuracy, and precision; the full dataset will include 1,000 twenty-second scenes, nearly 1.4 million camera images, 400,000 LIDAR sweeps, and 1.1 million 3D boxes.
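As a rough sanity check on the figures quoted above (an illustration based only on the numbers in this release, not on the nuScenes specification):

```python
# Back-of-the-envelope check of the dataset figures quoted in the release.
scenes = 1_000            # twenty-second scenes
scene_seconds = 20
lidar_sweeps = 400_000

sweeps_per_scene = lidar_sweeps / scenes      # 400 sweeps per scene
lidar_hz = sweeps_per_scene / scene_seconds   # implied ~20 Hz LIDAR capture rate
print(sweeps_per_scene, lidar_hz)             # 400.0 20.0
```

The figures are internally consistent with a LIDAR spinning at roughly 20 Hz across every scene.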

Like RADAR, LIDAR maps its surroundings by emitting signals and measuring their reflections; in LIDAR’s case, invisible infrared laser light reflects off surrounding objects, allowing systems to compile three-dimensional point cloud maps of their environments and identify the specific objects within them. Correctly identifying surrounding objects from LIDAR data allows autonomous vehicles to anticipate those objects’ behavior – whether they are other vehicles, pedestrians, animals or other obstacles – and to safely navigate around them. In this pursuit, the quality of a multi-sensor dataset is a critical differentiator that defines an autonomous vehicle’s ability to perceive what is around it and operate safely under real-world and real-time conditions.
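A point cloud of the kind described above is, at its simplest, an array of (x, y, z) returns around the vehicle. The following is a minimal sketch of working with one, using hypothetical sample points (real nuScenes sweeps carry additional fields such as intensity and are loaded through the nuscenes-devkit, which is not shown here):

```python
import numpy as np

# Hypothetical LIDAR return: N x 3 array of (x, y, z) points in metres,
# with the vehicle at the origin.
points = np.array([[1.0, 2.0, 0.1],
                   [30.0, -5.0, 0.3],
                   [0.5, 0.5, -0.2]])

# Keep only returns within a 10 m horizontal radius -- the kind of simple
# range gating a perception stack might apply before clustering points
# into candidate objects.
dist = np.linalg.norm(points[:, :2], axis=1)
nearby = points[dist < 10.0]
print(len(nearby))  # 2
```

Object detection and 3D box annotation, of course, go far beyond such gating; this only illustrates the raw data shape that annotators and perception algorithms start from.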

“We’re proud to provide the annotations for nuScenes as the most robust open source multi-sensor self-driving dataset ever released,” said Alexandr Wang, CEO, Scale. “We believe this will be an invaluable resource for researchers developing autonomous vehicle systems, and one that will help to shape and accelerate their production for years to come.”

“Our partnership with Scale on the production of the annotations for nuScenes is a milestone for AV innovators and the academic community,” said Oscar Beijbom, Machine Learning Lead at nuTonomy (an Aptiv company). “Scale’s outstanding agility, tooling, scalability and quality made them our preferred partner and the natural choice for this project.”

Scale, whose autonomous vehicle customers also include Lyft, General Motors (Cruise), Zoox, Nuro and many others, recently raised $18 million in Series B funding.

Scale accelerates the development of AI by democratizing access to intelligent data. By leveraging its API for autonomous vehicles and other use cases, businesses depend on Scale to turn raw information into the human-labeled training data that dependably powers their AI applications. Scale uses a combination of high-quality human task work, smart tools, statistical confidence checks and machine learning to consistently return scalable, precise data. The company is headquartered in San Francisco.

More news and information about Scale

Published By:

Globe Newswire: 13:30 GMT Friday 14th September 2018



SPi News is published by Sector Publishing Intelligence Ltd.
© Sector Publishing Intelligence Ltd 2018.
 
Sector Publishing Intelligence Ltd.
Agriculture House, Acland Road, DORCHESTER, Dorset DT1 1EF United Kingdom
Registered in England and Wales number 0751938.