Today, we are excited to announce that AWS IoT FleetWise now supports vehicle vision system data collection, enabling customers to collect metadata, object list and detection data, and images or videos from cameras, lidars, radars, and other vision sub-systems. This new feature, now available in Preview, builds upon existing AWS IoT FleetWise capabilities that enable customers to extract more value and context from their data to build vehicles that are more connected and convenient.
Modern vehicles are equipped with multiple vision systems. Examples include a surround view array of cameras and radars that enables advanced driver assistance system (ADAS) use cases, and driver and cabin monitoring systems that assist with driver attention in semi-autonomous driving use cases. Most of these systems perform some level of computation on the vehicle, often using sophisticated algorithms for sensor fusion and AI/ML for inference.
Vision systems generate massive amounts of data in structured (numbers, text) and unstructured (images, video) formats. The volume and variety of this data make it difficult to synchronize data from multiple vehicle sensor modalities around a given event of interest in a way that minimizes interference with the operation of the vehicle. For example, to analyze the accuracy of road conditions detected by a vehicle camera, a data scientist may want to view telemetry data (e.g., speed and brake pressure), structured object lists and metadata, and unstructured image/video data. Keeping all of those data points organized and associated with the same event is a heavy lift. It typically requires additional software and compute power to collect only the data points of interest, add metadata, and keep the data synchronized, all while minimizing interference with the operation of the vehicle.
Vision system data from AWS IoT FleetWise lets automotive companies easily collect and organize data from vehicle vision systems that include cameras, radars, and lidars. It keeps both structured and unstructured vision system data, metadata, and telemetry data synchronized in the cloud, making it easier for customers to assemble a complete view of events and gain insights. Here are a few scenarios:
- To understand what happened during a hard-braking event, a customer wants to collect data before and after the event occurs. The data collected may include inference results (e.g., an obstacle was detected), timestamps and camera settings (metadata), and what occurred around the vehicle (e.g., images, videos, and lidar/radar maps with bounding boxes and detection overlays).
- A customer is interested in anomalous events on roadways, like accidents, wildfires, and obstacles that impede traffic. The customer begins by collecting telemetry and object list data at scale across a large number of vehicles, and then zooms in on a set of vehicles that are signaling anomalous events (e.g., speed is 0 on a large highway) and collects vision system data from those vehicles.
When collecting vision system data using AWS IoT FleetWise, customers can take advantage of the service’s advanced features and interfaces they already use to collect telemetry data, for example, specifying events in their data collection campaign to optimize bandwidth and data size. Customers can get started on AWS by defining and modeling a vehicle’s vision system, alongside its attributes and telemetry sensors. The customer’s Edge Agent deployed in the vehicle collects data from CAN-based vehicle sensors (e.g., battery temperature), as well as from vehicle sub-systems that include vision system sensors. Customers can use the same event- or time-based data collection campaign to collect data signals concurrently from both standard sensors and vision systems. In the cloud, customers see a unified view of their defined vehicle attributes and other metadata, telemetry data, and structured vision system data, with links to view unstructured vision system data in Amazon Simple Storage Service (Amazon S3). The data stays synchronized using vehicle, campaign, and event identifiers. Customers can then use services like AWS Glue to integrate data for downstream analytics.
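To make that flow concrete, here is a minimal sketch of the cloud-side setup using the AWS CLI. The resource names, file paths, and ARNs are placeholders, the JSON payloads are abbreviated, and intermediate steps (such as creating a decoder manifest and registering vehicles) are omitted; the actual node and campaign definitions come from your own vehicle model.

```bash
# Sketch only: names, paths, and ARNs below are placeholders.

# 1. Register the vehicle's signals (including vision system structures).
aws iotfleetwise create-signal-catalog \
    --name my-signal-catalog \
    --nodes file://ros2-nodes.json

# 2. Model the vehicle and select the signals it exposes.
#    (Decoder manifest and vehicle registration steps omitted for brevity.)
aws iotfleetwise create-model-manifest \
    --name my-vehicle-model \
    --signal-catalog-arn arn:aws:iotfleetwise:us-east-1:111122223333:signal-catalog/my-signal-catalog \
    --nodes Vehicle.Speed Vehicle.Cameras.Front.Image

# 3. Create one campaign that collects telemetry and vision signals together,
#    delivering results to Amazon S3 via the campaign's data destination config.
aws iotfleetwise create-campaign \
    --name hard-braking-campaign \
    --signal-catalog-arn arn:aws:iotfleetwise:us-east-1:111122223333:signal-catalog/my-signal-catalog \
    --target-arn arn:aws:iotfleetwise:us-east-1:111122223333:fleet/my-fleet \
    --collection-scheme file://collection-scheme.json \
    --signals-to-collect file://signals-to-collect.json \
    --data-destination-configs file://s3-destination.json
```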
Continental AG is developing driver convenience features
Continental AG develops pioneering technologies and services for autonomous mobility. “Continental has collaborated closely with AWS on developing technologies that accelerate automotive software development in the cloud. With vision system data from AWS IoT FleetWise, we will be able to easily collect camera and motion-planning data to improve automated parking assistance and enable fleet-wide monitoring and reporting.”
Yann Baudouin, Head of Data Solutions – Engineering Platform and Ecosystem, Continental AG
HL Mando is developing capabilities that enhance driver safety and personalization
HL Mando is a tier 1 supplier of parts and software to the automotive industry. “At Mando, we are committed to innovating technology that makes vehicles easier to drive and operate. Our solutions rely on the ability to collect vehicle telemetry data as well as vehicle camera data in an efficient way. We are looking forward to using the data we collect through AWS IoT FleetWise to improve vehicle software capabilities that can enhance driver safety and driver personalization.”
Seong-Hyeon Cho, Vice Chairman/CEO, HL Mando
ThunderSoft is developing automotive and fleet solutions
ThunderSoft provides intelligent operating systems and technologies to automotive companies and enterprises. “As ThunderSoft works to help advance the next generation of connected vehicle technology across the globe, we look forward to continuing our collaboration with AWS. With the arrival of vision system data from AWS IoT FleetWise, we’ll be able to help our customers with innovative solutions for advanced driver assistance systems (ADAS) and fleet management.”
Pengcheng Zou, CTO, ThunderSoft
Solution Overview
Let’s take an ADAS use case to walk through the process of collecting vision system data. Imagine that an ADAS engineer is deploying a collision avoidance system in production vehicles. One way this system helps vehicles avoid collisions is by automatically applying brakes in certain scenarios (e.g., an impending rear-end collision with another vehicle).
While the software used in this system has already gone through rigorous testing, the engineer wants to continuously improve the software for both current-gen and future-gen vehicles. In this case, the engineer wants to see all scenarios where a collision was detected. To understand what happened during each event, the engineer will look at vision system data (images) and telemetry data from before and after the collision was detected. Once the data is in the S3 bucket, the engineer may want to visualize, analyze, and label it.
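As an illustration, a condition-based collection scheme for such a campaign might look like the sketch below. The signal name Vehicle.ADAS.CollisionDetected is hypothetical; substitute a fully qualified signal name defined in your own catalog.

```json
{
  "conditionBasedCollectionScheme": {
    "expression": "$variable.`Vehicle.ADAS.CollisionDetected` > 0.0",
    "minimumTriggerIntervalMs": 10000,
    "triggerMode": "RISING_EDGE"
  }
}
```

With RISING_EDGE, the campaign fires when the condition first becomes true rather than on every evaluation, and the campaign’s post-trigger collection duration setting can be used to keep collecting data for a window after the event fires.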
Prerequisites
Before you get started, you will need:
- An AWS account with console, CLI, and programmatic access in supported Regions.
- Permission to create and access AWS IoT FleetWise and Amazon S3 resources.
- To follow the instructions in our AWS IoT FleetWise vision system demo guide, up to and including “Playback ROS 2 data.”
- (Optional) A ROS 2 environment that supports the “Galactic” version of ROS 2 (a quick sanity check is shown after this list). During the Preview period for vision system data, the AWS IoT FleetWise Reference Edge Agent supports ROS 2 middleware to collect vision system signals.
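If you are setting up the optional ROS 2 environment, a quick way to confirm that Galactic is active is a check along these lines (the path assumes a standard apt-based install):

```bash
# Source the ROS 2 Galactic environment.
source /opt/ros/galactic/setup.bash

# Confirm which distribution is active; this should print "galactic".
echo "$ROS_DISTRO"

# List the topics currently visible, e.g., during rosbag playback.
ros2 topic list
```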
Walkthrough
Step 1: Model your vehicle
Create a signal catalog by creating the file ros2-nodes.json. Feel free to change the name and description within this file to your liking.
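The demo guide provides the full file. As a rough, hypothetical sketch of the shape such a nodes file takes for vision system data, it pairs conventional branches and sensors with custom structures; all names and types below are illustrative, not the demo’s actual contents:

```json
[
  { "branch": { "fullyQualifiedName": "Vehicle" } },
  { "branch": { "fullyQualifiedName": "Vehicle.Cameras" } },
  {
    "struct": {
      "fullyQualifiedName": "Vehicle.Cameras.ImageStruct",
      "description": "Complex type holding an image frame and its metadata"
    }
  },
  {
    "property": {
      "fullyQualifiedName": "Vehicle.Cameras.ImageStruct.Data",
      "dataType": "UINT8_ARRAY",
      "dataEncoding": "BINARY"
    }
  },
  {
    "sensor": {
      "fullyQualifiedName": "Vehicle.Cameras.Front.Image",
      "dataType": "STRUCT",
      "structFullyQualifiedName": "Vehicle.Cameras.ImageStruct"
    }
  }
]
```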
… (truncated for brevity) …
Data from scenes of interest can then be passed to downstream tools for visualization, labeling, and re-simulation to develop the next version of models and vehicle software. For example, third-party software such as Foxglove Studio can be used to visualize what happened before and after the collision using the images stored in Amazon S3; Amazon Rekognition can be utilized to automatically discover and label additional objects present at the time of collision; and Amazon SageMaker Ground Truth can be used for annotation and human-in-the-loop workflows to improve the accuracy and relevance of the collision avoidance software. In a future blog, we plan to explore options for this part of the workflow.
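As a taste of that workflow, one lightweight way to experiment with automated labeling is to point Amazon Rekognition at a single image the campaign delivered to S3. The bucket name and object key below are placeholders for your campaign’s output:

```bash
# Detect labels in one collected camera frame stored in S3.
# Bucket and key are placeholders; use your campaign's actual output location.
aws rekognition detect-labels \
    --image '{"S3Object": {"Bucket": "my-fleetwise-vision-data", "Name": "campaign-output/frame-0001.jpg"}}' \
    --max-labels 10 \
    --min-confidence 80
```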
Conclusion
In this post, we showcased how AWS IoT FleetWise vision system data enables you to easily collect and organize data from advanced vehicle sensor systems to assemble a holistic view of events and gain insights. The new feature expands the scope of data-driven use cases for automotive customers. We then used a sample ADAS development use case to walk through how creating condition-based campaigns can help improve an ADAS system, and how to access the collected data in Amazon S3.
To learn more, visit the AWS IoT FleetWise site. We look forward to your feedback and questions.
About the Authors
Akshay Tandon is a Principal Product Manager at Amazon Web Services with the AWS IoT FleetWise team. He is passionate about everything automotive and product. He enjoys listening to customers and envisioning innovative products and services that help fulfill their needs. At Amazon, Akshay has led product initiatives in the AI/ML space with Alexa and the fleet management space with Amazon Transportation Services. He has more than 10 years of product management experience.
Matt Pollock is a Senior Solution Architect at Amazon Web Services currently working with automotive OEMs and suppliers. Based in Austin, Texas, he has worked with customers at the interface of digital and physical systems across a diverse range of industries since 2005. When not building scalable solutions to challenging technical problems, he enjoys telling terrible jokes to his daughter.