
Introduction

Real challenges in achieving autonomous driving

The evolution of semi-automated driving systems has ushered in a new era of mobility, presenting both unprecedented opportunities and formidable challenges for automotive manufacturers. At the heart of this shift lies the data-driven framework for L2+ driving, an essential foundation for developing and deploying advanced driving solutions. The distinction between L2 and L2+ vehicles comes down to added functionality: while L2 vehicles offer features such as adaptive cruise control, lane-keeping assist, blind-spot detection, and automated parking, L2+ vehicles go a step further, adding automated lane changing (ALC), hands-off driving under specific conditions, and traffic jam assistance. As the automotive industry moves toward L2+ driving, the complexity of these augmented capabilities underscores the need for precision and innovation.

The complexities in building a global L2+ driving system

Developing a global L2+ driving system is a multifaceted task, marked by intricate challenges and diverse requirements, ranging from the compute power needed for sensor fusion and AI model training to the long tail problem. These complexities demand a comprehensive and forward-thinking approach. Our exploration of these challenges aims to illuminate the considerations involved in achieving a mature L2+ driving system.

Quest Global's framework for a mature L2+ driving system

At the core of our vision for a mature L2+ driving system lies a robust framework that integrates key elements. This framework emphasizes data collection, development of the autonomous driving AI Stack, and its deployment to production.


Essential 360-degree sensor data from L2+ autonomous driving cars 

The foundation of our framework is the integration of an extensive sensor suite, including multiple cameras, radars, and lidars, to provide 360-degree mid- to long-range coverage around the ego vehicle. This increase in sensor complexity generates petabytes of data, requiring advanced data logging and computing capabilities. Additionally, our framework employs a local server with substantial storage and cloud capabilities to efficiently manage the massive influx of data.
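As a rough illustration of why the data volume grows so quickly, the back-of-the-envelope sketch below totals the raw data rate of a hypothetical sensor suite. Every sensor count and per-sensor bitrate in it is an assumption for illustration, not a measured figure from any specific program.

    # Back-of-the-envelope estimate of the raw data rate of one test vehicle.
    # Every sensor count and per-sensor bitrate here is an illustrative assumption.
    CAMERAS, CAM_MBPS = 7, 350      # e.g. high-resolution video, lightly compressed
    RADARS, RADAR_MBPS = 5, 10      # object lists / sparse point clouds
    LIDARS, LIDAR_MBPS = 2, 120     # dense point clouds

    total_mbps = CAMERAS * CAM_MBPS + RADARS * RADAR_MBPS + LIDARS * LIDAR_MBPS
    tb_per_hour = total_mbps * 3600 / 8 / 1e6   # megabits -> terabytes

    print(f"Aggregate rate: {total_mbps} Mbit/s = {tb_per_hour:.2f} TB/hour")
    # At roughly 1.2 TB/hour, a 20-car fleet logging 8 hours a day crosses the
    # petabyte mark in under a week, which is why on-board logging, a local
    # server, and cloud tiering all appear in the framework.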


Data ingestion pipeline

To handle the high volume of data during the development phase, we have established a robust data ingestion pipeline. This pipeline integrates with leading cloud platforms such as AWS and Azure, offering a scalable and efficient solution that supports the advanced development and training of AI models.
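As a minimal sketch of the cloud-facing edge of such a pipeline, the snippet below uploads one recorded drive log to AWS S3 via boto3. The bucket name, key scheme, and metadata fields are illustrative assumptions, not our production design.

    import hashlib
    from pathlib import Path

    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")
    BUCKET = "adas-drive-logs"  # hypothetical bucket name

    def ingest_drive_log(local_file: Path, vehicle_id: str, drive_date: str) -> str:
        """Upload one recorded drive log and tag it for downstream indexing."""
        # Checksum for integrity tracking (fine for a sketch; a real pipeline
        # would stream large files in chunks rather than read them whole).
        checksum = hashlib.md5(local_file.read_bytes()).hexdigest()
        # Partition keys by vehicle and date so training jobs can select data cheaply.
        key = f"raw/{vehicle_id}/{drive_date}/{local_file.name}"
        s3.upload_file(
            str(local_file),
            BUCKET,
            key,
            ExtraArgs={"Metadata": {"vehicle": vehicle_id, "md5": checksum}},
        )
        return key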


Development pipeline

In the journey from Level 2 to Level 2+ advanced driver-assistance systems (ADAS), the development pipeline plays a critical role. It involves the meticulous training of AI models, which requires complex deep machine learning algorithms. With multiple sensors to integrate (7+ cameras and more than 5 radars), a sophisticated fusion algorithm becomes essential for effective sensor integration, object recognition, and real-time decision-making.
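To make the training side concrete, here is a toy multi-task training step in PyTorch, assuming one shared backbone feeding a detection head and a lane head. The architecture, tensor shapes, and losses are placeholders for illustration only.

    import torch
    import torch.nn as nn

    # Toy multi-task training step: one shared backbone feeding two task heads.
    # Architecture, tensor shapes, and losses are placeholders for illustration.
    backbone = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    det_head = nn.Linear(32, 4)    # toy output: one box as (x, y, w, h)
    lane_head = nn.Linear(32, 6)   # toy output: lane polynomial coefficients

    params = (list(backbone.parameters()) + list(det_head.parameters())
              + list(lane_head.parameters()))
    opt = torch.optim.Adam(params, lr=1e-4)

    images = torch.randn(8, 3, 128, 256)   # stand-in for a batch of camera frames
    box_targets, lane_targets = torch.randn(8, 4), torch.randn(8, 6)

    features = backbone(images)
    # Shared features, summed per-task losses: the essence of multi-task training.
    loss = (nn.functional.mse_loss(det_head(features), box_targets)
            + nn.functional.mse_loss(lane_head(features), lane_targets))
    opt.zero_grad(); loss.backward(); opt.step()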


Complex fusion algorithm and sensor integration

Moving from Level 2 to Level 2+ driving necessitates a powerful computational backbone capable of handling complex tasks like sensor fusion, object recognition, and real-time decision-making. High-performance computers serve as the vehicle's brain, consolidating inputs from various sensors to make split-second decisions crucial for safety and efficiency. The development of a complex fusion algorithm is essential to process data from cameras, radar, and lidars effectively, ensuring a cohesive understanding of the vehicle's surroundings.
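One classical building block for such fusion is a Kalman filter that merges position measurements from different sensors into a single object track. The sketch below shows a bare-bones constant-velocity variant; the cycle time and noise values are illustrative assumptions, not calibrated figures.

    import numpy as np

    # Bare-bones constant-velocity Kalman filter fusing position measurements
    # from different sensors into one track.
    dt = 0.05  # 20 Hz fusion cycle (assumption)
    F = np.array([[1, 0, dt, 0],   # state transition; state = [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0],    # both sensors measure position only
                  [0, 1, 0, 0]])
    Q = np.eye(4) * 0.01           # process noise (assumption)

    x, P = np.zeros(4), np.eye(4)  # initial state and covariance

    def fuse(x, P, z, sensor_var):
        """One predict + update step with a position fix from any one sensor."""
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        R = np.eye(2) * sensor_var                    # per-sensor measurement noise
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

    x, P = fuse(x, P, np.array([10.0, 3.2]), sensor_var=0.5)  # radar fix
    x, P = fuse(x, P, np.array([10.1, 3.0]), sensor_var=0.2)  # camera fix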

AI Models in ADAS

In the realm of ADAS, AI models play a pivotal role in enhancing perception, planning, and control functionalities, which form the core ADAS building blocks (a minimal interface sketch follows the list):


1. Perception: Perception is like the eyes of the system, utilizing sensors and AI algorithms to perceive the environment. By identifying objects on the road, the perception module lays the foundation for subsequent decision-making processes.


2. Planning: Planning involves determining the vehicle's actions, such as staying in a lane, changing lanes, or accelerating. It includes two types: global planning and local planning. Global planning involves the overall route from point A to point B, while local planning deals with immediate decisions within that route. 


3. Control: Control refers to the execution of decisions made during the planning phase, such as applying brakes or accelerating. It dictates how the vehicle responds to various driving scenarios, translating planned actions into physical movements.
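To make the hand-offs between these three blocks concrete, here is a toy Python interface. The type names, fields, function signatures, and stub logic are all illustrative, not a production API.

    from dataclasses import dataclass

    @dataclass
    class PerceivedObject:
        kind: str        # "car", "pedestrian", ...
        x: float         # position relative to the ego vehicle, meters
        y: float
        speed: float     # m/s

    @dataclass
    class PlannedAction:
        maneuver: str    # "keep_lane", "change_lane_left", ...
        target_speed: float

    def perceive(sensor_frame) -> list[PerceivedObject]:
        """Perception: turn raw sensor data into a list of tracked objects."""
        return [PerceivedObject("car", x=32.0, y=-1.8, speed=24.5)]  # stub

    def plan(objects: list[PerceivedObject], route) -> PlannedAction:
        """Planning: choose the next local maneuver, constrained by the global route."""
        slow_lead = any(o.kind == "car" and o.speed < 20.0 for o in objects)
        return PlannedAction("change_lane_left" if slow_lead else "keep_lane", 27.0)

    def control(action: PlannedAction) -> dict:
        """Control: translate the planned action into actuator commands."""
        return {"throttle": 0.15, "brake": 0.0, "steering_deg": 0.0}

    commands = control(plan(perceive(sensor_frame=None), route=None))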

Approaches in overall Autonomous Driving (AD) stack development

Two prominent approaches are used in the industry for AD stack development:


1. Perception-heavy AI stack: We develop deep machine learning-based multi-task network architectures to understand the surroundings, while classical path planning approaches handle the driving decisions built on that perception output. To achieve precise localization, we use prestored HD maps, validated in real time against lidar data.


2. End-to-end deep machine learning: Unlike the first approach, this one uses deep machine learning for both perception and planning, enabling more advanced decision-making capabilities. The network output provides the drivable free space and a set of waypoints for the vehicle's path.
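For a sense of what "a set of waypoints" means in practice, the toy PyTorch model below maps a camera image to (x, y) waypoint offsets in the ego frame. The architecture and sizes are illustrative and far smaller than a real stack, which would also consume multiple sensor streams.

    import torch
    import torch.nn as nn

    class WaypointNet(nn.Module):
        """Toy end-to-end network: camera image in, path waypoints out."""
        def __init__(self, num_waypoints: int = 10):
            super().__init__()
            self.n = num_waypoints
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=4, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Each waypoint is an (x, y) offset in the ego vehicle's frame.
            self.head = nn.Linear(64, num_waypoints * 2)

        def forward(self, image: torch.Tensor) -> torch.Tensor:
            return self.head(self.encoder(image)).view(-1, self.n, 2)

    net = WaypointNet()
    waypoints = net(torch.randn(1, 3, 256, 512))   # -> shape (1, 10, 2)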

Challenges with the 'end-to-end deep machine learning' approach

The automotive industry is at a critical juncture where the integration of advanced technologies, particularly in autonomous driving, is reshaping the landscape of vehicle development. As the industry navigates the complexities of incorporating machine learning into critical decision-making processes, it is essential to understand the challenges and considerations that come with this paradigm shift.


The transition to end-to-end deep machine learning offers significant advantages, but it also presents challenges, particularly in achieving Functional Safety (FuSa) compliance. Traditional FuSa standards were not designed to accommodate machine-learned, data-driven methods for critical decisions in the vehicle. This poses a challenge for the end-to-end approach, since deep machine learning is integral to its planning process. Researchers are actively exploring solutions, with extensive testing being one approach to meeting Functional Safety requirements.


Deep machine learning methods, while powerful, struggle to handle corner cases and rare scenarios, a difficulty known as the "long tail problem".

The long tail problem & solution


Traditional methods are preferred for critical decisions because of their testability, especially in corner cases where machine learning may struggle. Deep machine learning development tends to follow an 80-20 pattern: roughly 80% of the work is relatively straightforward and easily testable, while the remaining 20% comprises complex corner cases that pose significant testing challenges. This phenomenon, known as the long tail problem, is inherent in machine learning approaches: because corner cases appear only rarely in collected data, models require extensive time and resources to learn them.


For instance, consider a self-driving vehicle navigating through a bustling city center with intricate road layouts, diverse traffic scenarios, and complex pedestrian interactions. In such practical cases, the long tail problem of deep machine learning is exemplified by the need to account for rare but critical situations, such as unconventional traffic signals, unexpected pedestrian movements, or unique environmental conditions.


Addressing the challenges associated with the long tail problem in deep machine learning and the testing of corner cases in autonomous driving development requires innovative solutions. One potential approach involves the integration of advanced simulation technologies to create diverse and realistic testing environments. By leveraging simulation platforms that accurately replicate real-world scenarios, developers can systematically expose autonomous driving systems to a wide range of corner cases, including rare and challenging situations.
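A simple way to systematize such coverage is to enumerate permutations of scenario parameters and compile each into a simulator run. The sketch below illustrates the idea; the parameter names and values are invented for illustration.

    import itertools

    # Sketch of systematic corner-case coverage by permuting scenario parameters.
    weather = ["clear", "heavy_rain", "fog", "low_sun"]
    actors = ["jaywalking_pedestrian", "cut_in_vehicle", "stalled_truck"]
    signals = ["normal", "flashing_yellow", "dark_signal"]

    scenarios = list(itertools.product(weather, actors, signals))
    print(f"{len(scenarios)} scenario variants")  # 4 * 3 * 3 = 36

    # In a real pipeline, each tuple would be compiled into a simulator scenario
    # (for example an ASAM OpenSCENARIO description) and run against the AD stack.
    run_configs = [{"weather": w, "actor": a, "signal": s} for w, a, s in scenarios]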


Advanced data collection and analysis techniques can also help address the long tail problem. By aggregating and analyzing data from practical cases and real-world examples, developers can identify and prioritize critical corner cases for targeted testing and validation. This data-driven approach enables a more focused and efficient strategy for the 20% of cases that pose significant challenges in deep machine learning.
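One possible shape for such corner-case mining: score each logged event by model uncertainty, driver disagreement, and rarity, then label and test the highest-scoring events first. All field names, weights, and thresholds below are illustrative assumptions.

    # Sketch of data-driven corner-case mining: rank logged events so that rare,
    # low-confidence situations get labeled and tested first.
    def corner_case_score(event: dict) -> float:
        """Higher score = more valuable for targeted testing."""
        score = 0.0
        if event["model_confidence"] < 0.5:      # perception was unsure
            score += 1.0
        if event["driver_override"]:             # human disagreed with the system
            score += 2.0
        score += 1.0 / max(event["scenario_frequency"], 1e-6)  # rarity bonus
        return score

    events = [
        {"id": "e1", "model_confidence": 0.9, "driver_override": False, "scenario_frequency": 120.0},
        {"id": "e2", "model_confidence": 0.3, "driver_override": True,  "scenario_frequency": 2.0},
    ]
    priority = sorted(events, key=corner_case_score, reverse=True)
    print([e["id"] for e in priority])   # -> ['e2', 'e1']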


Finally, collaboration among automotive industry stakeholders, regulatory bodies, and technology innovators can foster the development of standardized testing frameworks and best practices. Establishing industry-wide guidelines for testing and validating autonomous driving systems, particularly for corner cases, can enhance safety and reliability while expediting the deployment of advanced technologies.

Deployment pipeline


The deployment pipeline is a critical phase in the development of a mature L2+ driving system, ensuring the seamless integration and validation of autonomous driving technologies. This phase encompasses simulation testing, cybersecurity measures, and a range of testing protocols, culminating in on-road testing and final deployment. Let's explore the key components of the deployment pipeline:


A. Millions of kilometers of testing: the simulation way

Simulation testing plays a pivotal role in validating the performance of the autonomous driving system in a controlled environment. By recreating various driving scenarios, traffic conditions, and environmental factors, it allows for a thorough evaluation of the system's decision-making capabilities and responses. This phase ensures that the AI models and sensor fusion algorithms function as intended, minimizing potential risks before moving to real-world testing.


B. Cybersecurity

As autonomous driving systems become increasingly complex, the importance of robust cybersecurity measures cannot be overstated. The deployment pipeline incorporates comprehensive cybersecurity protocols to safeguard the vehicle's systems from potential cyber threats, data breaches, and unauthorized access.     


C. Testing methodologies

The deployment pipeline employs a range of testing methodologies to validate the system's performance and safety. Traditionally, software-in-the-loop (SIL), hardware-in-the-loop (HIL), and vehicle-in-the-loop (VIL) testing are employed. To validate the AD stack, a further step known as shadow mode testing is introduced. Here's an overview:


1. Software-in-the-loop (SIL)

SIL testing involves simulating the autonomous driving system's software components in a virtual environment, allowing for rapid testing and validation of algorithms without the need for physical hardware.


2. Hardware-in-the-loop (HIL)

HIL testing integrates the actual hardware components of the autonomous driving system, such as sensors and electronic control units (ECUs), into a simulated environment. This approach enables the testing of the system's real-time performance and interactions between hardware and software.


3. Driver-in-the-loop (DIL)

DIL testing, also known as Vehicle-in-the-Loop (VIL), involves testing the autonomous driving system on a physical vehicle in a controlled environment, such as a test track or closed course. This phase allows for the evaluation of the system's performance under real-world conditions, including interactions with other vehicles and obstacles.


4. Shadow mode testing

Shadow mode testing is a crucial step in the deployment pipeline, where the autonomous driving system runs in parallel with the human driver, without directly controlling the vehicle. During this phase, the system's algorithms are evaluated against the driver's actions, ensuring that the system's decisions align with human driving behavior. Shadow mode testing provides valuable insights into the system's performance and helps identify potential areas for improvement before transitioning to fully autonomous mode.
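A minimal sketch of the comparison logic at the heart of shadow mode, assuming per-frame system and driver commands: disagreements beyond a tolerance are logged for later analysis. All field names and thresholds are illustrative assumptions.

    STEER_TOL_DEG = 5.0
    SPEED_TOL_MPS = 1.5

    def compare_frame(system_cmd: dict, driver_cmd: dict, frame_id: int) -> dict | None:
        """Return a disagreement record if system and driver diverge, else None."""
        steer_delta = abs(system_cmd["steering_deg"] - driver_cmd["steering_deg"])
        speed_delta = abs(system_cmd["target_speed"] - driver_cmd["speed"])
        if steer_delta > STEER_TOL_DEG or speed_delta > SPEED_TOL_MPS:
            return {"frame": frame_id,
                    "steer_delta_deg": steer_delta,
                    "speed_delta_mps": speed_delta}
        return None

    mismatch = compare_frame(
        {"steering_deg": -12.0, "target_speed": 18.0},
        {"steering_deg": -2.5, "speed": 18.2},
        frame_id=48021,
    )
    # mismatch -> {'frame': 48021, 'steer_delta_deg': 9.5, 'speed_delta_mps': ~0.2}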


5. On-road testing (Final stage of AD stack testing)

Once the autonomous driving system has successfully passed the simulation and testing phases, it undergoes a final update to ensure that the latest software and security patches are in place. The system is then deployed for on-road testing, where it operates in a real-world environment under the supervision of trained safety drivers. On-road testing allows for the evaluation of the system's performance in diverse driving conditions, traffic scenarios, and environmental factors, providing valuable data for further refinement and optimization.


The deployment pipeline is a meticulous process that typically takes 6 to 12 months to complete, ensuring the safety, reliability, and performance of the autonomous driving system before it is released to customers.     

Towards L2+ systems and beyond with Quest Global's data-driven framework

The automotive industry is rapidly embracing end-to-end deep machine learning approaches, highlighting the critical need for a comprehensive data-driven framework for L2+ driving systems. Quest Global stands at the forefront of this technological evolution, harnessing our expertise to navigate the complexities and challenges inherent in advanced driving systems. Our focus on innovation and dedication to excellence position us as a trusted partner for automotive OEMs seeking to embrace the future of autonomous mobility. 


At Quest Global, our data-driven framework for L2+ driving systems represents a crucial stride towards a new era of automotive innovation, and we are resolutely dedicated to leading this transformative journey. This article serves as an exploration of our framework within the evolving L2+ ADAS landscape, offering a pragmatic insight tailored to meet the needs and expectations of automotive OEMs.


Author

Kamal Deep Sethi

Global ADAS/Autonomous Mobility CoE Leader, Quest Global
