
RAPA (Real-time Attention-based Pillar Architecture for 4D Radar Perception)


Deep Fusion AI

One-Line Product Definition

RAPA (Real-time Attention-based Pillar Architecture) – a software-defined autonomous driving perception engine that perceives the full 360° surroundings using only multiple 4D imaging radars. It achieves LiDAR-grade object detection and tracking in real time through a deep learning model that overcomes the sparsity and noise of radar data.

Problem Definition

Fully autonomous vehicles are built on sensor fusion across LiDAR, cameras, and radar. LiDAR offers excellent performance, but the hardware is very expensive and its performance degrades in bad weather.

Radar is comparatively inexpensive and keeps working in adverse weather, but its output is sparse and noisy, so deep learning-based recognition rates have been low.

Until now, radar in vehicles has mainly served auxiliary roles such as speed measurement, while high-precision object recognition has relied on LiDAR and cameras, driving up both cost and sensor complexity.

In other words, *"replacing expensive LiDAR with cheap radar"* has been a key hurdle for commercializing autonomous driving. Existing deep learning models, however, have been unable to make proper use of radar's sparse point clouds, suffering significant performance degradation.

In addition, using multiple radars simultaneously increases the burden of data synchronization and computing, making real-time processing difficult.

In short, there has been no solution for stable 360-degree perception using radar alone.

Key Differentiators

The RAPA engine is an industry-first AI solution that detects and tracks surrounding objects using only multiple 4D imaging radar inputs.

Its dedicated deep learning architecture, the Attention-based Pillar Network, learns the patterns and noise characteristics of radar signals and applies learned filtering, extracting meaningful features from radar's characteristically sparse data.
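To make the idea concrete, here is a minimal sketch of attention-based pillar pooling, assuming a PointPillars-style grouping of radar points into vertical pillars. The class name, feature sizes, and the five-channel point format (x, y, z, Doppler, RCS) are illustrative assumptions, not the actual RAPA design.

```python
import torch
import torch.nn as nn

class AttentionPillarEncoder(nn.Module):
    """Pools the points in each pillar with learned attention weights, so the
    network can down-weight noisy radar returns instead of max-pooling them.
    (Hypothetical sketch, not the RAPA implementation.)"""

    def __init__(self, in_dim: int = 5, feat_dim: int = 64):
        super().__init__()
        # per-point embedding: (x, y, z, doppler, rcs) -> feature vector
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # scalar attention score for each point within its pillar
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, pillars: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # pillars: (P, N, in_dim) zero-padded points per pillar
        # mask:    (P, N) True where a real point exists
        feats = self.point_mlp(pillars)                    # (P, N, F)
        scores = self.score(feats).squeeze(-1)             # (P, N)
        scores = scores.masked_fill(~mask, float("-inf"))  # ignore padding
        attn = torch.softmax(scores, dim=-1)               # (P, N)
        return (attn.unsqueeze(-1) * feats).sum(dim=1)     # (P, F) pillar features

# usage: 100 pillars, up to 16 radar points each, 5 channels per point
enc = AttentionPillarEncoder()
pts = torch.randn(100, 16, 5)
msk = torch.rand(100, 16) > 0.7  # radar pillars are mostly sparse
msk[:, 0] = True                 # keep at least one valid point per pillar
print(enc(pts, msk).shape)       # torch.Size([100, 64])
```

The design choice worth noting is the softmax-weighted pooling: unlike the hard max-pooling of standard pillar encoders, it lets the network learn to suppress noisy returns inside each pillar, which is what "customized filtering" plausibly refers to here.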

In particular, it actively exploits Doppler (radial velocity) information to distinguish stationary from moving objects and to separate noise from real returns.
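A hedged sketch of the underlying principle, not RAPA's actual method: for a stationary point, the measured radial velocity should match the projection of the ego velocity onto the line of sight, so the residual between the two flags genuinely moving targets. The field layout, sign convention, and threshold below are all assumptions.

```python
import numpy as np

def split_by_doppler(points: np.ndarray, ego_v: np.ndarray, tol: float = 0.5):
    """points: (N, 4) rows of [x, y, z, v_radial] in the vehicle frame
    (v_radial > 0 assumed to mean 'moving away'); ego_v: (3,) ego velocity;
    tol: residual tolerance in m/s."""
    xyz = points[:, :3]
    los = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)  # unit line-of-sight
    expected = -(los @ ego_v)   # radial velocity a stationary point would show
    residual = np.abs(points[:, 3] - expected)
    moving = residual > tol     # large residual => genuinely moving target
    return points[moving], points[~moving]

# driving forward at 5 m/s: a static point ahead closes at 5 m/s (v_radial = -5)
pts = np.array([[10.0, 0.0, 0.0, -5.0],   # matches the static prediction
                [10.0, 5.0, 0.0,  3.0]])  # does not: a moving target
mov, stat = split_by_doppler(pts, ego_v=np.array([5.0, 0.0, 0.0]))
print(len(mov), len(stat))  # 1 1
```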

As a result, object recognition accuracy has improved by more than 40% over existing methods, and it has outperformed competing solutions on public benchmarks.

In addition, the model is optimized for real-time inference on automotive edge computing boards, fusing data from multiple radars into full 360-degree perception with minimal processing delay.
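As a rough illustration of what such a fusion front end could look like (an assumption, not taken from RAPA): each radar's points can be brought into a common vehicle frame via its extrinsic calibration and concatenated before the network sees them.

```python
import numpy as np

def fuse_radars(clouds, extrinsics):
    """clouds: list of (Ni, 3) xyz arrays, one per radar;
    extrinsics: list of 4x4 radar-to-vehicle transforms (hypothetical setup)."""
    fused = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        fused.append((homo @ T.T)[:, :3])                # into the vehicle frame
    return np.vstack(fused)

# two radars: one facing forward at the origin, one facing rearward 3.5 m back
front = np.eye(4)
rear = np.eye(4)
rear[:2, :2] = [[-1.0, 0.0], [0.0, -1.0]]  # 180-degree yaw rotation
rear[0, 3] = -3.5                          # mounted at the rear of the vehicle
cloud = fuse_radars([np.random.rand(50, 3), np.random.rand(50, 3)], [front, rear])
print(cloud.shape)  # (100, 3)
```

In a real deployment the per-radar clouds would also need timestamp alignment before fusion, which is exactly the synchronization burden noted in the problem definition above.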

Ultimately, a vehicle equipped with RAPA can detect and track vehicles, pedestrians, and obstacles without cameras or LiDAR, and it remains stable even in adverse weather or backlight conditions.

On cost, it delivers large savings by replacing multiple expensive LiDARs with a combination of several radars plus software.

Furthermore, it can be expanded to various platforms such as unmanned surface vehicles (USV) and autonomous robots, and has already been applied to unmanned military vehicles, demonstrating high stability even in severe cold and rainy conditions.

In short, RAPA is a game-changer that opens *"an era of autonomous driving with radar only"*, enabling low-cost, high-performance autonomous driving perception.

Key Adopters

Automobile OEMs and autonomous driving technology companies (B2B) are the key customers.

For example, electric vehicle startups or robotaxi companies are actively considering adopting this radar perception stack instead of LiDAR to significantly reduce costs.

In fact, according to CES news, a global mobility company plans to mass-produce robotaxis equipped with RAPA technology by the end of 2026.

In addition, ADAS (Advanced Driver Assistance Systems) suppliers can offer high-resolution radar-based solutions to automobile manufacturers.

Other customer segments include the defense sector (unmanned military vehicles, security robots) and port/logistics robots.

Therefore, it is provided to vehicle/robot manufacturers as algorithm licenses or modules in a B2B format, and consumers will ultimately benefit through vehicles/services (B2C) equipped with the technology.

Scalability

This solution has the potential to expand throughout the autonomous driving/robotics industry.

As more cars adopt radar, the software can be delivered as a pure software upgrade, and it can be adopted to reduce LiDAR dependence in next-generation vehicle designs.

In addition, there are Radar+Camera and Radar+LiDAR fusion versions (RAPA-RC, RAPA-RL) currently under development, which are expected to be used in various sensor combinations.

Regionally, there is global demand from automobile OEMs in the US and Europe, autonomous driving companies in Israel, and mobility companies in Korea.

Standardization and regulation pose no obstacle once safety is proven; if anything, bodies such as Euro NCAP are likely to welcome it as a low-cost way to improve pedestrian detection performance.

From a corporate perspective, a startup like Deep Fusion AI can contract directly with OEMs, or have its technology recognized and be acquired by a large parts supplier as a path to scale.

Having already received a Best of Innovation award at the CES Innovation Awards, the company is attracting industry attention, and partnership inquiries are pouring in.

Given the potential impact of a technology that *"could replace LiDAR"*, widespread industry adoption is expected if it is commercialized properly.

Judges' Evaluation

Winning the Best of Innovation Award in the AI category at CES 2026 underscores how unique and promising this technology is.

Automotive industry experts were amazed, asking, "Can radar alone really see this accurately?", and industry publications such as Traffic Technology Today ran headlines like *"DeepFusionAI accelerates autonomous driving without LiDAR"*.

As the achievement of a startup based in Incheon, it has become a hot topic in domestic and international investment and technology communities, praised for *"catching two rabbits at once: cost and stability"*.

In terms of technological maturity, the deployment on naval unmanned boats has already proven this is no bluff, and investors have high expectations for the announced 2026 mass-production plan.

On the other hand, some cautious voices ask, "Radar resolution has its limits, so can it really replace LiDAR entirely?" Since the practical goal is to reduce the number of expensive LiDARs, however, it is more accurately seen as a complement than a full replacement.

Market expectations are very high: it is recognized as a *"key technology that will dramatically lower the price of autonomous vehicles"*, and there are rumors that the company will sign a contract with a global OEM.

If anything, the technology is underexposed relative to its level of innovation rather than overrated, and it can be expected to draw even more attention going forward.

Analyst Insights

🔥 High marketability / strong business potential – As a game-changing technology that solves the cost problem of autonomous driving, it should keep drawing interest from major automakers, with large ripple effects across related industries.

The award list data is based on the official CES 2026 website; the detailed analysis is produced by USLab.ai. For content modification requests or inquiries, please contact contact@uslab.ai. Free to use with source attribution (USLab.ai) (CC BY).
