About us

Welcome to the Intelligent Networked Vehicle System (INVS) Laboratory. We are part of the Cloud Computing Center at the Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS). Our lab focuses on general autonomous navigation for mobile robots, with an emphasis on leveraging high-dimensional optimization and high-fidelity simulation to improve efficiency and robustness and thereby push the limits of practical deployment in everyday life. Our current research topics are end-to-end model-based learning, planning and control, and extended reality.

We are hiring new MPhil and Ph.D. students to work on LiDAR SLAM, planning, simulation, optimization, and reinforcement learning. Prospective students can contact Dr. Wang at s.wang@siat.ac.cn about these positions.

Project Highlights

Distributed Dynamic Map Fusion via Federated Learning for Intelligent Networked Vehicles 【ICRA’21】

We present a federated learning-assisted dynamic map fusion framework, CarlaINVS, which enables object-level fusion and distributed online learning to achieve high map quality with low communication overhead. CarlaINVS consists of 1) a three-stage map fusion pipeline based on DBSCAN clustering, score-based weighted-sum fusion, and IoU-based box pruning; 2) a point-cloud federated learning algorithm, which fine-tunes object feature models in a distributed manner by aggregating model parameters; and 3) a knowledge distillation method that transfers knowledge from roadside units to individual vehicles. CarlaINVS is implemented in CARLA and compared against extensive benchmark schemes.

Authors: Zijian Zhang, Shuai Wang, Yuncong Hong, Liangkai Zhou, Qi Hao
Paper: Free access here  Code: available here
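
For intuition, here is a minimal Python sketch of the score-based weighted-sum and IoU-based box-pruning stages of the fusion pipeline. It assumes axis-aligned 2D boxes in [x1, y1, x2, y2] format; the helper names and thresholds are illustrative assumptions and not part of the released CarlaINVS code.

import numpy as np

def iou_2d(a, b):
    # Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_cluster(boxes, scores):
    # Score-weighted average of boxes reporting the same object.
    w = np.asarray(scores, dtype=float)
    w = w / (w.sum() + 1e-9)
    return (w[:, None] * np.asarray(boxes, dtype=float)).sum(axis=0)

def prune_boxes(boxes, scores, iou_thr=0.7):
    # IoU-based pruning: keep the highest-scoring box among near-duplicates.
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if all(iou_2d(boxes[i], boxes[j]) < iou_thr for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]

In the full framework, steps like these would act on detections grouped by DBSCAN clustering and reported by multiple vehicles and roadside units.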

NeuPAN: Direct Point Robot Navigation with End-to-End Model-based Learning 【TRO’24】(submitted)

We present NeuPAN, an end-to-end model-based learning framework that directly maps raw points to a distance-oriented latent space, which serves as a neural regularizer for computing physically bounded robot actions. NeuPAN avoids the error propagation from perception to control and the lack of generalization that limit existing solutions. NeuPAN is a real-time (20 Hz), highly accurate (dm-level), map-free (suitable for exploration), robot-agnostic (directly deployable on new robots), and environment-invariant (i.e., no retraining across different scenarios) robot navigation system. Experiments demonstrate that NeuPAN outperforms various benchmarks in terms of accuracy, efficiency, robustness, and generalization across diverse environments, including a cluttered sandbox, office, corridor, and parking lot. NeuPAN works well in unstructured environments with arbitrarily shaped, undetectable objects, making otherwise impassable ways passable.

Authors: Ruihua Han, Shuai Wang, Shuaijun Wang, Zeqing Zhang, Jianjun Chen, Shijie Lin, Chengyang Li, Chengzhong Xu, Yonina C Eldar, Qi Hao, Jia Pan

Paper: Free access here  Code: available here
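
The PyTorch sketch below is a hypothetical toy conveying only the core idea: a learned, distance-oriented latent computed from raw points acts as a regularizer while an action is optimized within physical bounds. The encoder architecture, the one-step point rollout, and all names and parameters are assumptions for illustration, not NeuPAN's actual implementation.

import torch
import torch.nn as nn

class DistanceEncoder(nn.Module):
    # Hypothetical encoder: maps 2-D obstacle points to a distance-like latent value.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):               # points: (N, 2) in the robot frame
        return self.net(points)              # (N, 1) latent clearances

def plan_action(points, a_ref, encoder, a_max=1.0, steps=50, lr=0.05, lam=1.0, dt=0.1):
    # Choose a bounded action close to a reference command while keeping the
    # learned point-wise clearance large (the latent acts as a neural regularizer).
    a = a_ref.clone().requires_grad_(True)
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        shifted = points - a.unsqueeze(0) * dt        # crude one-step rollout
        clearance = encoder(shifted).min()            # smallest latent clearance
        loss = (a - a_ref).pow(2).sum() - lam * clearance
        loss.backward()
        opt.step()
        with torch.no_grad():
            a.clamp_(-a_max, a_max)                   # physically bounded action
    return a.detach()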

Seamless Virtual Reality with Integrated Synchronizer and Synthesizer for Autonomous Driving 【RA-L’24】

We present a seamless virtual reality (SVR) platform for autonomous driving, which enables virtual and real agents to interact with each other in a shared symbiotic world. SVR mitigates the VR inconsistency and degraded fidelity of existing driving simulators. The crux of SVR is an integrated synchronizer and synthesizer (IS2) design, which consists of a drift-aware lidar-inertial synchronizer for VR colocation and a motion-aware deep visual synthesis network for VR image generation. We implement SVR on car-like robots in two sandbox platforms, achieving cm-level VR colocation accuracy and 3.2% VR image deviation. Experiments show that the proposed SVR reduces intervention times, missed turns, and failure rates compared with other benchmarks. The SVR-trained neural network can handle unseen situations in real-world environments by leveraging the knowledge it learned in the VR space.

Authors: He Li, Ruihua Han, Zirui Zhao, Wei Xu, Qi Hao, Shuai Wang, Chengzhong Xu

Paper: Free access here 
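
As a rough illustration of the colocation idea, the Python sketch below low-pass filters the offset between the real robot's lidar-inertial pose and its virtual twin, so slow drift is absorbed without injecting jitter into the VR scene. It is a simplified planar stand-in for the drift-aware synchronizer; the class, parameters, and update rule are all assumptions.

import numpy as np

class DriftAwareSynchronizer:
    # Hypothetical sketch: track the slowly drifting offset between the real
    # robot and its virtual twin so the two stay colocated.
    def __init__(self, alpha=0.1):
        self.alpha = alpha                       # smoothing factor for drift updates
        self.offset = np.zeros(3)                # [dx, dy, dyaw] real -> virtual

    def update(self, pose_real, pose_virtual):
        # Poses are planar [x, y, yaw]; returns the drift-compensated real pose
        # at which the virtual counterpart should be rendered.
        err = pose_virtual - (pose_real + self.offset)
        err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))   # wrap yaw error
        self.offset += self.alpha * err           # slowly absorb the drift
        return pose_real + self.offset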

RDA: An Accelerated Collision-free Motion Planner for Autonomous Navigation in Cluttered Environments 【RA-L’23】

We present an accelerated collision-free motion planner, the regularized dual alternating direction method of multipliers (RDADMM, or RDA for short), for the collision avoidance motion planning problem. In contrast to existing shape-ignored collision avoidance, which is prone to getting stuck, and shape-aware collision avoidance, which runs at a low frequency, the proposed RDA is both fast and shape-aware. This is achieved by solving a smooth biconvex reformulation via duality and computing collision-free trajectories in parallel for each obstacle, which significantly reduces computation time. Experimental results show that the proposed method generates smooth collision-free trajectories with less computation time than other benchmarks and performs robustly in cluttered environments.

Authors: Ruihua Han, Shuai Wang, Shuaijun Wang, Zeqing Zhang, Qianru Zhang, Yonina Eldar, Qi Hao, Jia Pan

Paper: Free access here  Code: available here
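
To illustrate the structural idea of per-obstacle parallelism, here is a toy consensus-ADMM sketch in Python: each obstacle owns its own copy of the decision variable and a dual variable, so the per-obstacle projections can run independently (and hence in parallel). It optimizes a single 2-D waypoint against circular keep-out regions, not a full trajectory, and is not the smooth biconvex reformulation used in RDA; all names and parameters are assumptions.

import numpy as np

def project_outside(z, center, radius):
    # Project z onto the feasible set {||z - center|| >= radius} of one obstacle.
    d = z - center
    n = np.linalg.norm(d)
    if n >= radius:
        return z
    return center + d / (n + 1e-9) * radius

def consensus_admm(x_goal, obstacles, rho=1.0, iters=100):
    # Toy consensus ADMM: minimize 0.5 * ||x - x_goal||^2 subject to x lying
    # outside every circular obstacle (center, radius). Each obstacle keeps a
    # copy z[i] and dual u[i], updated independently per iteration.
    m = len(obstacles)
    x = np.array(x_goal, dtype=float)
    z = np.tile(x, (m, 1))
    u = np.zeros((m, 2))
    for _ in range(iters):
        # x-update couples the goal term with all obstacle copies
        x = (x_goal + rho * (z - u).sum(axis=0)) / (1 + rho * m)
        # z/u-updates: one independent projection per obstacle (parallelizable)
        for i, (c, r) in enumerate(obstacles):
            z[i] = project_outside(x + u[i], np.asarray(c, dtype=float), r)
            u[i] += x - z[i]
    return x

For example, consensus_admm(np.array([2.0, 0.0]), [((1.0, 0.0), 0.6), ((1.5, 0.5), 0.4)]) returns a waypoint near the goal that clears both circles.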

Communication Resources Constrained Hierarchical Federated Learning for End-to-End Autonomous Driving 【IROS’23】

We present the Communication Resources Constrained Hierarchical Federated Learning (CRCHFL) framework, which minimizes the generalization error of the autonomous driving model using hybrid data and model aggregation. CRCHFL overcomes the slow convergence caused by long-range communications between vehicles and cloud servers. CRCHFL orchestrates the constrained communication resources, and its effectiveness is evaluated in the Car Learning to Act (CARLA) simulation platform. Results show that the proposed CRCHFL both accelerates the convergence rate and enhances the generalization of the federated learning autonomous driving model. Moreover, under the same communication resource budget, it outperforms the HFL baseline by 10.33% and the SFL baseline by 12.44%.

Authors: Wei-Bin Kou, Shuai Wang, Guangxu Zhu, Bin Luo, Yingxian Chen, Derrick Wing Kwan Ng, Yik-Chung Wu

Paper: Free access here  Code: available here
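
For reference, a minimal sketch of the hierarchical model-aggregation path (vehicles to edge servers to cloud) is shown below. It uses plain FedAvg over flattened parameter vectors and omits the hybrid data aggregation, so it should be read as a simplification under stated assumptions rather than the CRCHFL algorithm itself.

import numpy as np

def fed_avg(models, weights=None):
    # Weighted average of model parameter vectors (FedAvg).
    models = np.asarray(models, dtype=float)
    if weights is None:
        weights = np.ones(len(models))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return (w[:, None] * models).sum(axis=0)

def hierarchical_round(vehicle_models_per_edge, samples_per_edge):
    # One hierarchical round: vehicles -> edge servers -> cloud.
    # Intra-edge aggregation is cheap and frequent; the long-range hop to the
    # cloud is the communication bottleneck, so it happens once per round.
    edge_models, edge_weights = [], []
    for models, n_samples in zip(vehicle_models_per_edge, samples_per_edge):
        edge_models.append(fed_avg(models, n_samples))    # intra-edge FedAvg
        edge_weights.append(sum(n_samples))
    return fed_avg(edge_models, edge_weights)              # cloud-level FedAvg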

MPCOM: Robotic Data Gathering with Radio Mapping and Model Predictive Communication 【IROS’24】

We present radio-map-guided model predictive communication (MPCOM), which navigates the robot using both grid and radio maps for shape-aware collision avoidance and communication-aware trajectory generation in dynamic environments. In contrast to existing motion planning methods that plan robot trajectories according to motion factors alone, MPCOM maximizes robotic data-gathering efficiency. The proposed MPCOM trades off the time spent on reaching the goal, avoiding collisions, and improving communication, as it captures high-order signal propagation characteristics using radio maps and incorporates a map-guided communication regularizer into the model predictive framework. Experiments show that the proposed MPCOM outperforms other benchmarks in both LOS and NLOS cases.

Authors: Zhiyou Ji, Guoliang Li, Ruihua Han, Shuai Wang, Bing Bai, Wei Xu, Kejiang Ye, Chengzhong Xu
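
The toy sketch below shows one way a radio map can enter the planning objective as a communication regularizer alongside goal-progress and collision terms, with the best sampled rollout selected by total cost. The cost weights, the soft collision penalty, and the function names are illustrative assumptions, not the MPCOM formulation.

import numpy as np

def comm_aware_cost(traj, goal, obstacles, radio_map, w_comm=1.0, w_obs=10.0, d_safe=0.5):
    # Toy MPC objective: reach the goal, avoid circular obstacles, and favor
    # waypoints with good predicted channel quality from the radio map.
    cost = np.sum(np.linalg.norm(traj - goal, axis=1))            # goal progress
    for c, r in obstacles:                                        # soft collision penalty
        d = np.linalg.norm(traj - np.asarray(c, dtype=float), axis=1)
        cost += w_obs * np.sum(np.maximum(0.0, r + d_safe - d))
    # radio_map(x, y) -> predicted rate at a waypoint; higher is better
    cost -= w_comm * sum(radio_map(p[0], p[1]) for p in traj)
    return cost

def choose_trajectory(candidates, goal, obstacles, radio_map):
    # Pick the sampled rollout with the lowest combined cost.
    return min(candidates, key=lambda t: comm_aware_cost(t, goal, obstacles, radio_map))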

Edge Accelerated Robot Navigation with Hierarchical Motion Planning 【TMECH’24】

We present EARN, which navigates low-cost robots in real time via collaborative motion planning. In contrast to existing local or edge motion planning solutions that ignore the interdependency between low-level motion planning and high-level resource allocation, EARN adopts model predictive switching (MPS), which maximizes the expected switching gain with respect to robot states and actions under computation and communication resource constraints. As such, each robot can dynamically switch between a point-based motion planner executed locally to guarantee safety (e.g., path following) and a shape-based motion planner executed remotely to guarantee efficiency (e.g., overtaking). We validate the performance of EARN in indoor simulation, outdoor simulation, and real-world environments. Experiments show that EARN achieves significantly shorter navigation times and lower collision ratios than state-of-the-art navigation approaches.

Authors: Guoliang Li, Ruihua Han, Shuai Wang, Fei Gao, Yonina Eldar, Chengzhong Xu
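
A minimal sketch of the switching logic is given below: the robot stays on the fast local planner unless the expected gain of the shape-aware edge planner outweighs its round-trip latency and the edge server has spare capacity. The gain model, thresholds, and names are assumptions made for illustration, not the MPS formulation in EARN.

def expected_switching_gain(time_local, time_edge, latency_edge, p_success=0.9):
    # Expected travel-time saving from switching to the edge planner,
    # discounted by its success probability and round-trip latency.
    return p_success * (time_local - time_edge) - latency_edge

def select_planner(time_local, time_edge, latency_edge, edge_load, load_cap=0.8):
    # Switch to the shape-aware edge planner only when the expected gain is
    # positive and the edge server still has computation budget.
    gain = expected_switching_gain(time_local, time_edge, latency_edge)
    if gain > 0.0 and edge_load < load_cap:
        return "edge"     # shape-based planner (e.g., overtaking)
    return "local"        # point-based planner (e.g., path following)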
