About us

Welcome to the Intelligent Networked Vehicle Systems (INVS) Laboratory. We are part of the Cloud Computing Center at the Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS). Our lab focuses on general autonomous navigation for mobile robots, with an emphasis on leveraging high-dimensional optimization and high-fidelity simulation to enhance efficiency and robustness, pushing the limits of their practical use in everyday life. Our current focus areas are end-to-end model-based learning, planning and control, and extended reality.

We are hiring new MPhil and Ph.D. students to work on LiDAR SLAM, planning, simulation, optimization, and reinforcement learning. Prospective students can contact Dr. Wang about these positions.

Project Highlights

Distributed Dynamic Map Fusion via Federated Learning for Intelligent Networked Vehicles 【ICRA’21】

We present a federated learning-assisted dynamic map fusion framework, CarlaINVS, which enables object-level fusion and distributed online learning to achieve high map quality with low communication overhead. CarlaINVS consists of 1) a three-stage map fusion pipeline built on DBSCAN clustering, score-based weighted summation, and IoU-based box pruning; 2) a point-cloud federated learning algorithm that fine-tunes object feature models distributively by aggregating model parameters; 3) a knowledge distillation method that transfers knowledge from roadside units to individual vehicles. CarlaINVS is implemented in CARLA and compared against extensive benchmark schemes.

Authors: Zijian Zhang, Shuai Wang, Yuncong Hong, Liangkai Zhou, Qi Hao
Paper: Free access here  Code:  Code
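The third fusion stage prunes duplicate detections of the same object reported by multiple vehicles. The paper's exact box parameterization and thresholds may differ; a minimal 2-D sketch of IoU-based box pruning (function names and the 0.5 threshold are illustrative) might look like:

```python
import numpy as np

def iou_2d(a, b):
    """Axis-aligned IoU between two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def prune_boxes(boxes, scores, iou_thresh=0.5):
    """Keep only the highest-scoring box among heavily overlapping duplicates."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    for i in order:
        if all(iou_2d(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]
```

In the full pipeline this runs after DBSCAN groups nearby detections and the score-weighted sum fuses their attributes.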

NeuPAN: Direct Point Robot Navigation with End-to-End Model-based Learning 【TRO’24】(submitted)

We present NeuPAN, an end-to-end model-based learning framework that directly maps raw points to a distance-oriented latent space, which serves as a neural regularizer for computing physically bounded robot actions. NeuPAN avoids the error propagation from perception to control and the lack of generalization seen in existing solutions. NeuPAN is a real-time (20 Hz), highly accurate (dm-level), map-free (suitable for exploration), robot-agnostic (directly deployable on new robots), and environment-invariant (i.e., no retraining across scenarios) robot navigation system. Experiments demonstrate that NeuPAN outperforms various benchmarks in accuracy, efficiency, robustness, and generalization across environments, including a cluttered sandbox, office, corridor, and parking lot. NeuPAN also works well in unstructured environments containing undetectable objects of arbitrary shape, making impassable ways passable.

Authors: Ruihua Han, Shuai Wang, Shuaijun Wang, Zeqing Zhang, Jianjun Chen, Shijie Lin, Chengyang Li, Chengzhong Xu, Yonina C Eldar, Qi Hao, Jia Pan

Paper: Free access here  Code:  Code
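To illustrate the idea of distance-oriented features acting as a regularizer on actions: NeuPAN learns this mapping end-to-end, whereas the toy sketch below substitutes a hand-coded disc-robot clearance model. All names and the hinge-style penalty are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def point_distance_features(points, robot_center, robot_radius):
    """Map raw 2-D points to clearance values w.r.t. a disc-shaped robot.
    (NeuPAN learns this mapping; the disc model is a stand-in.)"""
    return np.linalg.norm(points - robot_center, axis=1) - robot_radius

def regularized_cost(action_cost, clearances, weight=1.0, margin=0.3):
    """Penalize candidate actions whose predicted clearance drops below a
    safety margin, keeping actions physically bounded."""
    penalty = np.sum(np.maximum(0.0, margin - clearances) ** 2)
    return action_cost + weight * penalty
```

An action that keeps every point's clearance above the margin pays no penalty; one that predicts near-contact is heavily discouraged.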

Seamless Virtual Reality with Integrated Synchronizer and Synthesizer for Autonomous Driving 【RA-L’24 & IROS’24】

We present a seamless virtual reality (SVR) platform for autonomous driving, which enables virtual and real agents to interact with each other in a shared symbiotic world. SVR mitigates the VR inconsistency and degraded fidelity of existing driving simulators. The crux of SVR is an integrated synchronizer and synthesizer (IS2) design, which consists of a drift-aware lidar-inertial synchronizer for VR colocation and a motion-aware deep visual synthesis network for VR image generation. We implement SVR on car-like robots in two sandbox platforms, achieving cm-level VR colocation accuracy and 3.2% VR image deviation. Experiments show that the proposed SVR reduces the number of interventions, missed turns, and failure rates compared to other benchmarks. The SVR-trained neural network can handle unseen situations in real-world environments by leveraging the knowledge it learned in the VR space.

Authors: He Li, Ruihua Han, Zirui Zhao, Wei Xu, Qi Hao, Shuai Wang, Chengzhong Xu

Paper: Free access here 

RDA: An Accelerated Collision-free Motion Planner for Autonomous Navigation in Cluttered Environments 【RA-L’23】

We present an accelerated collision-free motion planner, i.e., the regularized dual alternating direction method of multipliers (RDADMM, or RDA for short), for the collision avoidance motion planning problem. In contrast to existing shape-ignoring collision avoidance, which is prone to getting stuck, and shape-aware collision avoidance, which runs at low frequency, the proposed RDA is both fast and shape-aware. This is achieved by solving a smooth biconvex reformulation via duality and computing collision-free trajectories in parallel for each obstacle, which significantly reduces computation time. Experimental results show that the proposed method generates smooth collision-free trajectories with less computation time than other benchmarks and performs robustly in cluttered environments.

Authors: Ruihua Han, Shuai Wang, Shuaijun Wang, Zeqing Zhang, Qianru Zhang, Yonina Eldar, Qi Hao, Jia Pan

Paper: Free access here  Code:  Code ROS:  Code

Communication Resources Constrained Hierarchical Federated Learning for End-to-End Autonomous Driving 【IROS’23】

We present the Communication Resource Constrained Hierarchical Federated Learning (CRCHFL) framework, which minimizes the generalization error of an autonomous driving model using hybrid data and model aggregation. CRCHFL overcomes the slow convergence caused by long-range communications between vehicles and cloud servers, orchestrating training under constrained communication resources. Its effectiveness is evaluated in the Car Learning to Act (CARLA) simulation platform. Results show that the proposed CRCHFL both accelerates the convergence rate and enhances the generalization of the federated learning autonomous driving model. Moreover, under the same communication resource budget, it outperforms HFL by 10.33% and SFL by 12.44%.

Authors: Wei-Bin Kou, Shuai Wang, Guangxu Zhu, Bin Luo, Yingxian Chen, Derrick Wing Kwan Ng, Yik-Chung Wu

Paper: Free access here  Code:  Code
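The hierarchical structure means parameters are averaged twice: once at each edge server over its vehicles, then once at the cloud over the edge servers. A minimal sample-size-weighted sketch of that two-tier aggregation (the function names and flat parameter vectors are illustrative; CRCHFL additionally mixes in data aggregation):

```python
import numpy as np

def fed_avg(models, sizes):
    """Sample-size-weighted average of model parameter vectors (FedAvg)."""
    w = np.asarray(sizes, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

def hierarchical_round(edge_groups):
    """Two-tier aggregation: vehicles -> edge servers -> cloud.
    edge_groups: one (models, sizes) pair per edge server."""
    edge_models, edge_sizes = [], []
    for models, sizes in edge_groups:
        edge_models.append(fed_avg(models, sizes))   # aggregate at the edge
        edge_sizes.append(sum(sizes))
    return fed_avg(edge_models, edge_sizes)          # aggregate at the cloud
```

Vehicles only talk to their nearby edge server each round, so the costly long-range vehicle-to-cloud links are used far less often.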

MPCOM: Robotic Data Gathering with Radio Mapping and Model Predictive Communication

We present radio map guided model predictive communication (MPCOM), which navigates the robot with both grid and radio maps for shape-aware collision avoidance and communication-aware trajectory generation in dynamic environments. In contrast to existing motion planning methods that plan robot trajectories according to motion factors alone, MPCOM maximizes robotic data gathering efficiency. The proposed MPCOM trades off the time spent reaching the goal, avoiding collisions, and improving communication, as it captures high-order signal propagation characteristics via radio maps and incorporates a map-guided communication regularizer into the MPC framework. Experiments show that the proposed MPCOM outperforms other benchmarks in both LOS and NLOS cases.

Authors: Zhiyou Ji, Guoliang Li, Ruihua Han, Shuai Wang, Bing Bai, Wei Xu, Kejiang Ye, Chengzhong Xu
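The core idea of trading off motion against communication can be sketched as a trajectory cost with a radio-map regularizer. Everything below (grid-indexed radio map, distance-to-goal motion term, the 0.5 weight) is an illustrative simplification of MPCOM's model predictive formulation:

```python
import numpy as np

def trajectory_cost(traj, goal, radio_map, comm_weight=0.5):
    """Motion cost (distance to goal at the horizon end) plus a regularizer
    that rewards traversing cells with strong predicted signal."""
    motion = np.linalg.norm(traj[-1] - goal)
    cells = traj.astype(int)
    signal = radio_map[cells[:, 0], cells[:, 1]].mean()  # in [0, 1]
    return motion + comm_weight * (1.0 - signal)

def pick_trajectory(candidates, goal, radio_map):
    """Choose the candidate trajectory with the lowest combined cost."""
    costs = [trajectory_cost(t, goal, radio_map) for t in candidates]
    return int(np.argmin(costs))
```

With the regularizer active, a slightly longer path through a high-signal region can beat the geometrically shortest one.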

Edge Accelerated Robot Navigation with Collaborative Motion Planning 【TMECH’24】

We present EARN, which navigates low-cost robots in real time via collaborative motion planning. In contrast to existing local or edge motion planning solutions, which ignore the inter-dependency between low-level motion planning and high-level resource allocation, EARN adopts model predictive switching (MPS), which maximizes the expected switching gain w.r.t. robot states and actions under computation and communication resource constraints. As such, each robot can dynamically switch between a point-based motion planner executed locally to guarantee safety (e.g., path-following) and a shape-based motion planner executed non-locally to guarantee efficiency (e.g., overtaking). We validate the performance of EARN in indoor simulation, outdoor simulation, and real-world environments. Experiments show that EARN achieves significantly shorter navigation times and lower collision ratios than state-of-the-art navigation approaches.

Authors: Guoliang Li, Ruihua Han, Shuai Wang, Fei Gao, Yonina Eldar, Chengzhong Xu

Paper: Free access here  Code:  Code
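The switching decision can be sketched as comparing an expected gain against the edge server's spare capacity. This is a deliberately simplified stand-in for MPS (the gain model, threshold, and capacity check are all illustrative assumptions, not EARN's actual optimization):

```python
def expected_switching_gain(local_cost, edge_cost, success_prob):
    """Expected gain of offloading: cost saved if the edge planner's result
    arrives in time, weighted by that probability."""
    return success_prob * (local_cost - edge_cost)

def choose_planner(local_cost, edge_cost, success_prob,
                   edge_capacity, demand, gain_threshold=0.0):
    """Switch to the shape-based edge planner only when the expected gain is
    positive and the edge server has spare computation capacity; otherwise
    fall back to the safe local point-based planner."""
    if demand > edge_capacity:
        return "local"
    gain = expected_switching_gain(local_cost, edge_cost, success_prob)
    return "edge" if gain > gain_threshold else "local"
```

The safe local planner is always the fallback, so an overloaded edge server or a lossy link never compromises safety.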

FedRC: A Rapid-Converged Hierarchical Federated Learning Framework in Street Scene Semantic Understanding 【IROS’24】

We present FedRC, a rapid-converged hierarchical federated learning (HFL) framework for street scene semantic understanding. FedRC tackles inter-city data heterogeneity by differentiating images using their statistical properties, thereby accelerating the convergence of HFL. Extensive experiments on cross-city datasets demonstrate that FedRC converges faster than state-of-the-art methods by 38.7%, 37.5%, 35.5%, and 40.6% in terms of mIoU, mPrecision, mRecall, and mF1, respectively. Furthermore, qualitative evaluations in the CARLA simulation environment confirm that the proposed FedRC framework delivers satisfactory performance.

Authors: Wei-Bin Kou, Qingfeng Lin, Ming Tang, Shuai Wang, Guangxu Zhu, Yik-Chung Wu

Paper: Free access here 
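"Differentiating images using statistical properties" can be illustrated with a cheap per-client signature and a greedy similarity grouping. This sketch is an assumption-laden toy, not FedRC's actual mechanism: the mean/std signature, the distance radius, and the greedy grouping are all illustrative.

```python
import numpy as np

def client_signature(images):
    """Cheap statistical signature of a client's data: global mean and std
    over all of its images."""
    arr = np.stack(images)
    return np.array([arr.mean(), arr.std()])

def group_clients(signatures, radius=0.1):
    """Greedily group clients whose data statistics lie within `radius` of a
    group's first member, so aggregation can respect inter-city heterogeneity."""
    groups = []
    for i, s in enumerate(signatures):
        for g in groups:
            if np.linalg.norm(signatures[g[0]] - s) < radius:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Clients from the same city tend to share imaging statistics, so such grouping keeps dissimilar updates from being averaged blindly.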

Multi-Uncertainty Aware Autonomous Cooperative Planning 【IROS’24】

Autonomous cooperative planning (ACP) is a promising technique to improve the efficiency and safety of multi-vehicle interactions for future intelligent transportation systems. However, realizing robust ACP is a challenge due to the aggregation of perception, motion, and communication uncertainties. This paper proposes a novel multi-uncertainty aware ACP (MUACP) framework that simultaneously accounts for multiple types of uncertainties via regularized cooperative model predictive control (RC-MPC). The regularizers and constraints for perception, motion, and communication are constructed according to the confidence levels, weather conditions, and outage probabilities, respectively. The effectiveness of the proposed method is evaluated in the Car Learning to Act (CARLA) simulation platform. Results demonstrate that the proposed MUACP efficiently performs cooperative formation in real time and outperforms other benchmark approaches in various scenarios under imperfect knowledge of the environment.

Authors: Shiyao Zhang, He Li, Shengyu Zhang, Shuai Wang, Derrick Wing Kwan Ng, Chengzhong Xu

Paper: Not available yet
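The three regularizers can be sketched as additive terms in a single MPC stage cost: one grows as perception confidence drops, one with weather-induced motion uncertainty, and one with communication outage probability. The linear form and weights below are illustrative assumptions, not the RC-MPC formulation itself:

```python
def rcmpc_stage_cost(tracking_error, perception_conf, weather_factor,
                     outage_prob, w_p=1.0, w_m=1.0, w_c=1.0):
    """One-stage cost of a regularized cooperative MPC: nominal tracking
    error plus regularizers for the three uncertainty sources.
    perception_conf in [0, 1]; weather_factor and outage_prob in [0, 1]."""
    perception_reg = w_p * (1.0 - perception_conf)  # low confidence -> cautious
    motion_reg = w_m * weather_factor               # bad weather -> cautious
    comm_reg = w_c * outage_prob                    # lossy links -> cautious
    return tracking_error + perception_reg + motion_reg + comm_reg
```

Under perfect perception, clear weather, and reliable links, the cost reduces to the nominal tracking error, recovering a standard cooperative MPC.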
