INVS Robots

The Scout robot is a fast mobile system for autonomous exploration in cluttered environments. It carries a 3D lidar, an RGB-D camera, an onboard computer, onboard Wi-Fi, and a high-capacity battery, and can navigate for hours in both indoor and outdoor scenarios. The robot currently supports various navigation software stacks, including NeuPAN, FALCO, TARE, FAST-LIVO, and an AI agent.

The Limo robot is a small and agile system for rapid design and verification of robotic algorithms. It carries a 3D lidar, an RGB-D camera, an Intel NUC, and a touch screen, and can switch among different motion modalities, including Ackermann steering, differential drive, and omnidirectional drive. The robot is mainly suited to indoor scenarios and has been used to design the RDA and NeuPAN algorithms.
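The three motion modalities differ mainly in their kinematics. A minimal sketch of the forward-kinematics update for each mode (hypothetical function names, not tied to the Limo's actual firmware or API):

```python
import math

def step_differential(x, y, theta, v, omega, dt):
    """Differential drive: forward velocity v and yaw rate omega."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

def step_ackermann(x, y, theta, v, steer, wheelbase, dt):
    """Ackermann steering: the yaw rate follows the bicycle model."""
    omega = v * math.tan(steer) / wheelbase
    return step_differential(x, y, theta, v, omega, dt)

def step_omni(x, y, theta, vx, vy, omega, dt):
    """Omnidirectional drive: body-frame velocities in both x and y."""
    return (x + (vx * math.cos(theta) - vy * math.sin(theta)) * dt,
            y + (vx * math.sin(theta) + vy * math.cos(theta)) * dt,
            theta + omega * dt)
```

Only the omnidirectional mode can translate sideways (nonzero `vy`); the other two are nonholonomic, which is what makes switching modalities useful for benchmarking planners.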

The Pocket Unitree dog is a robotic dog system for real-time 3D mapping and reconstruction. It is equipped with Manifold Pocket, a lidar-inertial-visual fusion system that builds 3D point clouds of the environment with high frequency and accuracy. The point clouds can be further converted into Gaussian splats for image and video rendering.

DDT Diablo is a wheel-legged robotic system for autonomous navigation. It is equipped with a 3D lidar, an RGB-D camera, and an Orin Nano, and can switch between a vehicle form and a humanoid form. The vehicle form keeps a low profile and can crawl under a table, while the taller humanoid form can step over low obstacles.

Agilex Ranger is a wheeled mobile system for indoor navigation. It is equipped with a high-resolution 3D lidar, a camera, an IMU, and a 5G CPE. Thanks to its high load capacity, Ranger can carry a powerful computer and run large language models onboard. The robot is mainly used to design and evaluate embodied navigation models. For instance, when asked to throw away litter, Ranger can understand the request and lead the way to a trash can.

Unitree G1 is a humanoid robotic system. The robot has 23 DoFs that support walking, running, dancing, and jumping. Equipped with a 3D lidar and cameras, it can also interact with the environment and accomplish navigation and manipulation tasks. Because of its large weight, the robot can cause harm; it is therefore usually trained in Isaac simulation environments first and then deployed in reality with data updates.

Odin Dog is a memory-enabled robotic dog system for agentic navigation. Its Manifold Odin equipment builds multi-modal memories of 3D environments over long horizons, which enables us to quickly deploy vision-language navigation and vision-language-action models on the system.

Project Highlights

RDA: Open-Source Platform for Fast Collision Avoidance Model Predictive Control

Project link: https://github.com/hanruihua/rda_ros

Project link: https://github.com/GuoliangLI1998/EARN

Publication: IEEE RAL 2023 & IEEE TMECH 2025 & ICASSP 2025/2026

RDA planner is a fast and efficient motion planner for autonomous navigation in cluttered environments. The key idea of RDA is to decompose the complex optimization problem into several subproblems via ADMM, which allows the collision-avoidance constraints to be computed in parallel for each obstacle and reduces computation time significantly. Key features:

  • Shape-aware planning that handles robots and obstacles with arbitrary convex shapes.
  • Highly accurate control achieved through an optimization solver.
  • Support for both static and dynamic obstacles.
  • Fast computation, suitable for real-time applications.
  • Support for different dynamics, including differential, Ackermann, and omnidirectional robots.
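A toy illustration of the ADMM decomposition idea described above (hypothetical names, with linearized half-space collision constraints standing in for RDA's full dual formulation): each obstacle gets an independent, hence parallelizable, subproblem, followed by a consensus update and dual updates.

```python
import numpy as np

def project_halfspace(p, a, b):
    """Project point p onto the half-space {x : a @ x >= b}."""
    gap = a @ p - b
    return p if gap >= 0 else p - gap * a / (a @ a)

def admm_avoid(z_ref, halfspaces, rho=1.0, iters=300):
    """Consensus ADMM: find the point closest to z_ref satisfying one
    linearized collision constraint a_i @ z >= b_i per obstacle."""
    n = len(halfspaces)
    z = z_ref.copy()
    u = [np.zeros_like(z) for _ in range(n)]
    for _ in range(iters):
        # Per-obstacle subproblems are independent -> parallelizable.
        x = [project_halfspace(z - u[i], a, b)
             for i, (a, b) in enumerate(halfspaces)]
        # Consensus update keeps z close to the reference.
        z = (z_ref + rho * sum(xi + ui for xi, ui in zip(x, u))) / (1 + rho * n)
        # Dual (scaled multiplier) updates.
        for i in range(n):
            u[i] += x[i] - z
    return z
```

In the real planner each subproblem covers a full trajectory and a convex obstacle shape rather than a single point, but the structure — parallel per-obstacle solves coordinated by consensus and dual variables — is the same.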

CarlaFLCAV: Open-Source Platform for Design and Verification of Autonomous Driving

Project link: https://github.com/SIAT-INVS/CarlaFLCAV

Publication: IEEE TITS 2025 & IEEE NETW 2023 & ICRA 2021

CarlaFLCAV is an open-source simulation platform for federated learning (FL) in connected autonomous vehicles (CAVs), built on the CARLA simulator, that supports:

  • Multi-modal dataset generation: point-cloud, image, and radar data with associated calibration, synchronization, and annotation
  • Training and inference: examples for CAV perception, including object detection, traffic sign detection, and weather classification
  • Various FL frameworks: FedAvg, device selection, noisy aggregation, parameter selection, distillation, and personalization
  • Optimization-based modules: network resource and road sensor pose optimization
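As a toy illustration of the FedAvg framework listed above (hypothetical names; linear-regression clients standing in for CAV perception models), each round runs a few local SGD epochs on every client and then averages the returned weights by local dataset size:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on linear regression."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, lr=0.1):
    """One FedAvg round: broadcast the global model, train locally on each
    client, then take a dataset-size-weighted average of the results."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global.copy(), X, y, lr))
        sizes.append(len(y))
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))
```

The weighting by dataset size is what distinguishes FedAvg from a plain model average; the platform's other FL variants (device selection, noisy aggregation, distillation) modify who participates in this round and how the updates are combined.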

CarlaGrandprix: Grand Prix Metaverse Autonomous Driving Challenge

Project link: https://github.com/MoCAM-ResearchGroup/grandprix

Publication: IEEE RAL 2024 & IROS 2024 & ICASSP 2026

We organized a competition based on the Macau Grand Prix Circuit and established a virtual racing system. Hundreds of participants apply intelligent control, machine learning, and ROS (Robot Operating System) programming in the virtual environment to compete in a speed race.

  • Visual Perception: The autonomous driving system must accurately perceive its surroundings, including roads, vehicles, pedestrians, and obstacles. This requires the system to effectively process large amounts of visual data from cameras, LiDAR (Light Detection and Ranging), and other sensors in order to identify and track surrounding objects and road signs, perform Simultaneous Localization and Mapping (SLAM), and understand and predict real-time traffic conditions. Additionally, the system must handle various challenging scenarios such as different weather conditions, changes in lighting, and visual occlusion.
  • Decision Making: The autonomous driving system needs to make appropriate decisions based on real-time environmental and vehicle information, including route selection, speed adjustment, and adherence to traffic rules. This requires the system to consider multiple factors like traffic conditions, road surface status, passenger needs, and safety, and to determine the optimal driving strategy. Moreover, the system must adapt to rapidly changing traffic environments and respond to unexpected situations.
  • Planning and Control: The autonomous driving system must precisely execute decisions, including controlling the vehicle's acceleration, braking, and steering. This requires the system to monitor the vehicle's state in real time, adjust control commands as needed, and ensure smooth, safe, and comfortable driving.
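The low-level control described in the last bullet is commonly realized with feedback controllers such as PID. A minimal sketch (hypothetical names, not the challenge's actual control stack) of a PID loop tracking a target speed through throttle/brake commands:

```python
class PID:
    """Discrete PID controller, e.g. for longitudinal speed tracking."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target, measured):
        err = target - measured
        self.integral += err * self.dt            # accumulated error
        deriv = (err - self.prev_err) / self.dt   # error rate of change
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Usage: track 10 m/s with a toy point-mass vehicle (speed integrates command).
pid = PID(kp=1.0, ki=0.1, kd=0.0, dt=0.1)
speed = 0.0
for _ in range(1000):
    accel = pid.step(10.0, speed)
    speed += accel * 0.1
```

A racing stack layers such loops: a path tracker (e.g. pure pursuit or MPC) produces the steering and speed targets, and controllers like this one execute them smoothly.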