First full test run completed

2026-01-18 00:30:10 +08:00
parent ca15cc593b
commit 25c6fc04db
180 changed files with 29305 additions and 0 deletions

unitree_deploy/README.md Normal file
View File

@@ -0,0 +1,223 @@
# Unitree Deploy
<div align="center">
<p align="right">
<span> 🌎English </span> | <a href="./docs/README_cn.md"> 🇨🇳中文 </a>
</p>
</div>
This document provides instructions for setting up the deployment environment for Unitree G1 (with gripper) and Z1 platforms, including dependency installation, image service startup, and gripper control.
# 0. 📖 Introduction
This repository is used for model deployment with Unitree robots.
---
# 1. 🛠️ Environment Setup
```bash
conda create -n unitree_deploy python=3.10 && conda activate unitree_deploy
conda install pinocchio -c conda-forge
pip install -e .
# Optional: Install lerobot dependencies
pip install -e ".[lerobot]"
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
cd unitree_sdk2_python && pip install -e . && cd ..
```
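After installing, a quick sanity check can confirm the key modules resolve. This is a sketch, not part of the repo; the import name `unitree_sdk2py` for unitree_sdk2_python is an assumption — adjust the list to your environment:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Modules installed by the steps above (import names are assumptions)
required = ["pinocchio", "unitree_deploy", "unitree_sdk2py"]

if __name__ == "__main__":
    missing = missing_packages(required)
    if missing:
        print(f"Missing packages: {missing}")
    else:
        print("Environment looks good.")
```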
---
# 2. 🚀 Start
**Tip: Keep all devices on the same LAN**
## 2.1 🤖 Run G1 with Dex_1 Gripper
### 2.1.1 📷 Image Capture Service Setup (G1 Board)
[To open the image_server, follow these steps](https://github.com/unitreerobotics/xr_teleoperate?tab=readme-ov-file#31-%EF%B8%8F-image-service)
1. Connect to the G1 board:
```bash
ssh unitree@192.168.123.164 # Password: 123
```
2. Activate the environment and start the image server:
```bash
conda activate tv
cd ~/image_server
python image_server.py
```
---
### 2.1.2 🤏 Dex_1 Gripper Service Setup (Development PC2)
Refer to the [Dex_1 Gripper Installation Guide](https://github.com/unitreerobotics/dex1_1_service?tab=readme-ov-file#1--installation) for detailed setup instructions.
1. Navigate to the service directory:
```bash
cd ~/dex1_1_service/build
```
2. Start the gripper service. **Use `ifconfig` to check the host's DDS network interface and replace `eth0` below accordingly**:
```bash
sudo ./dex1_1_gripper_server --network eth0 -l -r
```
3. Verify communication with the gripper service:
```bash
./test_dex1_1_gripper_server --network eth0 -l -r
```
---
### 2.1.3 ✅ Testing
Perform the following tests to ensure proper functionality:
- **Dex1 Gripper Test**:
```bash
python test/endeffector/test_dex1.py
```
- **G1 Arm Test**:
```bash
python test/arm/g1/test_g1_arm.py
```
- **Image Client Camera Test**:
```bash
python test/camera/test_image_client_camera.py
```
- **G1 Datasets Replay**:
```bash
# --repo-id Your unique repo ID on Hugging Face Hub
# --robot_type The type of the robot e.g., z1_dual_dex1_realsense, z1_realsense, g1_dex1,
python test/test_replay.py --repo-id unitreerobotics/G1_CameraPackaging_NewDataset --robot_type g1_dex1
```
---
## 2.2 🦿 Run Z1
### 2.2.1 🦿 Z1 Setup
Clone and build the required repositories:
1. Download [z1_controller](https://github.com/unitreerobotics/z1_controller.git) and [z1_sdk](https://github.com/unitreerobotics/z1_sdk.git).
2. Build the repositories:
```bash
mkdir build && cd build
cmake .. && make -j
```
3. Copy the `unitree_arm_interface` library (adjust the paths to match your setup):
```bash
cp z1_sdk/lib/unitree_arm_interface.cpython-310-x86_64-linux-gnu.so ./unitree_deploy/robot_devices/arm
```
4. Start the Z1 controller (adjust the path to match your setup):
```bash
cd z1_controller/build && ./z1_ctrl
```
---
### 2.2.2 Testing ✅
Run the following tests:
- **Realsense Camera Test**:
```bash
python test/camera/test_realsense_camera.py # Adjust the serial number to match your RealSense
```
- **Z1 Arm Test**:
```bash
python test/arm/z1/test_z1_arm.py
```
- **Z1 Environment Test**:
```bash
python test/arm/z1/test_z1_env.py
```
- **Z1 Datasets Replay**:
```bash
# --repo-id Your unique repo ID on Hugging Face Hub
# --robot_type The type of the robot e.g., z1_dual_dex1_realsense, z1_realsense, g1_dex1,
python test/test_replay.py --repo-id unitreerobotics/Z1_StackBox_Dataset --robot_type z1_realsense
```
---
## 2.3 🦿 Run Z1_Dual
### 2.3.1 🦿 Z1 Setup and Dex1 Setup
Clone and build the required repositories:
1. Download and compile the code as in the Z1 steps above, and download the gripper program so it can be started locally.
2. [Configure multi-arm control according to the documentation](https://support.unitree.com/home/zh/Z1_developer/sdk_operation)
3. [Download the modified z1_sdk_1 (z1_dual branch) and compile it](https://github.com/unitreerobotics/z1_sdk/tree/z1_dual), then copy the `unitree_arm_interface` library (adjust the paths to match your setup):
```bash
cp z1_sdk/lib/unitree_arm_interface.cpython-310-x86_64-linux-gnu.so ./unitree_deploy/robot_devices/arm
```
4. Start the Z1 controllers (adjust the paths to match your setup):
```bash
cd z1_controller/build && ./z1_ctrl
cd z1_controller_1/build && ./z1_ctrl
```
5. Start the gripper service. **Use `ifconfig` to check the host's DDS network interface and replace `eth0` below accordingly**:
```bash
sudo ./dex1_1_gripper_server --network eth0 -l -r
```
---
### 2.3.2 Testing ✅
Run the following tests:
- **Z1_Dual Arm Test**:
```bash
python test/arm/z1/test_z1_arm_dual.py
```
- **Z1_Dual Datasets Replay**:
```bash
# --repo-id Your unique repo ID on Hugging Face Hub
# --robot_type The type of the robot e.g., z1_dual_dex1_realsense, z1_realsense, g1_dex1,
python test/test_replay.py --repo-id unitreerobotics/Z1_Dual_Dex1_StackBox_Dataset_V2 --robot_type z1_dual_dex1_realsense
```
---
# 3. 🧠 Inference and Deployment
1. [Modify the corresponding parameters according to your configuration](./unitree_deploy/robot/robot_configs.py)
2. Go back to **step 2 of the Client Setup** under [Inference and Deployment under Decision-Making Mode](https://github.com/unitreerobotics/unifolm-world-model-action/blob/main/README.md).
# 4. 🏗️ Code structure
[If you want to add your own robot device, you can build it according to this document](./docs/GettingStarted.md)
# 5. 🤔 Troubleshooting
For assistance, contact the project maintainer or refer to the respective GitHub repository documentation. 📖
# 6. 🙏 Acknowledgement
This code builds upon the following open-source codebases. Please visit the URLs to see their respective licenses (if you find these projects valuable, consider giving them a star):
1. https://github.com/huggingface/lerobot
2. https://github.com/unitreerobotics/unitree_sdk2_python

View File

@@ -0,0 +1,70 @@
# Getting Started
### Code framework
| Module Name | Documentation Link |
| ------------------------- | -------------------------------------------------- |
| robots | [build_robot](./build_robot.md) |
| robot_devices/arm | [add_robot_arm](./add_robot_arm.md) |
| robot_devices/cameras | [add_robot_camera](./add_robot_camera.md) |
| robot_devices/endeffector | [add_robot_endeffector](./add_robot_endeffector.md)|
### Simple Usage (Example code, not executable)
```python
import time

import torch

from unitree_deploy.robot.robot_utils import make_robot
from unitree_deploy.robot_devices.robots_devices_utils import precise_wait


class YourPolicy:
    def predict_action(self, observation):
        # Logic for predicting action
        pass


class UnitreeEnv:
    def __init__(self, robot_type: str, control_dt: float = 1 / 30):
        self.robot_type = robot_type
        self.control_dt = control_dt
        self.robot = make_robot(self.robot_type)
        if not self.robot.is_connected:
            self.robot.connect()
        # If disconnection is needed, call disconnect() here
        # self.robot.disconnect()

    def get_obs(self):
        # Get observation
        observation = self.robot.capture_observation()
        return observation

    def step(self, pred_action, t_command_target):
        # Execute action
        t_cycle_end = time.monotonic() + self.control_dt
        t_command_target = t_cycle_end + self.control_dt
        action = self.robot.send_action(torch.from_numpy(pred_action), t_command_target)
        precise_wait(t_cycle_end)
        return action


if __name__ == "__main__":
    policy = YourPolicy()  # Create policy instance
    env = UnitreeEnv(robot_type="g1_dex1")  # Create environment instance
    t_start = time.monotonic()  # Get start time
    iter_idx = 0  # Initialize iteration index
    control_dt = 1 / 30  # Control loop interval (30 Hz)
    try:
        while True:
            t_cycle_end = t_start + (iter_idx + 1) * control_dt  # End time of current cycle
            t_command_target = t_cycle_end + control_dt  # Command target time
            observation = env.get_obs()  # Get environment observation
            pred_action = policy.predict_action(observation)  # Predict action
            env.step(pred_action, t_command_target)  # Execute action
            precise_wait(t_cycle_end)  # Wait until cycle end
            iter_idx += 1  # Update iteration index
    finally:
        # Perform cleanup operations on exit (e.g., disconnect robot)
        pass
```

View File

@@ -0,0 +1,213 @@
# Unitree Deploy
This document provides instructions for setting up the deployment environment for the Unitree G1 and Z1 platforms, including dependency installation, image service startup, and gripper control.
# 0. 📖 Introduction
This repository is used for model deployment with Unitree robots.
---
# 1. 🛠️ Environment Setup
```bash
conda create -n unitree_deploy python=3.10 && conda activate unitree_deploy
conda install pinocchio -c conda-forge
pip install -e .
# Optional: install lerobot dependencies
pip install -e ".[lerobot]"
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
cd unitree_sdk2_python && pip install -e . && cd ..
```
---
# 2. 🚀 Start
**Tip: Keep all devices on the same LAN**
## 2.1 🤖 Run G1 with the Dex_1 Gripper
### 2.1.1 📷 Image Capture Service Setup (G1 board)
[Follow these steps to start the image_server](https://github.com/unitreerobotics/xr_teleoperate?tab=readme-ov-file#31-%EF%B8%8F-image-service)
1. Connect to the G1:
```bash
ssh unitree@192.168.123.164 # Password: 123
```
2. Activate the environment and start the image service:
```bash
conda activate tv
cd ~/image_server
python image_server.py
```
---
### 2.1.2 🤏 Dex_1 Gripper Service Setup (Development PC2)
Refer to the [Dex_1 Gripper Installation Guide](https://github.com/unitreerobotics/dex1_1_service?tab=readme-ov-file#1--installation) for detailed setup instructions.
1. Enter the service directory:
```bash
cd ~/dex1_1_service/build
```
2. Start the gripper service. **Use `ifconfig` to check the host's DDS network interface and replace `eth0` accordingly**:
```bash
sudo ./dex1_1_gripper_server --network eth0 -l -r
```
3. Verify communication with the gripper service:
```bash
./test_dex1_1_gripper_server --network eth0 -l -r
```
---
### 2.1.3 ✅ Testing
Perform the following tests to ensure proper functionality:
- **Dex1 Gripper Test**:
```bash
python test/endeffector/test_dex1.py
```
- **G1 Arm Test**:
```bash
python test/arm/g1/test_g1_arm.py
```
- **Image Client Camera Test**:
```bash
python test/camera/test_image_client_camera.py
```
- **G1 Dataset Replay**:
```bash
# --repo-id Your unique repo ID on Hugging Face Hub
# --robot_type The type of the robot, e.g., z1_dual_dex1_realsense, z1_realsense, g1_dex1
python test/test_replay.py --repo-id unitreerobotics/G1_CameraPackaging_NewDataset --robot_type g1_dex1
```
---
## 2.2 🦿 Run Z1
### 2.2.1 🦿 Z1 Setup
Clone and build the required repositories:
1. Download [z1_controller](https://github.com/unitreerobotics/z1_controller.git) and [z1_sdk](https://github.com/unitreerobotics/z1_sdk.git).
2. Build the repositories:
```bash
mkdir build && cd build
cmake .. && make -j
```
3. Copy the `unitree_arm_interface` library (adjust the paths to match your setup):
```bash
cp z1_sdk/lib/unitree_arm_interface.cpython-310-x86_64-linux-gnu.so ./unitree_deploy/robot_devices/arm
```
4. Start the Z1 controller (adjust the path to match your setup):
```bash
cd z1_controller/build && ./z1_ctrl
```
---
### 2.2.2 ✅ Testing
Run the following tests:
- **Realsense Camera Test**:
```bash
python test/camera/test_realsense_camera.py # Adjust the serial number to match your RealSense
```
- **Z1 Arm Test**:
```bash
python test/arm/z1/test_z1_arm.py
```
- **Z1 Environment Test**:
```bash
python test/arm/z1/test_z1_env.py
```
- **Z1 Dataset Replay**:
```bash
# --repo-id Your unique repo ID on Hugging Face Hub
# --robot_type The type of the robot, e.g., z1_dual_dex1_realsense, z1_realsense, g1_dex1
python test/test_replay.py --repo-id unitreerobotics/Z1_StackBox_Dataset --robot_type z1_realsense
```
---
## 2.3 🦿 Run Z1_Dual
### 2.3.1 🦿 Z1 Setup and Dex1 Setup
Clone and build the required repositories:
1. Download and compile the code as in the Z1 steps above, and download the gripper program so it can be started locally.
2. [Configure multi-arm control according to the documentation](https://support.unitree.com/home/zh/Z1_developer/sdk_operation)
3. [Download the modified z1_sdk_1 (z1_dual branch) and compile it](https://github.com/unitreerobotics/z1_sdk/tree/z1_dual), then copy the `unitree_arm_interface` library (adjust the paths to match your setup):
```bash
cp z1_sdk/lib/unitree_arm_interface.cpython-310-x86_64-linux-gnu.so ./unitree_deploy/robot_devices/arm
```
4. Start the Z1 controllers (adjust the paths to match your setup):
```bash
cd z1_controller/build && ./z1_ctrl
cd z1_controller_1/build && ./z1_ctrl
```
5. Start the gripper service. **Use `ifconfig` to check the host's DDS network interface and replace `eth0` accordingly**:
```bash
sudo ./dex1_1_gripper_server --network eth0 -l -r
```
---
### 2.3.2 ✅ Testing
Run the following tests:
- **Z1_Dual Arm Test**:
```bash
python test/arm/z1/test_z1_arm_dual.py
```
- **Z1_Dual Dataset Replay**:
```bash
# --repo-id Your unique repo ID on Hugging Face Hub
# --robot_type The type of the robot, e.g., z1_dual_dex1_realsense, z1_realsense, g1_dex1
python test/test_replay.py --repo-id unitreerobotics/Z1_Dual_Dex1_StackBox_Dataset_V2 --robot_type z1_dual_dex1_realsense
```
---
# 3. 🧠 Inference and Deployment
1. [Modify the corresponding parameters according to your configuration](./unitree_deploy/robot/robot_configs.py)
2. Go back to **step 2 of the Client Setup** under [Inference and Deployment under Decision-Making Mode](https://github.com/unitreerobotics/unifolm-world-model-action/blob/main/README.md).
# 4. 🏗️ Code structure
[If you want to add your own robot device, you can build it according to this document](./docs/GettingStarted.md)
# 5. 🤔 Troubleshooting
For assistance, contact the project maintainer or refer to the respective GitHub repository documentation. 📖
# 6. 🙏 Acknowledgement
This code builds upon the following open-source codebases. Please visit the URLs to see their respective licenses (if you find these projects valuable, consider giving them a star):
1. https://github.com/huggingface/lerobot
2. https://github.com/unitreerobotics/unitree_sdk2_python

View File

@@ -0,0 +1,76 @@
# How to Build Your Own Arm
### Define your own config for the robot arm (unitree_deploy/robot_devices/arm/config.py)
```python
@ArmConfig.register_subclass("z1")  # Register your custom arm wrapper; it is then constructed via __init__(self, config: Z1ArmConfig)
@dataclass
class Z1ArmConfig(ArmConfig):
    port: str
    motors: dict[str, tuple[int, str]]
    mock: bool = False
    init_pose_left: list = None
    init_pose_right: list = None
    control_dt: float = 1 / 500.0
    # Non-default parameters (fixed) must come first;
    # parameters with defaults (the ones you may need to customize) come after
```
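The `register_subclass` decorator above is repo-specific. A minimal sketch of how such a config registry might work (class names, fields, and the `from_type` helper are illustrative, not the actual implementation):

```python
from dataclasses import dataclass
from typing import ClassVar

@dataclass
class ArmConfig:
    # Shared registry mapping an arm_type string to its config class
    _registry: ClassVar[dict] = {}

    @classmethod
    def register_subclass(cls, arm_type: str):
        def decorator(subclass):
            cls._registry[arm_type] = subclass
            return subclass
        return decorator

    @classmethod
    def from_type(cls, arm_type: str, **kwargs):
        # Look up the registered config class and instantiate it
        return cls._registry[arm_type](**kwargs)

@ArmConfig.register_subclass("demo")
@dataclass
class DemoArmConfig(ArmConfig):
    port: str = "/dev/ttyUSB0"  # hypothetical default
    mock: bool = False

cfg = ArmConfig.from_type("demo", mock=True)
```

Registering via a decorator keeps each arm's config self-contained while letting factory functions build any arm from a plain string.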
### Description of methods in your arm class (unitree_deploy/robot_devices/arm/utils.py)
```python
# Base class for Arm, extensible with required methods
class Arm(Protocol):
    def connect(self): ...
    def disconnect(self): ...
    def motor_names(self): ...
    def read_current_motor_q(self): ...
    def read_current_arm_q(self): ...
    def read_current_arm_dq(self): ...
    def write_arm(self): ...
    def arm_ik(self): ...
```
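Because `Arm` is a `typing.Protocol`, a custom arm only needs to provide the listed methods — no inheritance required. A minimal mock satisfying a trimmed version of the interface (the `MockArm` class is illustrative, not part of the repo):

```python
from typing import Protocol, runtime_checkable

import numpy as np

@runtime_checkable
class Arm(Protocol):
    def connect(self): ...
    def disconnect(self): ...
    def motor_names(self): ...
    def read_current_arm_q(self): ...
    def write_arm(self, q_target): ...

class MockArm:
    """Keeps joint state in memory; stands in for real hardware."""

    def __init__(self, n_joints: int = 6):
        self.q = np.zeros(n_joints)
        self.connected = False

    def connect(self):
        self.connected = True

    def disconnect(self):
        self.connected = False

    def motor_names(self):
        return [f"joint_{i}" for i in range(len(self.q))]

    def read_current_arm_q(self):
        return self.q.copy()  # outputs unified as np.ndarray

    def write_arm(self, q_target):
        self.q = np.asarray(q_target, dtype=float)

arm = MockArm()
arm.connect()
arm.write_arm([0.1] * 6)
```

With `@runtime_checkable`, `isinstance(arm, Arm)` verifies structural conformance, which is handy in unit tests before touching real hardware.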
How are external calls implemented? `make_arm_motors_buses_from_configs` [driven by the config file] constructs the arms for the `UnitreeRobot` class, while `make_arm_motors_bus` [driven by `arm_type`] is generally used when loading an arm from an external module.
### Implementation of the arm class (unitree_deploy/robot_devices/arm/.../....py)
```python
# These methods need to be implemented and completed
def connect(self): ...
def disconnect(self): ...
def motor_names(self): ...
# connect() and disconnect() should handle initialization and homing respectively

def read_current_motor_q(self): ...
def read_current_arm_q(self): ...
def read_current_arm_dq(self): ...
# Outputs should be unified as np.ndarray

def write_arm(self): ...
# Write control commands here

def arm_ik(self): ...
# Wrap IK into your own arm class for external calling

# Private/protected properties [for reading motor names, IDs, etc.]
@property
def motor_names(self) -> list[str]:
    return list(self.motors.keys())

@property
def motor_models(self) -> list[str]:
    return [model for _, model in self.motors.values()]

@property
def motor_indices(self) -> list[int]:
    return [idx for idx, _ in self.motors.values()]
```
All arms use threading to run `_subscribe_motor_state` and `_ctrl_motor_state` threads for internal reading and writing within the class.

View File

@@ -0,0 +1,66 @@
# How to build your own cameras
### Define your own config for cameras (unitree_deploy/robot_devices/cameras/config.py)
```python
@CameraConfig.register_subclass("opencv")  # Define and wrap your own camera; it is then constructed via __init__(self, config: OpenCVCameraConfig)
@dataclass
class OpenCVCameraConfig(CameraConfig):
    """
    Example of tested options for Intel RealSense D405:
    OpenCVCameraConfig(0, 30, 640, 480)
    OpenCVCameraConfig(0, 60, 640, 480)
    OpenCVCameraConfig(0, 90, 640, 480)
    OpenCVCameraConfig(0, 30, 1280, 720)
    """

    # Define the required camera parameters
    camera_index: int
    fps: int | None = None
    width: int | None = None
    height: int | None = None
    color_mode: str = "rgb"
    channels: int | None = None
    rotation: int | None = None
    mock: bool = False

    def __post_init__(self):
        if self.color_mode not in ["rgb", "bgr"]:
            raise ValueError(
                f"`color_mode` is expected to be 'rgb' or 'bgr', but {self.color_mode} is provided."
            )
        self.channels = 3
        if self.rotation not in [-90, None, 90, 180]:
            raise ValueError(f"`rotation` must be in [-90, None, 90, 180] (got {self.rotation})")

    # Non-default parameters (fixed) must come first;
    # parameters with defaults (the ones you may need to customize) come after
```
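The `__post_init__` hook above makes an invalid config fail fast at construction time. A trimmed-down, self-contained sketch of that behavior (class name and field set are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraConfigDemo:
    """Trimmed-down version of the validation pattern shown above."""
    camera_index: int
    fps: Optional[int] = None
    color_mode: str = "rgb"
    rotation: Optional[int] = None

    def __post_init__(self):
        # Reject unsupported color modes immediately
        if self.color_mode not in ("rgb", "bgr"):
            raise ValueError(
                f"`color_mode` is expected to be 'rgb' or 'bgr', but {self.color_mode} is provided."
            )
        # Only quarter-turn rotations (or none) are allowed
        if self.rotation not in (-90, None, 90, 180):
            raise ValueError(f"`rotation` must be in [-90, None, 90, 180] (got {self.rotation})")

ok = CameraConfigDemo(0, fps=30)
try:
    CameraConfigDemo(0, color_mode="hsv")
    raised = False
except ValueError:
    raised = True
```

Validating in `__post_init__` means a bad value can never reach `connect()`; the error points at the config, not at a confusing driver failure later.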
### Description of methods in your cameras class (unitree_deploy/robot_devices/cameras/utils.py)
```python
# Base class for cameras, extensible with required methods
class Camera(Protocol):
    def connect(self): ...
    def read(self, temporary_color: str | None = None) -> np.ndarray: ...  # Single-threaded reading
    def async_read(self) -> np.ndarray: ...  # Multi-threaded
    def disconnect(self): ...
```
How can external modules implement calls? Use **make_cameras_from_configs [based on configuration files]** to construct the `UnitreeRobot` class.
**make_camera [based on camera_type]** is generally used for external module loading.
### Implementation of the `camera` class (unitree_deploy/robot_devices/camera/.../....py)
```python
# These need to be completed, focusing on implementing these two parts
def read(self, temporary_color: str | None = None) -> np.ndarray: ... # Single-threaded reading
def async_read(self) -> np.ndarray: ... # Multi-threaded
```
All cameras use threading to implement `async_read` for internal read and write operations.
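The `async_read` pattern can be sketched as follows: a background thread continuously grabs frames while callers fetch the latest one without blocking on capture. The class below uses synthetic frames instead of a real camera driver and is purely illustrative:

```python
import threading
import time

class AsyncCameraDemo:
    """A background thread keeps `latest` updated; async_read returns the newest frame."""

    def __init__(self):
        self.latest = None
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = None
        self._counter = 0

    def _grab(self):
        # Stand-in for e.g. a cv2.VideoCapture.read() call on real hardware
        self._counter += 1
        return self._counter

    def _capture_loop(self):
        while not self._stop.is_set():
            frame = self._grab()
            with self._lock:
                self.latest = frame
            time.sleep(0.001)

    def connect(self):
        self._thread = threading.Thread(target=self._capture_loop, daemon=True)
        self._thread.start()

    def async_read(self):
        # Wait until at least one frame has arrived, then return the newest
        while True:
            with self._lock:
                if self.latest is not None:
                    return self.latest
            time.sleep(0.001)

    def disconnect(self):
        self._stop.set()
        self._thread.join()

cam = AsyncCameraDemo()
cam.connect()
frame = cam.async_read()
cam.disconnect()
```

Decoupling capture from consumption keeps the control loop from stalling on a slow camera: the consumer always sees the freshest frame and may skip intermediate ones.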

View File

@@ -0,0 +1,77 @@
# How to Build Your Own End-Effector [Currently dex_1 and dex_3 are available]
### Define your own config for the end-effector (unitree_deploy/robot_devices/endeffector/config.py)
```python
@EndEffectorConfig.register_subclass("gripper")  # Register your custom end-effector wrapper; it is then constructed via __init__(self, config: GripperConfig)
@dataclass
class GripperConfig(EndEffectorConfig):
    motors: dict[str, tuple[int, str]]
    unit_test: bool = False
    control_dt: float = 1 / 200
    mock: bool = False

    def __post_init__(self):
        if self.control_dt < 0.002:
            raise ValueError(f"`control_dt` must be >= 1/500 s (got {self.control_dt})")

    # Non-default parameters (fixed) must come first;
    # parameters with defaults (the ones you may need to customize) come after
```
### Description of methods in your end-effector class (unitree_deploy/robot_devices/endeffector/utils.py)
```python
# Base class for EndEffector, extend with required methods
class EndEffector(Protocol):
    def connect(self): ...
    def disconnect(self): ...
    def motor_names(self): ...
    def read_current_endeffector_q(self): ...
    def read_current_endeffector_dq(self): ...
    def write_endeffector(self): ...
    def endeffector_ik(self): ...
```
How to call externally?
Use make_endeffector_motors_buses_from_configs → Construct the UnitreeRobot class based on the config file
Use make_endeffector_motors_bus → Construct based on endeffector_type (typically for external module loading)
### Implementation of your end-effector class (unitree_deploy/robot_devices/endeffector/.../....py)
```python
# These methods need to be implemented and completed
def connect(self): ...
def disconnect(self): ...
def motor_names(self): ...
# connect() and disconnect() should handle initialization and homing respectively

def read_current_endeffector_q(self): ...
def read_current_endeffector_dq(self): ...
# Outputs should be unified as np.ndarray

def write_endeffector(self): ...
# Write control commands here

def endeffector_ik(self): ...
# Wrap IK into your own end-effector class, to be called externally

# Private/protected properties
# (for reading motor names, IDs, etc. These are used in the UnitreeRobot class for dataset encapsulation)
@property
def motor_names(self) -> list[str]:
    return list(self.motors.keys())

@property
def motor_models(self) -> list[str]:
    return [model for _, model in self.motors.values()]

@property
def motor_indices(self) -> list[int]:
    return [idx for idx, _ in self.motors.values()]
```
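Given a `motors` dict in the `name: (index, model)` shape used throughout these configs, the three properties simply decompose it. For example (motor entries here are hypothetical):

```python
# name: (index, model) — same shape as the config examples above
motors = {
    "kLeftGripper": (0, "z1_gripper-joint"),
    "kRightGripper": (1, "z1_gripper-joint"),
}

# What the three properties compute from this dict
motor_names = list(motors.keys())
motor_indices = [idx for idx, _ in motors.values()]
motor_models = [model for _, model in motors.values()]
```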
As with the arms, use threading to implement `_subscribe_gripper_motor_state` (a thread that reads motor states) and `_ctrl_gripper_motor` (a thread that controls the motors); both threads run internally within the class.

View File

@@ -0,0 +1,140 @@
# Build your own robot
### Add your own config (unitree_deploy/robot/robot_configs.py)
The base class of robot config is defined as **UnitreeRobotConfig**
```python
class UnitreeRobotConfig(RobotConfig):
    cameras: dict[str, CameraConfig] = field(default_factory=lambda: {})  # Your own cameras
    arm: dict[str, ArmConfig] = field(default_factory=lambda: {})  # Your own arms
    endeffector: dict[str, EndEffectorConfig] = field(default_factory=lambda: {})  # Your own end-effectors
    mock: bool = False  # Simulation [to be implemented; for debugging class definitions and message type formats]
```
Specific example: fill in `[name]: device` pairs separately for the robot_devices fields `cameras`, `arm`, and `endeffector`. Fields that are not provided default to empty and will not affect the system. (In principle, different robot devices in different quantities can be combined.)
```python
class Z1dual_Dex1_Opencv_RobotConfig(UnitreeRobotConfig):
    # Troubleshooting: If one of your Intel RealSense cameras freezes during
    # data recording due to bandwidth limits, you might need to plug the camera
    # into another USB hub or PCIe card.
    cameras: dict[str, CameraConfig] = field(
        default_factory=lambda: {  # Add a config for each camera: [name]: OpenCVCameraConfig + required parameters
            "cam_high": OpenCVCameraConfig(
                camera_index="/dev/video0",
                fps=30,
                width=640,
                height=480,
            ),
            "cam_left_wrist": OpenCVCameraConfig(
                camera_index="/dev/video2",
                fps=30,
                width=640,
                height=480,
            ),
            "cam_right_wrist": OpenCVCameraConfig(
                camera_index="/dev/video4",
                fps=30,
                width=640,
                height=480,
            ),
        }
    )
    arm: dict[str, ArmConfig] = field(
        default_factory=lambda: {
            "z1_dual": Z1DualArmConfig(  # Add a config for each arm: [name]: Z1DualArmConfig + required parameters
                unit_test=False,
                init_pose_left=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                init_pose_right=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                control_dt=1 / 500.0,
                motors={
                    # name: (index, model)
                    "kLeftWaist": [0, "z1-joint"],
                    "kLeftShoulder": [1, "z1-joint"],
                    "kLeftElbow": [2, "z1-joint"],
                    "kLeftForearmRoll": [3, "z1-joint"],
                    "kLeftWristAngle": [4, "z1-joint"],
                    "kLeftWristRotate": [5, "z1-joint"],
                    "kRightWaist": [7, "z1-joint"],
                    "kRightShoulder": [8, "z1-joint"],
                    "kRightElbow": [9, "z1-joint"],
                    "kRightForearmRoll": [10, "z1-joint"],
                    "kRightWristAngle": [11, "z1-joint"],
                    "kRightWristRotate": [12, "z1-joint"],
                },
            ),
        }
    )
    endeffector: dict[str, EndEffectorConfig] = field(
        default_factory=lambda: {
            "gripper": GripperConfig(  # Add a config for each end-effector: [name]: GripperConfig + required parameters
                unit_test=False,
                control_dt=1 / 250,
                motors={
                    # name: (index, model)
                    "kLeftGripper": [0, "z1_gripper-joint"],
                    "kRightGripper": [1, "z1_gripper-joint"],
                },
            ),
        }
    )
    mock: bool = False
```
---
### robot utils (unitree_deploy/robot/utils.py)
```python
# Implementation of the Robot base class
class Robot(Protocol):
    robot_type: str
    features: dict

    def connect(self): ...  # Connect devices (the robot_devices cameras, arms, and end-effectors)
    def capture_observation(self): ...  # Get the current state: data from camera + arm + end-effector
    def send_action(self, action): ...  # Send actions to arm + end-effector actuators; used for model inference and data replay
    def disconnect(self): ...  # Disconnect devices
Externally, **make_robot_from_config** and **make_robot** are used in **control_robot** to initialize the robot and implement the specific functions.
---
### manipulator (unitree_deploy/robot/manipulator.py)
UnitreeRobot implements initialization by consuming a **UnitreeRobotConfig**.
```python
# Several important parts of the implementation
def capture_observation(self):  # Get the current obs; returns {observation.state, observation.images}
    ...

def send_action(  # Model inference and data replay; receives an action + a target time
    self, action: torch.Tensor, t_command_target: float | None = None
) -> torch.Tensor:
    # Input: device data.
    # Output: (arm + end-effector) joint angle positions, end-effector positions, or other conversions (IK is implemented here!)
    # The output is uniformly converted into joint angle positions:
    # {"left": arm_joint_points, "right": arm_joint_points} + {"left": endeffector_joint_points, "right": endeffector_joint_points}
    # Why distinguish left and right? It separates single-arm setups from dual-arm ones,
    # and different arms from different end-effectors, so the implementation works uniformly.
    ...

def convert_data_based_on_robot_type(
    self, robot_type: str, leader_pos: dict[str, np.ndarray]
) -> None | tuple[dict[str, np.ndarray], dict[str, np.ndarray]]:
    ...
```
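The left/right splitting that `send_action` relies on can be sketched as slicing a flat action vector into per-arm and per-gripper dicts. The joint counts and the `[L arm | R arm | L grip | R grip]` ordering below are assumptions for illustration (a dual 6-DoF arm with one gripper joint each, matching the 14-dim Z1_Dual action), not the repo's actual layout:

```python
import numpy as np

def split_dual_action(action: np.ndarray, arm_dof: int = 6):
    """Split [L arm | R arm | L grip | R grip] into left/right dicts (illustrative ordering)."""
    assert action.shape[0] == 2 * arm_dof + 2
    arm_joints = {
        "left": action[:arm_dof],
        "right": action[arm_dof:2 * arm_dof],
    }
    endeffector_joints = {
        "left": action[2 * arm_dof:2 * arm_dof + 1],
        "right": action[2 * arm_dof + 1:],
    }
    return arm_joints, endeffector_joints

arms, grippers = split_dual_action(np.arange(14, dtype=np.float32))
```

Keeping the split in one place means downstream device writers only ever see their own slice, so single-arm and dual-arm robots share the same write path.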

View File

@@ -0,0 +1,51 @@
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "unitree_deploy"
version = "0.0.3"
description = "unitree deploy"
readme = "README.md"
requires-python = ">=3.10"
license = { text = "BSD-3-Clause" }
authors = [
{ name = "hengguo", email = "rd_gh@unitree.com" }
]
keywords = ["unitree", "robotics", "deployment"]
dependencies = [
"tyro",
"draccus",
"datasets==3.3.0",
"meshcat",
"pyrealsense2",
"numpy",
"opencv-python",
"mujoco",
"matplotlib",
"dm_env",
"torch>=2.2.1,<2.8.0",
"rerun-sdk>=0.21.0,<0.23.0",
]
[tool.setuptools]
packages = ["unitree_deploy"]
[project.optional-dependencies]
lerobot = [
"lerobot @ git+https://github.com/huggingface/lerobot.git@0878c68"
]
[tool.ruff]
line-length = 110
target-version = "py310"
exclude = ["build", "venv", "__pycache__"]
fix = true
show-fixes = true
[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "I", "N", "B", "C4", "SIM"]
ignore = ["N801"]
[tool.ruff.lint.per-file-ignores]
"arm_indexs.py" = ["N815"]

View File

@@ -0,0 +1,198 @@
import argparse
import os
import time
from collections import deque
from pathlib import Path
from typing import Any, Deque, MutableMapping, OrderedDict

import cv2
import numpy as np
import torch
import tqdm

from unitree_deploy.real_unitree_env import make_real_env
from unitree_deploy.utils.eval_utils import (
    ACTTemporalEnsembler,
    LongConnectionClient,
    populate_queues,
)

# -----------------------------------------------------------------------------
# Network & environment defaults
# -----------------------------------------------------------------------------
os.environ["http_proxy"] = ""
os.environ["https_proxy"] = ""
HOST = "127.0.0.1"
PORT = 8000
BASE_URL = f"http://{HOST}:{PORT}"

# fmt: off
INIT_POSE = {
    'g1_dex1': np.array([0.10559805, 0.02726714, -0.01210221, -0.33341318, -0.22513399, -0.02627627, -0.15437093, 0.1273793, -0.1674708, -0.11544029, -0.40095493, 0.44332668, 0.11566751, 0.3936641, 5.4, 5.4], dtype=np.float32),
    'z1_dual_dex1_realsense': np.array([-1.0262332, 1.4281361, -1.2149128, 0.6473399, -0.12425245, 0.44945636, 0.89584476, 1.2593982, -1.0737865, 0.6672816, 0.39730102, -0.47400007, 0.9894176, 0.9817477], dtype=np.float32),
    'z1_realsense': np.array([-0.06940782, 1.4751548, -0.7554075, 1.0501366, 0.02931615, -0.02810347, -0.99238837], dtype=np.float32),
}
ZERO_ACTION = {
    'g1_dex1': torch.zeros(16, dtype=torch.float32),
    'z1_dual_dex1_realsense': torch.zeros(14, dtype=torch.float32),
    'z1_realsense': torch.zeros(7, dtype=torch.float32),
}
CAM_KEY = {
    'g1_dex1': 'cam_right_high',
    'z1_dual_dex1_realsense': 'cam_high',
    'z1_realsense': 'cam_high',
}
# fmt: on


def prepare_observation(args: argparse.Namespace, obs: Any) -> OrderedDict:
    """
    Convert a raw env observation into the model's expected input dict.
    """
    rgb_image = cv2.cvtColor(
        obs.observation["images"][CAM_KEY[args.robot_type]], cv2.COLOR_BGR2RGB)
    observation = {
        "observation.images.top":
            torch.from_numpy(rgb_image).permute(2, 0, 1),
        "observation.state":
            torch.from_numpy(obs.observation["qpos"]),
        "action": ZERO_ACTION[args.robot_type],
    }
    return OrderedDict(observation)


def run_policy(
    args: argparse.Namespace,
    env: Any,
    client: LongConnectionClient,
    temporal_ensembler: ACTTemporalEnsembler,
    cond_obs_queues: MutableMapping[str, Deque[torch.Tensor]],
    output_dir: Path,
) -> None:
    """
    Single rollout loop:
    1) warm start the robot,
    2) stream observations,
    3) fetch actions from the policy server,
    4) execute with temporal ensembling for smoother control.
    """
    _ = env.step(INIT_POSE[args.robot_type])
    time.sleep(2.0)
    t = 0
    while True:
        # Capture observation
        obs = env.get_observation(t)
        # Format observation
        obs = prepare_observation(args, obs)
        cond_obs_queues = populate_queues(cond_obs_queues, obs)
        # Call server to get actions
        pred_actions = client.predict_action(args.language_instruction,
                                             cond_obs_queues).unsqueeze(0)
        # Keep only the next horizon of actions and apply temporal ensemble smoothing
        actions = temporal_ensembler.update(
            pred_actions[:, :args.action_horizon])[0]
        # Execute the actions
        for n in range(args.exe_steps):
            action = actions[n].cpu().numpy()
            print(f">>> Exec => step {n} action: {action}", flush=True)
            print("---------------------------------------------")
            # Maintain real-time loop at `control_freq` Hz
            t1 = time.time()
            obs = env.step(action)
            time.sleep(max(0, 1 / args.control_freq - time.time() + t1))
            t += 1
            # Prime the queue for the next action step (except after the last one in this chunk)
            if n < args.exe_steps - 1:
                obs = prepare_observation(args, obs)
                cond_obs_queues = populate_queues(cond_obs_queues, obs)


def run_eval(args: argparse.Namespace) -> None:
    client = LongConnectionClient(BASE_URL)
    # Initialize the ACT temporal moving-average smoother
    temporal_ensembler = ACTTemporalEnsembler(temporal_ensemble_coeff=0.01,
                                              chunk_size=args.action_horizon,
                                              exe_steps=args.exe_steps)
    temporal_ensembler.reset()
    # Initialize observation and action horizon queues
    cond_obs_queues = {
        "observation.images.top": deque(maxlen=args.observation_horizon),
        "observation.state": deque(maxlen=args.observation_horizon),
        "action": deque(
            maxlen=16),  # NOTE: hard-coded; the model predicts 16 future steps
    }
    env = make_real_env(
        robot_type=args.robot_type,
        dt=1 / args.control_freq,
    )
    env.connect()
    try:
        for episode_idx in tqdm.tqdm(range(args.num_rollouts_planned)):
            output_dir = Path(args.output_dir) / f"episode_{episode_idx:03d}"
            output_dir.mkdir(parents=True, exist_ok=True)
            run_policy(args, env, client, temporal_ensembler, cond_obs_queues,
                       output_dir)
    finally:
        env.close()


def get_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    parser.add_argument("--robot_type",
                        type=str,
                        default="g1_dex1",
                        help="The type of the robot embodiment.")
    parser.add_argument(
        "--action_horizon",
        type=int,
        default=16,
        help="Number of future actions, predicted by the policy, to keep.",
    )
    parser.add_argument(
        "--exe_steps",
        type=int,
        default=16,
        help="Number of future actions to execute; must not exceed the action horizon.",
    )
    parser.add_argument(
        "--observation_horizon",
        type=int,
        default=2,
        help="Number of most recent frames/states to consider.",
    )
    parser.add_argument(
        "--language_instruction",
        type=str,
        default="Pack black camera into box",
        help="The language instruction provided to the policy server.",
    )
    parser.add_argument("--num_rollouts_planned",
                        type=int,
                        default=10,
                        help="The number of rollouts to run.")
    parser.add_argument("--output_dir",
                        type=str,
                        default="./results",
                        help="The directory for saving results.")
    parser.add_argument("--control_freq",
                        type=float,
                        default=30,
                        help="The low-level control frequency in Hz.")
    return parser


if __name__ == "__main__":
    parser = get_parser()
    args = parser.parse_args()
    run_eval(args)

View File

@@ -0,0 +1,105 @@
import time
import numpy as np
import pinocchio as pin
from unitree_deploy.robot.robot_configs import g1_motors
from unitree_deploy.robot_devices.arm.configs import G1ArmConfig
from unitree_deploy.robot_devices.arm.utils import make_arm_motors_buses_from_configs
from unitree_deploy.robot_devices.robots_devices_utils import precise_wait
if __name__ == "__main__":
# ============== Arm Configuration ==============
def g1_dual_arm_default_factory():
return {
"g1": G1ArmConfig(
init_pose=np.zeros(14),
motors=g1_motors,
mock=False,
),
}
# ==============================================
# Initialize and connect to the robotic arm
arm = make_arm_motors_buses_from_configs(g1_dual_arm_default_factory())
for name in arm:
arm[name].connect()
time.sleep(1.5)
print("✅ Arm connected. Waiting to start...")
# Define initial target poses for left and right arms
L_tf_target = pin.SE3(
pin.Quaternion(1, 0, 0, 0),
np.array([0.25, +0.25, 0.1]),
)
R_tf_target = pin.SE3(
pin.Quaternion(1, 0, 0, 0),
np.array([0.25, -0.25, 0.1]),
)
rotation_speed = 0.005 # Rotation speed in radians per iteration
# Motion parameters
control_dt = 1 / 50 # Control cycle duration (20ms)
step = 0
max_step = 240
initial_data_received = True # Used to switch from drive to schedule mode
# Wait for user input to start the motion loop
user_input = input("Enter 's' to start: \n")
if user_input.lower() == "s":
try:
while True:
# Define timing for the control cycle
t_cycle_end = time.monotonic() + control_dt
t_command_target = t_cycle_end + control_dt
direction = 1 if step <= 120 else -1
angle = rotation_speed * (step if step <= 120 else (240 - step))
cos_half_angle = np.cos(angle / 2)
sin_half_angle = np.sin(angle / 2)
L_quat = pin.Quaternion(cos_half_angle, 0, sin_half_angle, 0)  # rotation about the Y axis
R_quat = pin.Quaternion(cos_half_angle, 0, 0, sin_half_angle)  # rotation about the Z axis
delta_l = np.array([0.001, 0.001, 0.001]) * direction
delta_r = np.array([0.001, -0.001, 0.001]) * direction
L_tf_target.translation += delta_l
R_tf_target.translation += delta_r
L_tf_target.rotation = L_quat.toRotationMatrix()
R_tf_target.rotation = R_quat.toRotationMatrix()
# Solve inverse kinematics for the arm
for name in arm:
sol_q, sol_tauff = arm[name].arm_ik(L_tf_target.homogeneous, R_tf_target.homogeneous)
print(f"Arm {name} solution: q={sol_q}, tauff={sol_tauff}")
# Determine command mode
cmd_target = "drive_to_waypoint" if initial_data_received else "schedule_waypoint"
# Send joint target command to arm
arm[name].write_arm(
q_target=sol_q,
tauff_target=sol_tauff, # Optional: send torque feedforward
time_target=t_command_target - time.monotonic() + time.perf_counter(),
cmd_target=cmd_target,
)
# Update step and reset after full cycle
step = (step + 1) % (max_step + 1)
initial_data_received = False
# Wait until end of control cycle
precise_wait(t_cycle_end)
except KeyboardInterrupt:
# Handle Ctrl+C to safely disconnect
print("\n🛑 Ctrl+C detected. Disconnecting arm...")
for name in arm:
arm[name].disconnect()
print("✅ Arm disconnected. Exiting.")
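The half-angle quaternions built in the loop above, `(cos(a/2), 0, sin(a/2), 0)` and `(cos(a/2), 0, 0, sin(a/2))`, are unit quaternions for rotations about the Y and Z axes. A quick standalone numpy check of the Y-axis case (no pinocchio required):

```python
import numpy as np

def quat_to_rot(w, x, y, z):
    # Standard unit quaternion (w, x, y, z) -> rotation matrix.
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

angle = 0.3
R = quat_to_rot(np.cos(angle / 2), 0.0, np.sin(angle / 2), 0.0)
# Expected: a plain rotation about the Y axis by `angle`.
R_y = np.array([
    [np.cos(angle), 0.0, np.sin(angle)],
    [0.0, 1.0, 0.0],
    [-np.sin(angle), 0.0, np.cos(angle)],
])
```

Note that `pin.Quaternion` also takes its scalar part first, matching the `(w, x, y, z)` ordering used here.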


@@ -0,0 +1,91 @@
import math
import time
import numpy as np
import pinocchio as pin
from unitree_deploy.real_unitree_env import make_real_env
from unitree_deploy.utils.rerun_visualizer import RerunLogger, flatten_images, visualization_data
from unitree_deploy.utils.rich_logger import log_info
from unitree_deploy.utils.trajectory_generator import sinusoidal_gripper_motion
if __name__ == "__main__":
period = 2.0
motion_period = 2.0
motion_amplitude = 0.99
rerun_logger = RerunLogger()
env = make_real_env(robot_type="g1_dex1", dt=1 / 30)
env.connect()
# Define initial target poses for left and right arms
L_tf_target = pin.SE3(
pin.Quaternion(1, 0, 0, 0),
np.array([0.25, +0.25, 0.1]),
)
R_tf_target = pin.SE3(
pin.Quaternion(1, 0, 0, 0),
np.array([0.25, -0.25, 0.1]),
)
rotation_speed = 0.005 # Rotation speed in radians per iteration
# Motion parameters
control_dt = 1 / 50 # Control cycle duration (20ms)
step = 0
max_step = 240
initial_data_received = True # Used to switch from drive to schedule mode
# Wait for user input to start the motion loop
user_input = input("Enter 's' to start: \n")
if user_input.lower() == "s":
try:
current_time = math.pi / 2
idx = 0 # Initialize index for logging
while True:
# Define timing for the control cycle
t_cycle_end = time.monotonic() + control_dt
t_command_target = t_cycle_end + control_dt
direction = 1 if step <= 120 else -1
angle = rotation_speed * (step if step <= 120 else (240 - step))
cos_half_angle = np.cos(angle / 2)
sin_half_angle = np.sin(angle / 2)
L_quat = pin.Quaternion(cos_half_angle, 0, sin_half_angle, 0)  # rotation about the Y axis
R_quat = pin.Quaternion(cos_half_angle, 0, 0, sin_half_angle)  # rotation about the Z axis
delta_l = np.array([0.001, 0.001, 0.001]) * direction
delta_r = np.array([0.001, -0.001, 0.001]) * direction
L_tf_target.translation += delta_l
R_tf_target.translation += delta_r
L_tf_target.rotation = L_quat.toRotationMatrix()
R_tf_target.rotation = R_quat.toRotationMatrix()
# Solve inverse kinematics for both arms
for arm_name in env.robot.arm:
arm_sol_q, arm_sol_tauff = env.robot.arm[arm_name].arm_ik(
L_tf_target.homogeneous, R_tf_target.homogeneous
)
gripper_target_q = sinusoidal_gripper_motion(
period=motion_period, amplitude=motion_amplitude, current_time=time.perf_counter()
)
action = np.concatenate([arm_sol_q, gripper_target_q], axis=0)
step_type, reward, _, observation = env.step(action)
idx += 1
visualization_data(idx, flatten_images(observation), observation["qpos"], arm_sol_q, rerun_logger)
# Update step and reset after full cycle
current_time += control_dt
step = (step + 1) % (max_step + 1)
except KeyboardInterrupt:
# Handle Ctrl+C to safely disconnect
log_info("\n🛑 Ctrl+C detected. Disconnecting arm...")
env.close()


@@ -0,0 +1,81 @@
import math
import time
import numpy as np
import pinocchio as pin
from unitree_deploy.robot.robot_configs import z1_motors
from unitree_deploy.robot_devices.arm.utils import make_arm_motors_bus
from unitree_deploy.robot_devices.robots_devices_utils import precise_wait
from unitree_deploy.utils.trajectory_generator import generate_rotation, sinusoidal_gripper_motion
if __name__ == "__main__":
# ============== Arm Configuration ==============
arm_type = "z1"
arm_kwargs = {
"arm_type": arm_type,
"init_pose": [0.00623, 1.11164, -0.77531, -0.32167, -0.005, 0.0, 0.0], # Initial joint pose
"motors": z1_motors,
}
# ==============================================
# Initialize and connect to the robotic arm
arm = make_arm_motors_bus(**arm_kwargs)
arm.connect()
time.sleep(2)
print("✅ Arm connected. Waiting to start...")
# Define arm initial target poses
arm_tf_target = pin.SE3(pin.Quaternion(1, 0, 0, 0), np.array([0.2, 0, 0.4]))
# Motion parameters
rotation_speed = 0.01 # Rotation speed (rad per step)
control_dt = 1 / 30 # Control cycle duration (~33 ms)
step = 0
max_step = 240 # Full motion cycle
# Wait for user input to start the motion loop
user_input = input("Enter 's' to start: \n")
if user_input.lower() == "s":
try:
current_time = math.pi / 2
while True:
# Define timing for the control cycle
t_cycle_end = time.monotonic() + control_dt
t_command_target = t_cycle_end + control_dt
# Generate target rotation and translation
L_quat, R_quat, delta_l, delta_r = generate_rotation(step, rotation_speed, max_step)
arm_tf_target.translation += delta_l
# delta_r is not used in this context
arm_tf_target.rotation = L_quat.toRotationMatrix()
# Solve inverse kinematics for the arm
arm_sol_q, arm_sol_tauff = arm.arm_ik(arm_tf_target.homogeneous)
# Generate sinusoidal motion for the gripper
target_gripper = (
sinusoidal_gripper_motion(period=4.0, amplitude=0.99, current_time=current_time) - 1
) # Shift the target down by 1 into the gripper's command range
target_arm = np.concatenate((arm_sol_q, target_gripper), axis=0) # Append the gripper target to the arm joints
arm.write_arm(
q_target=target_arm,
# tauff_target=left_sol_tauff, # Optional: send torque feedforward
time_target=t_command_target - time.monotonic() + time.perf_counter(),
cmd_target="schedule_waypoint",
)
# Update step and reset after full cycle
step = (step + 1) % (max_step + 1)
current_time += control_dt
# Wait until end of control cycle
precise_wait(t_cycle_end)
except KeyboardInterrupt:
# Handle Ctrl+C to safely disconnect
print("\n🛑 Ctrl+C detected. Disconnecting arm...")
arm.disconnect()
print("✅ Arm disconnected. Exiting.")
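`sinusoidal_gripper_motion` is imported from `unitree_deploy.utils.trajectory_generator` and its body is not part of this chunk. A plausible stand-in (assuming it returns a smooth open/close value in `[0, amplitude]`, which the `- 1` above then shifts into the gripper's command range) could look like:

```python
import numpy as np

def sinusoidal_gripper_motion(period, amplitude, current_time):
    # Hypothetical sketch: one smooth open/close cycle per `period` seconds,
    # output in [0, amplitude]. The real helper may differ.
    phase = 2.0 * np.pi * current_time / period
    return np.array([amplitude * (1.0 + np.sin(phase)) / 2.0])

samples = [sinusoidal_gripper_motion(4.0, 0.99, t)[0] for t in np.linspace(0.0, 8.0, 200)]
```

Under this assumption, subtracting 1 maps the command into roughly `[-1, 0]`, i.e. a gripper convention where 0 is one limit and -1 the other.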


@@ -0,0 +1,112 @@
import time
import numpy as np
import pinocchio as pin
from unitree_deploy.robot_devices.arm.configs import Z1DualArmConfig
from unitree_deploy.robot_devices.arm.utils import make_arm_motors_buses_from_configs
from unitree_deploy.robot_devices.robots_devices_utils import precise_wait
from unitree_deploy.utils.trajectory_generator import generate_rotation
if __name__ == "__main__":
# ============== Arm Configuration ==============
def z1_dual_arm_single_config_factory():
return {
"z1_dual": Z1DualArmConfig(
left_robot_ip="127.0.0.1",
left_robot_port1=8073,
left_robot_port2=8074,
right_robot_ip="127.0.0.1",
right_robot_port1=8071,
right_robot_port2=8072,
init_pose_left=[0, 0, 0, 0, 0, 0],
init_pose_right=[0, 0, 0, 0, 0, 0],
control_dt=1 / 250.0,
motors={
# name: (index, model)
"kLeftWaist": [0, "z1-joint"],
"kLeftShoulder": [1, "z1-joint"],
"kLeftElbow": [2, "z1-joint"],
"kLeftForearmRoll": [3, "z1-joint"],
"kLeftWristAngle": [4, "z1-joint"],
"kLeftWristRotate": [5, "z1-joint"],
"kRightWaist": [7, "z1-joint"],
"kRightShoulder": [8, "z1-joint"],
"kRightElbow": [9, "z1-joint"],
"kRightForearmRoll": [10, "z1-joint"],
"kRightWristAngle": [11, "z1-joint"],
"kRightWristRotate": [12, "z1-joint"],
},
),
}
# ==============================================
# Initialize and connect to the robotic arm
arm = make_arm_motors_buses_from_configs(z1_dual_arm_single_config_factory())
for name in arm:
arm[name].connect()
time.sleep(1.5)
print("✅ Arm connected. Waiting to start...")
# Define initial target poses for left and right arms
L_tf_target = pin.SE3(pin.Quaternion(1, 0, 0, 0), np.array([0.2, 0, 0.4]))
R_tf_target = pin.SE3(pin.Quaternion(1, 0, 0, 0), np.array([0.2, 0, 0.3]))
# Motion parameters
rotation_speed = 0.01 # Rotation speed (rad per step)
control_dt = 1 / 30 # Control cycle duration (~33 ms)
step = 0
max_step = 240 # Full motion cycle
initial_data_received = True # Used to switch from drive to schedule mode
# Wait for user input to start the motion loop
user_input = input("Enter 's' to start: \n")
if user_input.lower() == "s":
try:
while True:
# Define timing for the control cycle
t_cycle_end = time.monotonic() + control_dt
t_command_target = t_cycle_end + control_dt
# Generate target rotation and translation
L_quat, R_quat, delta_l, delta_r = generate_rotation(step, rotation_speed, max_step)
# Apply translation deltas to target pose
L_tf_target.translation += delta_l
R_tf_target.translation += delta_r
# Apply rotation to target pose
L_tf_target.rotation = L_quat.toRotationMatrix()
R_tf_target.rotation = R_quat.toRotationMatrix()
# Determine command mode
cmd_target = "drive_to_waypoint" if initial_data_received else "schedule_waypoint"
# Solve inverse kinematics and send joint targets for each arm bus
for name in arm:
sol_q, sol_tauff = arm[name].arm_ik(L_tf_target.homogeneous, R_tf_target.homogeneous)
arm[name].write_arm(
q_target=sol_q,
# tauff_target=sol_tauff, # Optional: send torque feedforward
time_target=t_command_target - time.monotonic() + time.perf_counter(),
cmd_target=cmd_target,
)
# Update step and reset after full cycle
step = (step + 1) % (max_step + 1)
initial_data_received = False
# Wait until end of control cycle
precise_wait(t_cycle_end)
except KeyboardInterrupt:
# Handle Ctrl+C to safely disconnect
print("\n🛑 Ctrl+C detected. Disconnecting arm...")
for name in arm:
arm[name].disconnect()
print("✅ Arm disconnected. Exiting.")


@@ -0,0 +1,65 @@
import math
import time
import numpy as np
import pinocchio as pin
from unitree_deploy.real_unitree_env import make_real_env
from unitree_deploy.utils.rerun_visualizer import RerunLogger, flatten_images, visualization_data
from unitree_deploy.utils.rich_logger import log_info
from unitree_deploy.utils.trajectory_generator import generate_rotation, sinusoidal_gripper_motion
if __name__ == "__main__":
rerun_logger = RerunLogger()
env = make_real_env(robot_type="z1_realsense", dt=1 / 30)
env.connect()
# Define initial target poses for left and right arms
arm_tf_target = pin.SE3(pin.Quaternion(1, 0, 0, 0), np.array([0.2, 0, 0.4]))
# Motion parameters
rotation_speed = 0.01 # Rotation speed (rad per step)
control_dt = 1 / 30 # Control cycle duration (~33 ms)
step = 0
max_step = 240 # Full motion cycle
# Wait for user input to start the motion loop
user_input = input("Enter 's' to start: \n")
if user_input.lower() == "s":
try:
current_time = math.pi / 2
idx = 0 # Initialize index for logging
while True:
# Define timing for the control cycle
t_cycle_end = time.monotonic() + control_dt
t_command_target = t_cycle_end + control_dt
# Generate target rotation and translation
L_quat, R_quat, delta_l, delta_r = generate_rotation(step, rotation_speed, max_step)
arm_tf_target.translation += delta_l
# delta_r is not used in this context
arm_tf_target.rotation = L_quat.toRotationMatrix()
# Solve inverse kinematics for the arm
for arm_name in env.robot.arm:
arm_sol_q, arm_sol_tauff = env.robot.arm[arm_name].arm_ik(arm_tf_target.homogeneous)
# Generate sinusoidal motion for the gripper
target_gripper = (
sinusoidal_gripper_motion(period=4.0, amplitude=0.99, current_time=current_time) - 1
) # Shift the target down by 1 into the gripper's command range
target_arm = np.concatenate((arm_sol_q, target_gripper), axis=0) # Append the gripper target to the arm joints
step_type, reward, _, observation = env.step(target_arm)
idx += 1
visualization_data(idx, flatten_images(observation), observation["qpos"], target_arm, rerun_logger)
# Update step and reset after full cycle
current_time += control_dt
step = (step + 1) % (max_step + 1)
except KeyboardInterrupt:
# Handle Ctrl+C to safely disconnect
log_info("\n🛑 Ctrl+C detected. Disconnecting arm...")
env.close()


@@ -0,0 +1,64 @@
import time
import cv2
import numpy as np
import torch
from tqdm import tqdm
from unitree_deploy.robot_devices.cameras.configs import ImageClientCameraConfig
from unitree_deploy.robot_devices.cameras.utils import make_cameras_from_configs
from unitree_deploy.utils.rich_logger import log_success
# ============================From configs============================
def run_camera():
def image_client_default_factory():
return {
"imageclient": ImageClientCameraConfig(
head_camera_type="opencv",
head_camera_id_numbers=[4],
head_camera_image_shape=[480, 1280], # Head camera resolution
wrist_camera_type="opencv",
wrist_camera_id_numbers=[0, 2],
wrist_camera_image_shape=[480, 640], # Wrist camera resolution
aspect_ratio_threshold=2.0,
fps=30,
mock=False,
),
}
# ===========================================
cameras = make_cameras_from_configs(image_client_default_factory())
print(cameras)
for name in cameras:
cameras[name].connect()
log_success(f"Connecting {name} cameras.")
for _ in tqdm(range(20), desc="Camera warming up"):
for name in cameras:
cameras[name].async_read()
time.sleep(1 / 30)
while True:
images = {}
for name in cameras:
output = cameras[name].async_read()
if isinstance(output, dict):
for k, v in output.items():
images[k] = torch.from_numpy(v)
else:
images[name] = torch.from_numpy(output)
image_list = [np.stack([img.numpy()] * 3, axis=-1) if img.ndim == 2 else img.numpy() for img in images.values()]
stacked_image = np.hstack(image_list)
cv2.imshow("Stacked Image", stacked_image)
if (cv2.waitKey(1) & 0xFF) == ord("q"):
cv2.destroyAllWindows()
break
if __name__ == "__main__":
run_camera()
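The `np.stack([img] * 3, axis=-1)` expression above promotes a single-channel frame to three channels so it can sit next to color frames in one `np.hstack` canvas:

```python
import numpy as np

gray = np.zeros((480, 640), dtype=np.uint8)      # H x W single-channel frame
color = np.zeros((480, 640, 3), dtype=np.uint8)  # H x W x 3 color frame
gray_rgb = np.stack([gray] * 3, axis=-1)         # replicate channel -> H x W x 3
canvas = np.hstack([gray_rgb, color])            # side-by-side display image
```

`np.hstack` requires matching heights and channel counts, which is why the replication step is needed before stacking.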


@@ -0,0 +1,66 @@
import time
import cv2
import numpy as np
from unitree_deploy.robot_devices.cameras.configs import IntelRealSenseCameraConfig
from unitree_deploy.robot_devices.cameras.utils import make_cameras_from_configs
from unitree_deploy.utils.rich_logger import log_success
def run_camera():
# ===========================================
def intelrealsense_camera_default_factory():
return {
"cam_high": IntelRealSenseCameraConfig(
serial_number="044122071036",
fps=30,
width=640,
height=480,
),
"cam_wrist": IntelRealSenseCameraConfig(
serial_number="419122270615",
fps=30,
width=640,
height=480,
),
}
# ===========================================
cameras = make_cameras_from_configs(intelrealsense_camera_default_factory())
for name in cameras:
cameras[name].connect()
log_success(f"Connecting {name} cameras.")
for _ in range(20):
for name in cameras:
cameras[name].async_read()
time.sleep(1 / 30)
while True:
images = []
for name in cameras:
frame = cameras[name].async_read()
if frame is not None:
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
cv2.putText(frame, name, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
images.append(frame)
if images:
rows = []
for i in range(0, len(images), 2):
row = np.hstack(images[i : i + 2])
rows.append(row)
canvas = np.vstack(rows)
cv2.imshow("All Cameras", canvas)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
cv2.destroyAllWindows()
if __name__ == "__main__":
run_camera()


@@ -0,0 +1,95 @@
import time
import cv2
import numpy as np
import tyro
from tqdm import tqdm
from unitree_deploy.robot_devices.cameras.configs import OpenCVCameraConfig
from unitree_deploy.robot_devices.cameras.utils import make_camera, make_cameras_from_configs
from unitree_deploy.utils.rich_logger import log_success
def usb_camera_default_factory():
return {
"cam_high": OpenCVCameraConfig(
camera_index="/dev/video1",
fps=30,
width=640,
height=480,
),
"cam_left_wrist": OpenCVCameraConfig(
camera_index="/dev/video3",
fps=30,
width=640,
height=480,
),
"cam_right_wrist": OpenCVCameraConfig(
camera_index="/dev/video5",
fps=30,
width=640,
height=480,
),
}
def run_cameras(camera_style: int = 0):
"""
Runs camera(s) based on the specified style.
Args:
camera_style (int):
0 - Single camera (OpenCV).
1 - Multiple cameras from config.
"""
if camera_style == 0:
# ========== Single camera ==========
camera_kwargs = {"camera_type": "opencv", "camera_index": "/dev/video5", "mock": False}
camera = make_camera(**camera_kwargs)
camera.connect()
log_success("Connecting camera.")
while True:
color_image = camera.read()
color_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2RGB)
cv2.imshow("Camera", color_image)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
elif camera_style == 1:
# ========== Multi-camera from configs ==========
cameras = make_cameras_from_configs(usb_camera_default_factory())
for name in cameras:
cameras[name].connect()
log_success(f"Connecting {name} camera.")
# Camera warm-up
for _ in tqdm(range(20), desc="Camera warming up"):
for name in cameras:
cameras[name].async_read()
time.sleep(1 / 30)
while True:
images = {}
for name in cameras:
images[name] = cameras[name].async_read()
image_list = [
np.stack([img] * 3, axis=-1) if img.ndim == 2 else img for img in images.values()
]
stacked_image = np.hstack(image_list)
cv2.imshow("Multi-Camera View", stacked_image)
if (cv2.waitKey(1) & 0xFF) == ord("q"):
cv2.destroyAllWindows()
break
else:
raise ValueError(f"Unsupported camera_style: {camera_style}")
if __name__ == "__main__":
tyro.cli(run_cameras)


@@ -0,0 +1,60 @@
import time
import tyro
from unitree_deploy.robot_devices.endeffector.utils import (
Dex1_GripperConfig,
make_endeffector_motors_buses_from_configs,
)
from unitree_deploy.robot_devices.robots_devices_utils import precise_wait
from unitree_deploy.utils.rich_logger import log_success
from unitree_deploy.utils.trajectory_generator import sinusoidal_single_gripper_motion
period = 2.0
motion_period = 2.0
motion_amplitude = 0.99
def gripper_default_factory():
return {
"left": Dex1_GripperConfig(
unit_test=True,
motors={
"kLeftGripper": [0, "z1_gripper-joint"],
},
topic_gripper_state="rt/dex1/left/state",
topic_gripper_command="rt/dex1/left/cmd",
),
"right": Dex1_GripperConfig(
unit_test=True,
motors={
"kRightGripper": [1, "z1_gripper-joint"],
},
topic_gripper_state="rt/dex1/right/state",
topic_gripper_command="rt/dex1/right/cmd",
),
}
def run_gripper():
control_dt = 1 / 30
log_success("Running gripper in style 1 (multi-bus from config)")
endeffectors = make_endeffector_motors_buses_from_configs(gripper_default_factory())
for name in endeffectors:
endeffectors[name].connect()
log_success(f"Connected endeffector '{name}'.")
while True:
t_cycle_end = time.monotonic() + control_dt
target_q = sinusoidal_single_gripper_motion(
period=motion_period, amplitude=motion_amplitude, current_time=time.perf_counter()
)
for name in endeffectors:
endeffectors[name].write_endeffector(q_target=target_q)
precise_wait(t_cycle_end)
if __name__ == "__main__":
tyro.cli(run_gripper)
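`precise_wait` (from `unitree_deploy.robot_devices.robots_devices_utils`) paces each control cycle against an absolute `time.monotonic()` deadline. Its body is not part of this chunk; a common sketch of such a helper (coarse sleep, then a short busy-wait for tighter timing) is:

```python
import time

def precise_wait(t_end, slack=0.001):
    # Hypothetical sketch, not the library's implementation: sleep most of
    # the way, then spin for the last `slack` seconds for precision.
    remaining = t_end - time.monotonic()
    if remaining > slack:
        time.sleep(remaining - slack)
    while time.monotonic() < t_end:
        pass

start = time.monotonic()
precise_wait(start + 0.05)  # wait ~50 ms past `start`
elapsed = time.monotonic() - start
```

Waiting to an absolute deadline (rather than sleeping a fixed duration) keeps the loop rate stable even when IK and I/O take a variable amount of each cycle.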


@@ -0,0 +1,44 @@
"""
python test/test_replay.py --repo-id unitreerobotics/G1_CameraPackaging_NewDataset --robot_type g1_dex1
python test/test_replay.py --repo-id unitreerobotics/Z1_StackBox_Dataset --robot_type z1_realsense
python test/test_replay.py --repo-id unitreerobotics/Z1_Dual_Dex1_StackBox_Dataset_V2 --robot_type z1_dual_dex1_realsense
"""
import tyro
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from unitree_deploy.real_unitree_env import make_real_env
from unitree_deploy.utils.rerun_visualizer import RerunLogger, flatten_images, visualization_data
from unitree_deploy.utils.rich_logger import log_info
# Replay a specific episode from the LeRobot dataset using the real environment robot_type(e.g., g1_dex1, z1_realsense, z1_dual_dex1_realsense)
def replay_lerobot_data(repo_id: str, robot_type: str, root: str | None = None, episode: int = 145):
dataset = LeRobotDataset(repo_id, root=root, episodes=[episode])
actions = dataset.hf_dataset.select_columns("action")
init_pose_arm = actions[0]["action"].numpy()[:14] if robot_type.startswith("g1") else actions[0]["action"].numpy()
rerun_logger = RerunLogger()
env = make_real_env(robot_type=robot_type, dt=1 / 30, init_pose_arm=init_pose_arm)
env.connect()
try:
# Wait for user input to start the motion loop
user_input = input("Enter 's' to start: \n")
if user_input.lower() == "s":
log_info("Replaying episode")
for idx in range(dataset.num_frames):
action = actions[idx]["action"].numpy()
if robot_type == "z1_realsense":
action[-1] = -action[-1]
step_type, reward, _, observation = env.step(action)
visualization_data(idx, flatten_images(observation), observation["qpos"], action, rerun_logger)
env.close()
except KeyboardInterrupt:
# Handle Ctrl+C to safely disconnect
log_info("\n🛑 Ctrl+C detected. Disconnecting arm...")
env.close()
if __name__ == "__main__":
tyro.cli(replay_lerobot_data)


@@ -0,0 +1,105 @@
import collections
import time
import matplotlib.pyplot as plt
import numpy as np
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from unitree_deploy.utils.rerun_visualizer import RerunLogger, visualization_data
def extract_observation(step: dict):
observation = {}
for key, value in step.items():
if key.startswith("observation.images."):
if isinstance(value, np.ndarray) and value.ndim == 3 and value.shape[-1] in [1, 3]:
value = np.transpose(value, (2, 0, 1))
observation[key] = value
elif key == "observation.state":
observation[key] = value
return observation
class DatasetEvalEnv:
def __init__(self, repo_id: str, episode_index: int = 0, visualization: bool = True):
self.dataset = LeRobotDataset(repo_id=repo_id)
self.visualization = visualization
if self.visualization:
self.rerun_logger = RerunLogger()
self.from_idx = self.dataset.episode_data_index["from"][episode_index].item()
self.to_idx = self.dataset.episode_data_index["to"][episode_index].item()
self.step_idx = self.from_idx
self.ground_truth_actions = []
self.predicted_actions = []
def get_observation(self):
step = self.dataset[self.step_idx]
observation = extract_observation(step)
state = step["observation.state"].numpy()
self.ground_truth_actions.append(step["action"].numpy())
if self.visualization:
visualization_data(
self.step_idx,
observation,
observation["observation.state"],
step["action"].numpy(),
self.rerun_logger,
)
images_observation = {
key: value.numpy() for key, value in observation.items() if key.startswith("observation.images.")
}
obs = collections.OrderedDict()
obs["qpos"] = state
obs["images"] = images_observation
self.step_idx += 1
return obs
def step(self, action):
self.predicted_actions.append(action)
if self.step_idx == self.to_idx:
self._plot_results()
exit()
def _plot_results(self):
ground_truth_actions = np.array(self.ground_truth_actions)
predicted_actions = np.array(self.predicted_actions)
n_timesteps, n_dims = ground_truth_actions.shape
fig, axes = plt.subplots(n_dims, 1, figsize=(12, 4 * n_dims), sharex=True)
fig.suptitle("Ground Truth vs Predicted Actions")
for i in range(n_dims):
ax = axes[i] if n_dims > 1 else axes
ax.plot(ground_truth_actions[:, i], label="Ground Truth", color="blue")
ax.plot(predicted_actions[:, i], label="Predicted", color="red", linestyle="--")
ax.set_ylabel(f"Dim {i + 1}")
ax.legend()
axes[-1].set_xlabel("Timestep")
plt.tight_layout()
plt.savefig("figure.png")
time.sleep(1)
def make_dataset_eval_env(repo_id: str, episode_index: int = 0, visualization: bool = True) -> DatasetEvalEnv:
return DatasetEvalEnv(repo_id, episode_index, visualization)
if __name__ == "__main__":
eval_dataset = DatasetEvalEnv(repo_id="unitreerobotics/G1_Brainco_PickApple_Dataset")
while True:
observation = eval_dataset.get_observation()
eval_dataset.step(observation["qpos"])
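Beyond the overlay plot, a per-dimension numeric summary makes runs easier to compare; a small sketch using mean absolute error (the metric choice here is an assumption, not from the source):

```python
import numpy as np

gt = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])    # ground-truth actions (T x D)
pred = np.array([[0.1, 1.0], [0.9, 1.2], [2.0, 0.8]])  # predicted actions (T x D)
mae_per_dim = np.mean(np.abs(gt - pred), axis=0)       # one error value per action dim
```

A per-dimension breakdown often localizes failures to specific joints (e.g. the gripper dimension) that a single aggregate score would hide.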


@@ -0,0 +1,81 @@
import collections
import time
from typing import List, Optional
import cv2
import dm_env
import numpy as np
import torch
from unitree_deploy.robot.robot_utils import make_robot
from unitree_deploy.robot_devices.robots_devices_utils import precise_wait
from unitree_deploy.utils.rich_logger import log_success
class UnitreeEnv:
def __init__(
self,
robot_type: str = "z1_realsense",
dt: float = 1 / 30,
init_pose_arm: np.ndarray | List[float] | None = None,
):
self.control_dt = dt
self.init_pose_arm = init_pose_arm
self.state: Optional[np.ndarray] = None
self.robot_type = robot_type
self.robot = make_robot(self.robot_type)
def connect(self):
self.robot.connect()
def _get_obs(self):
observation = self.robot.capture_observation()
# Process images
image_dict = {
key.split("observation.images.")[-1]: cv2.cvtColor(value.numpy(), cv2.COLOR_BGR2RGB)
for key, value in observation.items()
if key.startswith("observation.images.")
}
# for image_key, image in image_dict.items():
# cv2.imwrite(f"{image_key}.png", image)
# Process state
self.state = observation["observation.state"].numpy()
# Construct observation dictionary
obs = collections.OrderedDict(
qpos=self.state,
qvel=np.zeros_like(self.state),
effort=np.zeros_like(self.state),
images=image_dict,
)
return obs
def get_observation(self, t=0):
step_type = dm_env.StepType.FIRST if t == 0 else dm_env.StepType.MID
return dm_env.TimeStep(step_type=step_type, reward=0, discount=None, observation=self._get_obs())
def step(self, action) -> dm_env.TimeStep:
t_cycle_end = time.monotonic() + self.control_dt
t_command_target = t_cycle_end + self.control_dt
self.robot.send_action(torch.from_numpy(action), t_command_target)
precise_wait(t_cycle_end)
return dm_env.TimeStep(
step_type=dm_env.StepType.MID,
reward=0,
discount=None,
observation=self._get_obs(),
)
def close(self) -> None:
self.robot.disconnect()
log_success("Robot disconnected successfully! 🎉")
def make_real_env(
robot_type: str, dt: float = 1 / 30, init_pose_arm: np.ndarray | List[float] | None = None
) -> UnitreeEnv:
return UnitreeEnv(robot_type, dt, init_pose_arm)


@@ -0,0 +1,147 @@
import time
import torch
from unitree_deploy.robot.robot_configs import UnitreeRobotConfig
from unitree_deploy.robot_devices.arm.utils import make_arm_motors_buses_from_configs
from unitree_deploy.robot_devices.cameras.utils import make_cameras_from_configs
from unitree_deploy.robot_devices.endeffector.utils import (
make_endeffector_motors_buses_from_configs,
)
from unitree_deploy.utils.rich_logger import log_success
class UnitreeRobot:
def __init__(
self,
config: UnitreeRobotConfig,
):
self.config = config
self.robot_type = self.config.type
self.cameras = make_cameras_from_configs(self.config.cameras)
self.arm = make_arm_motors_buses_from_configs(self.config.arm)
self.endeffector = make_endeffector_motors_buses_from_configs(self.config.endeffector)
self.initial_data_received = True
def connect(self):
if not self.arm and not self.endeffector and not self.cameras:
raise ValueError(
"UnitreeRobot doesn't have any device to connect. See example of usage in docstring of the class."
)
# Connect the cameras
for name in self.cameras:
self.cameras[name].connect()
log_success(f"Connecting {name} cameras.")
for _ in range(20):
for name in self.cameras:
self.cameras[name].async_read()
time.sleep(1 / 30)
for name in self.arm:
self.arm[name].connect()
log_success(f"Connecting {name} arm.")
for name in self.endeffector:
self.endeffector[name].connect()
log_success(f"Connecting {name} endeffector.")
time.sleep(2)
self.is_connected = True
log_success("All devices connected successfully! ✅")
def capture_observation(self):
"""The returned observations do not have a batch dimension."""
# Create state by concatenating follower current position
state = []
arm_state_list = []
endeffector_state_list = []
for arm_name in self.arm:
arm_state = self.arm[arm_name].read_current_arm_q()
arm_state_list.append(torch.from_numpy(arm_state))
for endeffector_name in self.endeffector:
endeffector_state = self.endeffector[endeffector_name].read_current_endeffector_q()
endeffector_state_list.append(torch.from_numpy(endeffector_state))
state = (
torch.cat(arm_state_list + endeffector_state_list, dim=0)
if arm_state_list or endeffector_state_list
else torch.tensor([])
)
# Capture images from cameras
images = {}
for name in self.cameras:
output = self.cameras[name].async_read()
if isinstance(output, dict):
images.update({k: torch.from_numpy(v) for k, v in output.items()})
else:
images[name] = torch.from_numpy(output)
# Populate the output dictionary and format to PyTorch
obs_dict = {}
obs_dict["observation.state"] = state
for name, value in images.items():
obs_dict[f"observation.images.{name}"] = value
return obs_dict
def send_action(self, action: torch.Tensor, t_command_target: float | None = None) -> torch.Tensor:
from_idx_arm = 0
to_idx_arm = 0
action_sent_arm = []
cmd_target = "drive_to_waypoint" if self.initial_data_received else "schedule_waypoint"
for arm_name in self.arm:
to_idx_arm += len(self.arm[arm_name].motor_names)
action_arm = action[from_idx_arm:to_idx_arm].numpy()
from_idx_arm = to_idx_arm
action_sent_arm.append(torch.from_numpy(action_arm))
self.arm[arm_name].write_arm(
action_arm,
time_target=t_command_target - time.monotonic() + time.perf_counter(),
cmd_target=cmd_target,
)
from_idx_endeffector = to_idx_arm
to_idx_endeffector = to_idx_arm
action_endeffector_set = []
for endeffector_name in self.endeffector:
to_idx_endeffector += len(self.endeffector[endeffector_name].motor_names)
action_endeffector = action[from_idx_endeffector:to_idx_endeffector].numpy()
from_idx_endeffector = to_idx_endeffector
action_endeffector_set.append(torch.from_numpy(action_endeffector))
self.endeffector[endeffector_name].write_endeffector(
action_endeffector,
time_target=t_command_target - time.monotonic() + time.perf_counter(),
cmd_target=cmd_target,
)
self.initial_data_received = False
return torch.cat(action_sent_arm + action_endeffector_set, dim=0)
def disconnect(self):
# Disconnect the arms
for name in self.arm:
self.arm[name].disconnect()
log_success(f"Disconnected {name} arm.")
for name in self.endeffector:
self.endeffector[name].disconnect()
log_success(f"Disconnected {name} endeffector.")
# Disconnect the cameras
for name in self.cameras:
self.cameras[name].disconnect()
log_success(f"Disconnected {name} cameras.")
self.is_connected = False
def __del__(self):
if getattr(self, "is_connected", False):
self.disconnect()
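`send_action` above walks one flat action vector and slices it per device by motor count. The index bookkeeping in isolation (device and motor names here are hypothetical):

```python
def split_action(action, device_motor_names):
    # Slice a flat action vector into per-device chunks, in dict order,
    # using each device's motor count as the slice width.
    slices, start = {}, 0
    for device, motor_names in device_motor_names.items():
        end = start + len(motor_names)
        slices[device] = action[start:end]
        start = end
    return slices

action = [0.1, 0.2, 0.3, 0.4, 0.5]
groups = split_action(action, {"arm": ["j0", "j1", "j2", "j3"], "gripper": ["g0"]})
```

This only works if the policy's action layout matches the device iteration order (arms first, then end effectors), which is exactly the contract `send_action` relies on.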


@@ -0,0 +1,270 @@
import abc
from dataclasses import dataclass, field
import draccus
import numpy as np
from unitree_deploy.robot_devices.arm.configs import (
ArmConfig,
G1ArmConfig,
Z1ArmConfig,
Z1DualArmConfig,
)
from unitree_deploy.robot_devices.cameras.configs import (
CameraConfig,
ImageClientCameraConfig,
IntelRealSenseCameraConfig,
OpenCVCameraConfig,
)
from unitree_deploy.robot_devices.endeffector.configs import (
Dex1_GripperConfig,
EndEffectorConfig,
)
# ======================== arm motors =================================
# name: (index, model)
g1_motors = {
"kLeftShoulderPitch": [0, "g1-joint"],
"kLeftShoulderRoll": [1, "g1-joint"],
"kLeftShoulderYaw": [2, "g1-joint"],
"kLeftElbow": [3, "g1-joint"],
"kLeftWristRoll": [4, "g1-joint"],
"kLeftWristPitch": [5, "g1-joint"],
"kLeftWristyaw": [6, "g1-joint"],
"kRightShoulderPitch": [7, "g1-joint"],
"kRightShoulderRoll": [8, "g1-joint"],
"kRightShoulderYaw": [9, "g1-joint"],
"kRightElbow": [10, "g1-joint"],
"kRightWristRoll": [11, "g1-joint"],
"kRightWristPitch": [12, "g1-joint"],
"kRightWristYaw": [13, "g1-joint"],
}
z1_motors = {
"kWaist": [0, "z1-joint"],
"kShoulder": [1, "z1-joint"],
"kElbow": [2, "z1-joint"],
"kForearmRoll": [3, "z1-joint"],
"kWristAngle": [4, "z1-joint"],
"kWristRotate": [5, "z1-joint"],
"kGripper": [6, "z1-joint"],
}
z1_dual_motors = {
"kLeftWaist": [0, "z1-joint"],
"kLeftShoulder": [1, "z1-joint"],
"kLeftElbow": [2, "z1-joint"],
"kLeftForearmRoll": [3, "z1-joint"],
"kLeftWristAngle": [4, "z1-joint"],
"kLeftWristRotate": [5, "z1-joint"],
"kRightWaist": [7, "z1-joint"],
"kRightShoulder": [8, "z1-joint"],
"kRightElbow": [9, "z1-joint"],
"kRightForearmRoll": [10, "z1-joint"],
"kRightWristAngle": [11, "z1-joint"],
"kRightWristRotate": [12, "z1-joint"],
}
# =========================================================
# ======================== camera =================================
def z1_intelrealsense_camera_default_factory():
return {
"cam_high": IntelRealSenseCameraConfig(
serial_number="044122071036",
fps=30,
width=640,
height=480,
),
# "cam_wrist": IntelRealSenseCameraConfig(
# serial_number="419122270615",
# fps=30,
# width=640,
# height=480,
# ),
}
def z1_dual_intelrealsense_camera_default_factory():
return {
# "cam_left_wrist": IntelRealSenseCameraConfig(
# serial_number="218722271166",
# fps=30,
# width=640,
# height=480,
# ),
# "cam_right_wrist": IntelRealSenseCameraConfig(
# serial_number="419122270677",
# fps=30,
# width=640,
# height=480,
# ),
"cam_high": IntelRealSenseCameraConfig(
serial_number="947522071393",
fps=30,
width=640,
height=480,
),
}
def g1_image_client_default_factory():
return {
"imageclient": ImageClientCameraConfig(
head_camera_type="opencv",
head_camera_id_numbers=[4],
head_camera_image_shape=[480, 1280], # Head camera resolution
wrist_camera_type="opencv",
wrist_camera_id_numbers=[0, 2],
wrist_camera_image_shape=[480, 640], # Wrist camera resolution
aspect_ratio_threshold=2.0,
fps=30,
mock=False,
),
}
def usb_camera_default_factory():
return {
"cam_high": OpenCVCameraConfig(
camera_index="/dev/video1",
fps=30,
width=640,
height=480,
),
"cam_left_wrist": OpenCVCameraConfig(
camera_index="/dev/video5",
fps=30,
width=640,
height=480,
),
"cam_right_wrist": OpenCVCameraConfig(
camera_index="/dev/video3",
fps=30,
width=640,
height=480,
),
}
# =========================================================
# ======================== endeffector =================================
def dex1_default_factory():
return {
"left": Dex1_GripperConfig(
unit_test=True,
motors={
"kLeftGripper": [0, "z1_gripper-joint"],
},
topic_gripper_state="rt/dex1/left/state",
topic_gripper_command="rt/dex1/left/cmd",
),
"right": Dex1_GripperConfig(
unit_test=True,
motors={
"kRightGripper": [1, "z1_gripper-joint"],
},
topic_gripper_state="rt/dex1/right/state",
topic_gripper_command="rt/dex1/right/cmd",
),
}
# =========================================================
# ======================== arm =================================
def z1_arm_default_factory(init_pose=None):
return {
"z1": Z1ArmConfig(
init_pose=np.zeros(7) if init_pose is None else init_pose,
motors=z1_motors,
),
}
def z1_dual_arm_single_config_factory(init_pose=None):
return {
"z1_dual": Z1DualArmConfig(
left_robot_ip="127.0.0.1",
left_robot_port1=8073,
left_robot_port2=8074,
right_robot_ip="127.0.0.1",
right_robot_port1=8071,
right_robot_port2=8072,
init_pose_left=np.zeros(6) if init_pose is None else init_pose[:6],
init_pose_right=np.zeros(6) if init_pose is None else init_pose[6:],
control_dt=1 / 250.0,
motors=z1_dual_motors,
),
}
def g1_dual_arm_default_factory(init_pose=None):
return {
"g1": G1ArmConfig(
init_pose=np.zeros(14) if init_pose is None else init_pose,
motors=g1_motors,
mock=False,
),
}
# =========================================================
# robot_type: arm devices _ endeffector devices _ camera devices
@dataclass
class RobotConfig(draccus.ChoiceRegistry, abc.ABC):
@property
def type(self) -> str:
return self.get_choice_name(self.__class__)
@dataclass
class UnitreeRobotConfig(RobotConfig):
cameras: dict[str, CameraConfig] = field(default_factory=lambda: {})
arm: dict[str, ArmConfig] = field(default_factory=lambda: {})
endeffector: dict[str, EndEffectorConfig] = field(default_factory=lambda: {})
# =============================== Single-arm:z1, Camera:Realsense ========================================
@RobotConfig.register_subclass("z1_realsense")
@dataclass
class Z1_Realsense_RobotConfig(UnitreeRobotConfig):
cameras: dict[str, CameraConfig] = field(default_factory=z1_intelrealsense_camera_default_factory)
arm: dict[str, ArmConfig] = field(default_factory=z1_arm_default_factory)
# =============================== Dual-arm:z1, Endeffector:dex1, Camera:Realsense ========================================
@RobotConfig.register_subclass("z1_dual_dex1_realsense")
@dataclass
class Z1dual_Dex1_Realsense_RobotConfig(UnitreeRobotConfig):
cameras: dict[str, CameraConfig] = field(default_factory=z1_dual_intelrealsense_camera_default_factory)
arm: dict[str, ArmConfig] = field(default_factory=z1_dual_arm_single_config_factory)
endeffector: dict[str, EndEffectorConfig] = field(default_factory=dex1_default_factory)
# =============================== Dual-arm:z1, Endeffector:dex1, Camera:Opencv ========================================
@RobotConfig.register_subclass("z1_dual_dex1_opencv")
@dataclass
class Z1dual_Dex1_Opencv_RobotConfig(UnitreeRobotConfig):
cameras: dict[str, CameraConfig] = field(default_factory=usb_camera_default_factory)
arm: dict[str, ArmConfig] = field(default_factory=z1_dual_arm_single_config_factory)
endeffector: dict[str, EndEffectorConfig] = field(default_factory=dex1_default_factory)
# =============================== Arm:g1, Endeffector:dex1, Camera:imageclient ========================================
@RobotConfig.register_subclass("g1_dex1")
@dataclass
class G1_Dex1_Imageclint_RobotConfig(UnitreeRobotConfig):
cameras: dict[str, CameraConfig] = field(default_factory=g1_image_client_default_factory)
arm: dict[str, ArmConfig] = field(default_factory=g1_dual_arm_default_factory)
endeffector: dict[str, EndEffectorConfig] = field(default_factory=dex1_default_factory)
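The configs above pass `default_factory` functions to `field(...)` so that every robot config instance gets its own camera/arm/endeffector dicts. A minimal sketch of why that pattern is needed (class and key names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class RobotCfgSketch:
    # A bare `cameras: dict = {}` default is rejected by dataclasses;
    # `default_factory` builds a fresh dict for each instance instead.
    cameras: dict = field(default_factory=dict)

a, b = RobotCfgSketch(), RobotCfgSketch()
a.cameras["cam_high"] = "realsense"
# b.cameras stays empty: the two instances do not share one dict.
```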


@@ -0,0 +1,47 @@
from typing import Protocol
from unitree_deploy.robot.robot_configs import (
G1_Dex1_Imageclint_RobotConfig,
RobotConfig,
Z1_Realsense_RobotConfig,
Z1dual_Dex1_Opencv_RobotConfig,
Z1dual_Dex1_Realsense_RobotConfig,
)
def get_arm_id(name, arm_type):
return f"{name}_{arm_type}"
class Robot(Protocol):
robot_type: str
features: dict
def connect(self): ...
def capture_observation(self): ...
def send_action(self, action): ...
def disconnect(self): ...
def make_robot_config(robot_type: str, **kwargs) -> RobotConfig:
if robot_type == "z1_realsense":
return Z1_Realsense_RobotConfig(**kwargs)
elif robot_type == "z1_dual_dex1_realsense":
return Z1dual_Dex1_Realsense_RobotConfig(**kwargs)
elif robot_type == "z1_dual_dex1_opencv":
return Z1dual_Dex1_Opencv_RobotConfig(**kwargs)
elif robot_type == "g1_dex1":
return G1_Dex1_Imageclint_RobotConfig(**kwargs)
else:
raise ValueError(f"Robot type '{robot_type}' is not available.")
def make_robot_from_config(config: RobotConfig):
from unitree_deploy.robot.robot import UnitreeRobot
return UnitreeRobot(config)
def make_robot(robot_type: str, **kwargs) -> Robot:
config = make_robot_config(robot_type, **kwargs)
return make_robot_from_config(config)
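`make_robot_config` above is a plain string-to-config dispatch. A self-contained sketch of the same pattern using a lookup table instead of an `if/elif` chain (the config classes here are stand-ins, not the repo's):

```python
from dataclasses import dataclass

@dataclass
class Z1Config:
    fps: int = 30

@dataclass
class G1Config:
    fps: int = 30

_REGISTRY = {"z1_realsense": Z1Config, "g1_dex1": G1Config}

def make_robot_config(robot_type, **kwargs):
    try:
        # Unknown keys fail fast with the same error text as the chain above.
        return _REGISTRY[robot_type](**kwargs)
    except KeyError:
        raise ValueError(f"Robot type '{robot_type}' is not available.") from None

cfg = make_robot_config("g1_dex1", fps=60)
```

A table keeps registration and dispatch in one place, which is also what draccus's `ChoiceRegistry` does for the config classes themselves.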


@@ -0,0 +1,119 @@
# noqa: N815
from enum import IntEnum
# ==================g1========================
class G1_29_JointArmIndex(IntEnum):
# Left arm
kLeftShoulderPitch = 15
kLeftShoulderRoll = 16
kLeftShoulderYaw = 17
kLeftElbow = 18
kLeftWristRoll = 19
kLeftWristPitch = 20
kLeftWristyaw = 21
# Right arm
kRightShoulderPitch = 22
kRightShoulderRoll = 23
kRightShoulderYaw = 24
kRightElbow = 25
kRightWristRoll = 26
kRightWristPitch = 27
kRightWristYaw = 28
class G1_29_JointIndex(IntEnum):
# Left leg
kLeftHipPitch = 0
kLeftHipRoll = 1
kLeftHipYaw = 2
kLeftKnee = 3
kLeftAnklePitch = 4
kLeftAnkleRoll = 5
# Right leg
kRightHipPitch = 6
kRightHipRoll = 7
kRightHipYaw = 8
kRightKnee = 9
kRightAnklePitch = 10
kRightAnkleRoll = 11
kWaistYaw = 12
kWaistRoll = 13
kWaistPitch = 14
# Left arm
kLeftShoulderPitch = 15
kLeftShoulderRoll = 16
kLeftShoulderYaw = 17
kLeftElbow = 18
kLeftWristRoll = 19
kLeftWristPitch = 20
kLeftWristyaw = 21
# Right arm
kRightShoulderPitch = 22
kRightShoulderRoll = 23
kRightShoulderYaw = 24
kRightElbow = 25
kRightWristRoll = 26
kRightWristPitch = 27
kRightWristYaw = 28
# not used
kNotUsedJoint0 = 29
kNotUsedJoint1 = 30
kNotUsedJoint2 = 31
kNotUsedJoint3 = 32
kNotUsedJoint4 = 33
kNotUsedJoint5 = 34
# ==========================================
# ==================z1========================
class Z1ArmJointIndex(IntEnum):
WAIST = 0
SHOULDER = 1
ELBOW = 2
FOREARM_ROLL = 3
WRIST_ANGLE = 4
WRIST_ROTATE = 5
class Z1_12_JointArmIndex(IntEnum):
# Left arm
kLeftWaist = 0
kLeftShoulder = 1
kLeftElbow = 2
kLeftForearmRoll = 3
kLeftWristAngle = 4
kLeftWristRotate = 5
# Right arm
kRightWaist = 6
kRightShoulder = 7
kRightElbow = 8
kRightForearmRoll = 9
kRightWristAngle = 10
kRightWristRotate = 11
class Z1GripperArmJointIndex(IntEnum):
WAIST = 0
SHOULDER = 1
ELBOW = 2
FOREARM_ROLL = 3
WRIST_ANGLE = 4
WRIST_ROTATE = 5
GRIPPER = 6
class Gripper_Sigle_JointIndex(IntEnum):
kGripper = 0
# ==========================================
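The `IntEnum` classes above double as array indices: iterating `G1_29_JointArmIndex`, for example, picks the 14 arm joints (values 15-28) out of the whole-body motor state. A stdlib sketch of that extraction with a toy 3-joint table (not the real index map):

```python
from enum import IntEnum

class ToyJointIndex(IntEnum):
    kWaist = 0
    kShoulder = 1
    kElbow = 2

class ToyArmIndex(IntEnum):  # subset of the full table, like G1_29_JointArmIndex
    kShoulder = 1
    kElbow = 2

full_body_q = [0.10, 0.25, -0.40]
# IntEnum members are ints, so they index the state array directly,
# and iteration follows definition order.
arm_q = [full_body_q[j] for j in ToyArmIndex]
```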


@@ -0,0 +1,82 @@
import abc
from dataclasses import dataclass
import draccus
import numpy as np
@dataclass
class ArmConfig(draccus.ChoiceRegistry, abc.ABC):
@property
def type(self) -> str:
return self.get_choice_name(self.__class__)
@ArmConfig.register_subclass("z1")
@dataclass
class Z1ArmConfig(ArmConfig):
motors: dict[str, tuple[int, str]]
    init_pose: list | None = None
unit_test: bool = False
control_dt: float = 1 / 500.0
robot_kp: np.ndarray = np.array([4, 6, 6, 6, 6, 6])
robot_kd: np.ndarray = np.array([350, 300, 300, 200, 200, 200])
max_pos_speed: float = 180 * (np.pi / 180) * 2
log_level: str | int = "ERROR"
def __post_init__(self):
if self.control_dt < 0.002:
            raise ValueError(f"`control_dt` must be >= 1/500 s (got {self.control_dt})")
@ArmConfig.register_subclass("z1_dual")
@dataclass
class Z1DualArmConfig(ArmConfig):
left_robot_ip: str
left_robot_port1: int
left_robot_port2: int
right_robot_ip: str
right_robot_port1: int
right_robot_port2: int
motors: dict[str, tuple[int, str]]
robot_kp: np.ndarray = np.array([4, 6, 6, 6, 6, 6])
robot_kd: np.ndarray = np.array([350, 300, 300, 200, 200, 200])
mock: bool = False
unit_test: bool = False
init_pose_left: list | None = None
init_pose_right: list | None = None
max_pos_speed: float = 180 * (np.pi / 180) * 2
control_dt: float = 1 / 500.0
def __post_init__(self):
if self.control_dt < 0.002:
            raise ValueError(f"`control_dt` must be >= 1/500 s (got {self.control_dt})")
@ArmConfig.register_subclass("g1")
@dataclass
class G1ArmConfig(ArmConfig):
motors: dict[str, tuple[int, str]]
mock: bool = False
unit_test: bool = False
init_pose: np.ndarray | list = np.zeros(14)
control_dt: float = 1 / 500.0
max_pos_speed: float = 180 * (np.pi / 180) * 2
topic_low_command: str = "rt/lowcmd"
topic_low_state: str = "rt/lowstate"
kp_high: float = 300.0
kd_high: float = 3.0
kp_low: float = 80.0 # 140.0
kd_low: float = 3.0 # 3.0
kp_wrist: float = 40.0
kd_wrist: float = 1.5
def __post_init__(self):
if self.control_dt < 0.002:
            raise ValueError(f"`control_dt` must be >= 1/500 s (got {self.control_dt})")
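Each config's `__post_init__` rejects loop periods shorter than 1/500 s, the fastest control rate the hardware loop supports. A minimal sketch of that guard (the class name is illustrative):

```python
from dataclasses import dataclass

@dataclass
class LoopConfig:
    control_dt: float = 1 / 500.0

    def __post_init__(self):
        # 1/500 s = 0.002 s is the fastest supported control period.
        if self.control_dt < 0.002:
            raise ValueError(f"`control_dt` must be >= 1/500 s (got {self.control_dt})")

ok = LoopConfig(control_dt=1 / 250.0)  # 4 ms period: accepted
```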


@@ -0,0 +1,385 @@
import threading
import time
from typing import Callable
import numpy as np
from unitree_sdk2py.core.channel import ChannelFactoryInitialize, ChannelPublisher, ChannelSubscriber # dds
from unitree_sdk2py.idl.default import unitree_hg_msg_dds__LowCmd_
from unitree_sdk2py.idl.unitree_hg.msg.dds_ import LowCmd_, LowState_ # idl
from unitree_sdk2py.utils.crc import CRC
from unitree_deploy.robot_devices.arm.arm_indexs import G1_29_JointArmIndex, G1_29_JointIndex
from unitree_deploy.robot_devices.arm.configs import G1ArmConfig
from unitree_deploy.robot_devices.arm.g1_arm_ik import G1_29_ArmIK
from unitree_deploy.robot_devices.robots_devices_utils import (
DataBuffer,
MotorState,
Robot_Num_Motors,
RobotDeviceAlreadyConnectedError,
)
from unitree_deploy.utils.joint_trajcetory_inter import JointTrajectoryInterpolator
from unitree_deploy.utils.rich_logger import log_error, log_info, log_success, log_warning
from unitree_deploy.utils.run_simulation import MujicoSimulation, get_mujoco_sim_config
class G1_29_LowState:
def __init__(self):
self.motor_state = [MotorState() for _ in range(Robot_Num_Motors.G1_29_Num_Motors)]
class G1_29_ArmController:
def __init__(self, config: G1ArmConfig):
self.motors = config.motors
self.mock = config.mock
self.unit_test = config.unit_test
self.init_pose = config.init_pose
self.control_dt = config.control_dt
self.max_pos_speed = config.max_pos_speed
self.topic_low_command = config.topic_low_command
self.topic_low_state = config.topic_low_state
self.kp_high = config.kp_high
self.kd_high = config.kd_high
self.kp_low = config.kp_low
self.kd_low = config.kd_low
self.kp_wrist = config.kp_wrist
self.kd_wrist = config.kd_wrist
self.all_motor_q = None
self.q_target = np.zeros(14)
self.dq_target = np.zeros(14)
self.tauff_target = np.zeros(14)
self.time_target = time.monotonic()
self.arm_cmd = "schedule_waypoint"
self.lowstate_buffer = DataBuffer()
self.g1_arm_ik = G1_29_ArmIK(unit_test=self.unit_test, visualization=False)
self.stop_event = threading.Event()
self.ctrl_lock = threading.Lock()
self.is_connected = False
@property
def motor_names(self) -> list[str]:
return list(self.motors.keys())
@property
def motor_models(self) -> list[str]:
return [model for _, model in self.motors.values()]
@property
def motor_indices(self) -> list[int]:
return [idx for idx, _ in self.motors.values()]
def _start_daemon_thread(self, target_fn: Callable[[], None], name: str | None = None) -> threading.Thread:
thread = threading.Thread(target=target_fn, name=name)
thread.daemon = True
thread.start()
return thread
def connect(self):
try:
if self.is_connected:
raise RobotDeviceAlreadyConnectedError(
"G1_Arm is already connected. Do not call `robot.connect()` twice."
)
if self.mock:
config = get_mujoco_sim_config(robot_type="g1")
self.g1 = MujicoSimulation(config)
time.sleep(1)
else:
# initialize lowcmd publisher and lowstate subscriber
ChannelFactoryInitialize(0)
self.lowcmd_publisher = ChannelPublisher(self.topic_low_command, LowCmd_)
self.lowcmd_publisher.Init()
self.lowstate_subscriber = ChannelSubscriber(self.topic_low_state, LowState_)
self.lowstate_subscriber.Init()
# initialize subscribe thread
self.subscribe_thread = self._start_daemon_thread(
self._subscribe_motor_state, name="g1._subscribe_motor_state"
)
while not self.lowstate_buffer.get_data():
time.sleep(0.01)
log_warning("[G1_29_ArmController] Waiting to subscribe dds...")
if not self.mock:
# initialize hg's lowcmd msg
self.crc = CRC()
self.msg = unitree_hg_msg_dds__LowCmd_()
self.msg.mode_pr = 0
self.msg.mode_machine = self._read_mode_machine()
self.all_motor_q = self._read_current_motor_q()
log_info(f"Current all body motor state q:\n{self.all_motor_q} \n")
log_info(f"Current two arms motor state q:\n{self.read_current_arm_q()}\n")
log_info("Lock all joints except two arms...\n")
arm_indices = {member.value for member in G1_29_JointArmIndex}
for id in G1_29_JointIndex:
self.msg.motor_cmd[id].mode = 1
if id.value in arm_indices:
if self._is_wrist_motor(id):
self.msg.motor_cmd[id].kp = self.kp_wrist
self.msg.motor_cmd[id].kd = self.kd_wrist
else:
self.msg.motor_cmd[id].kp = self.kp_low
self.msg.motor_cmd[id].kd = self.kd_low
else:
if self._is_weak_motor(id):
self.msg.motor_cmd[id].kp = self.kp_low
self.msg.motor_cmd[id].kd = self.kd_low
else:
self.msg.motor_cmd[id].kp = self.kp_high
self.msg.motor_cmd[id].kd = self.kd_high
self.msg.motor_cmd[id].q = self.all_motor_q[id]
log_info("Lock OK!\n")
# initialize publish thread
self.publish_thread = self._start_daemon_thread(self._ctrl_motor_state, name="g1._ctrl_motor_state")
self.is_connected = True
except Exception as e:
self.disconnect()
log_error(f"❌ Error in G1_29_ArmController.connect: {e}")
def _subscribe_motor_state(self):
try:
while not self.stop_event.is_set():
lowstate = G1_29_LowState()
if self.mock:
                    # Read the sim state once per tick; calling into the sim three
                    # times risked seeing different snapshots within one update.
                    positions = self.g1.get_current_positions()
                    if positions is not None and len(positions) != 0:
                        for motor_id in range(Robot_Num_Motors.G1_29_Num_Motors):
                            lowstate.motor_state[motor_id].q = positions[motor_id]
                            lowstate.motor_state[motor_id].dq = 0.0
                    else:
                        print("[WARN] get_current_positions() failed: queue is empty.")
else:
msg = self.lowstate_subscriber.Read()
if msg is not None:
for id in range(Robot_Num_Motors.G1_29_Num_Motors):
lowstate.motor_state[id].q = msg.motor_state[id].q
lowstate.motor_state[id].dq = msg.motor_state[id].dq
self.lowstate_buffer.set_data(lowstate)
time.sleep(self.control_dt)
except Exception as e:
self.disconnect()
log_error(f"❌ Error in G1_29_ArmController._subscribe_motor_state: {e}")
def _update_g1_arm(
self,
arm_q_target: np.ndarray,
arm_dq_target: np.ndarray | None = None,
arm_tauff_target: np.ndarray | None = None,
):
if self.mock:
self.g1.set_positions(arm_q_target)
else:
for idx, id in enumerate(G1_29_JointArmIndex):
self.msg.motor_cmd[id].q = arm_q_target[idx]
self.msg.motor_cmd[id].dq = arm_dq_target[idx]
self.msg.motor_cmd[id].tau = arm_tauff_target[idx]
self.msg.crc = self.crc.Crc(self.msg)
self.lowcmd_publisher.Write(self.msg)
def _drive_to_waypoint(self, target_pose: np.ndarray, t_insert_time: float):
curr_time = time.monotonic() + self.control_dt
t_insert = curr_time + t_insert_time
self.pose_interp = self.pose_interp.drive_to_waypoint(
pose=target_pose,
time=t_insert,
curr_time=curr_time,
max_pos_speed=self.max_pos_speed,
)
while time.monotonic() < t_insert:
start_time = time.perf_counter()
cliped_arm_q_target = self.pose_interp(time.monotonic())
self._update_g1_arm(cliped_arm_q_target, self.dq_target, self.tauff_target)
time.sleep(max(0, (self.control_dt - (time.perf_counter() - start_time))))
def _schedule_waypoint(
self,
arm_q_target: np.ndarray,
arm_time_target: float,
t_now: float,
start_time: float,
last_waypoint_time: float,
arm_tauff_target: np.ndarray | None = None,
) -> float:
target_time = time.monotonic() - time.perf_counter() + arm_time_target
curr_time = t_now + self.control_dt
target_time = max(target_time, curr_time + self.control_dt)
self.pose_interp = self.pose_interp.schedule_waypoint(
pose=arm_q_target,
time=target_time,
max_pos_speed=self.max_pos_speed,
curr_time=curr_time,
last_waypoint_time=last_waypoint_time,
)
last_waypoint_time = target_time
cliped_arm_q_target = self.pose_interp(t_now)
self._update_g1_arm(cliped_arm_q_target, self.dq_target, arm_tauff_target)
        time.sleep(max(0, (self.control_dt - (time.perf_counter() - start_time))))
        return last_waypoint_time
def _ctrl_motor_state(self):
        # wait for DDS initialization to finish
time.sleep(2)
self.pose_interp = JointTrajectoryInterpolator(
times=[time.monotonic()], joint_positions=[self.read_current_arm_q()]
)
self.go_start()
arm_q_target = self.read_current_arm_q()
arm_tauff_target = self.tauff_target
arm_time_target = time.monotonic()
arm_cmd = "schedule_waypoint"
last_waypoint_time = time.monotonic()
while not self.stop_event.is_set():
start_time = time.perf_counter()
t_now = time.monotonic()
with self.ctrl_lock:
arm_q_target = self.q_target
arm_tauff_target = self.tauff_target
arm_time_target = self.time_target
arm_cmd = self.arm_cmd
if arm_cmd == "drive_to_waypoint":
self._drive_to_waypoint(target_pose=arm_q_target, t_insert_time=0.8)
elif arm_cmd == "schedule_waypoint":
self._schedule_waypoint(
arm_q_target=arm_q_target,
arm_time_target=arm_time_target,
t_now=t_now,
start_time=start_time,
last_waypoint_time=last_waypoint_time,
arm_tauff_target=arm_tauff_target,
)
def _read_mode_machine(self):
"""Return current dds mode machine."""
return self.lowstate_subscriber.Read().mode_machine
def _read_current_motor_q(self) -> np.ndarray | None:
"""Return current state q of all body motors."""
return np.array([self.lowstate_buffer.get_data().motor_state[id].q for id in G1_29_JointIndex])
def read_current_arm_q(self) -> np.ndarray | None:
"""Return current state q of the left and right arm motors."""
return np.array([self.lowstate_buffer.get_data().motor_state[id].q for id in G1_29_JointArmIndex])
def read_current_arm_dq(self) -> np.ndarray | None:
"""Return current state dq of the left and right arm motors."""
return np.array([self.lowstate_buffer.get_data().motor_state[id].dq for id in G1_29_JointArmIndex])
def write_arm(
self,
q_target: list[float] | np.ndarray,
        tauff_target: list[float] | np.ndarray | None = None,
time_target: float | None = None,
cmd_target: str | None = None,
):
"""Set control target values q & tau of the left and right arm motors."""
with self.ctrl_lock:
self.q_target = q_target
self.tauff_target = tauff_target if tauff_target is not None else self.arm_tau(self.q_target)
self.time_target = time_target
self.arm_cmd = cmd_target
def arm_ik(
self, l_ee_target: list[float] | np.ndarray, r_ee_target: list[float] | np.ndarray
) -> tuple[np.ndarray, np.ndarray] | None:
return self.g1_arm_ik.solve_ik(l_ee_target, r_ee_target, self.read_current_arm_q(), self.read_current_arm_dq())
def arm_tau(
self, current_arm_q: np.ndarray | None = None, current_arm_dq: np.ndarray | None = None
) -> np.ndarray | None:
return self.g1_arm_ik.solve_tau(current_arm_q, current_arm_dq)
def arm_fk(self, q: np.ndarray | None = None) -> np.ndarray | None:
pass
def go_start(self):
self._drive_to_waypoint(target_pose=self.init_pose, t_insert_time=2.0)
log_success("[G1_29_ArmController] Go Start OK!\n")
def go_home(self):
if self.mock:
self.stop_event.set()
# self.subscribe_thread.join()
# self.publish_thread.join()
time.sleep(1)
# self.g1.stop()
else:
self.stop_event.set()
self.publish_thread.join()
self._drive_to_waypoint(target_pose=self.init_pose, t_insert_time=2.0)
log_success("[G1_29_ArmController] Go Home OK!\n")
def disconnect(self):
self.is_connected = False
self.go_home()
def _is_weak_motor(self, motor_index):
weak_motors = [
G1_29_JointIndex.kLeftAnklePitch.value,
G1_29_JointIndex.kRightAnklePitch.value,
# Left arm
G1_29_JointIndex.kLeftShoulderPitch.value,
G1_29_JointIndex.kLeftShoulderRoll.value,
G1_29_JointIndex.kLeftShoulderYaw.value,
G1_29_JointIndex.kLeftElbow.value,
# Right arm
G1_29_JointIndex.kRightShoulderPitch.value,
G1_29_JointIndex.kRightShoulderRoll.value,
G1_29_JointIndex.kRightShoulderYaw.value,
G1_29_JointIndex.kRightElbow.value,
]
return motor_index.value in weak_motors
def _is_wrist_motor(self, motor_index):
wrist_motors = [
G1_29_JointIndex.kLeftWristRoll.value,
G1_29_JointIndex.kLeftWristPitch.value,
G1_29_JointIndex.kLeftWristyaw.value,
G1_29_JointIndex.kRightWristRoll.value,
G1_29_JointIndex.kRightWristPitch.value,
G1_29_JointIndex.kRightWristYaw.value,
]
return motor_index.value in wrist_motors
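`_drive_to_waypoint` above leans on a trajectory interpolator that caps joint speed at `max_pos_speed`. A minimal single-joint sketch of a speed-limited ramp (the real `JointTrajectoryInterpolator` is multi-joint and time-stamped; this shows only the clamping idea):

```python
def step_toward(current, target, max_speed, dt):
    """Move one joint toward target, clamping per-step travel to max_speed * dt."""
    max_step = max_speed * dt
    delta = target - current
    if abs(delta) <= max_step:
        return target  # close enough to land on the target this step
    return current + (max_step if delta > 0 else -max_step)

# Ramp a joint from 0 toward 1 rad at 2 rad/s with a 2 ms control period.
q, trace = 0.0, []
for _ in range(5):
    q = step_toward(q, 1.0, max_speed=2.0, dt=0.002)
    trace.append(q)
```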


@@ -0,0 +1,280 @@
import casadi
import meshcat.geometry as mg
import numpy as np
import pinocchio as pin
from pinocchio import casadi as cpin
from pinocchio.visualize import MeshcatVisualizer
from unitree_deploy.utils.weighted_moving_filter import WeightedMovingFilter
class G1_29_ArmIK:
def __init__(self, unit_test=False, visualization=False):
np.set_printoptions(precision=5, suppress=True, linewidth=200)
self.unit_test = unit_test
self.visualization = visualization
        # Both branches loaded the same URDF, so a single call suffices for
        # unit-test and real deployment alike.
        self.robot = pin.RobotWrapper.BuildFromURDF(
            "unitree_deploy/robot_devices/assets/g1/g1_body29_hand14.urdf",
            "unitree_deploy/robot_devices/assets/g1/",
        )
self.mixed_jointsToLockIDs = [
"left_hip_pitch_joint",
"left_hip_roll_joint",
"left_hip_yaw_joint",
"left_knee_joint",
"left_ankle_pitch_joint",
"left_ankle_roll_joint",
"right_hip_pitch_joint",
"right_hip_roll_joint",
"right_hip_yaw_joint",
"right_knee_joint",
"right_ankle_pitch_joint",
"right_ankle_roll_joint",
"waist_yaw_joint",
"waist_roll_joint",
"waist_pitch_joint",
"left_hand_thumb_0_joint",
"left_hand_thumb_1_joint",
"left_hand_thumb_2_joint",
"left_hand_middle_0_joint",
"left_hand_middle_1_joint",
"left_hand_index_0_joint",
"left_hand_index_1_joint",
"right_hand_thumb_0_joint",
"right_hand_thumb_1_joint",
"right_hand_thumb_2_joint",
"right_hand_index_0_joint",
"right_hand_index_1_joint",
"right_hand_middle_0_joint",
"right_hand_middle_1_joint",
]
self.reduced_robot = self.robot.buildReducedRobot(
list_of_joints_to_lock=self.mixed_jointsToLockIDs,
reference_configuration=np.array([0.0] * self.robot.model.nq),
)
self.reduced_robot.model.addFrame(
pin.Frame(
"L_ee",
self.reduced_robot.model.getJointId("left_wrist_yaw_joint"),
pin.SE3(np.eye(3), np.array([0.05, 0, 0]).T),
pin.FrameType.OP_FRAME,
)
)
self.reduced_robot.model.addFrame(
pin.Frame(
"R_ee",
self.reduced_robot.model.getJointId("right_wrist_yaw_joint"),
pin.SE3(np.eye(3), np.array([0.05, 0, 0]).T),
pin.FrameType.OP_FRAME,
)
)
# Creating Casadi models and data for symbolic computing
self.cmodel = cpin.Model(self.reduced_robot.model)
self.cdata = self.cmodel.createData()
# Creating symbolic variables
self.cq = casadi.SX.sym("q", self.reduced_robot.model.nq, 1)
self.cTf_l = casadi.SX.sym("tf_l", 4, 4)
self.cTf_r = casadi.SX.sym("tf_r", 4, 4)
cpin.framesForwardKinematics(self.cmodel, self.cdata, self.cq)
# Get the hand joint ID and define the error function
self.L_hand_id = self.reduced_robot.model.getFrameId("L_ee")
self.R_hand_id = self.reduced_robot.model.getFrameId("R_ee")
self.translational_error = casadi.Function(
"translational_error",
[self.cq, self.cTf_l, self.cTf_r],
[
casadi.vertcat(
self.cdata.oMf[self.L_hand_id].translation - self.cTf_l[:3, 3],
self.cdata.oMf[self.R_hand_id].translation - self.cTf_r[:3, 3],
)
],
)
self.rotational_error = casadi.Function(
"rotational_error",
[self.cq, self.cTf_l, self.cTf_r],
[
casadi.vertcat(
cpin.log3(self.cdata.oMf[self.L_hand_id].rotation @ self.cTf_l[:3, :3].T),
cpin.log3(self.cdata.oMf[self.R_hand_id].rotation @ self.cTf_r[:3, :3].T),
)
],
)
# Defining the optimization problem
self.opti = casadi.Opti()
self.var_q = self.opti.variable(self.reduced_robot.model.nq)
self.var_q_last = self.opti.parameter(self.reduced_robot.model.nq) # for smooth
self.param_tf_l = self.opti.parameter(4, 4)
self.param_tf_r = self.opti.parameter(4, 4)
self.translational_cost = casadi.sumsqr(self.translational_error(self.var_q, self.param_tf_l, self.param_tf_r))
self.rotation_cost = casadi.sumsqr(self.rotational_error(self.var_q, self.param_tf_l, self.param_tf_r))
self.regularization_cost = casadi.sumsqr(self.var_q)
self.smooth_cost = casadi.sumsqr(self.var_q - self.var_q_last)
# Setting optimization constraints and goals
self.opti.subject_to(
self.opti.bounded(
self.reduced_robot.model.lowerPositionLimit,
self.var_q,
self.reduced_robot.model.upperPositionLimit,
)
)
self.opti.minimize(
50 * self.translational_cost + self.rotation_cost + 0.02 * self.regularization_cost + 0.1 * self.smooth_cost
)
opts = {
"ipopt": {"print_level": 0, "max_iter": 50, "tol": 1e-6},
"print_time": False, # print or not
"calc_lam_p": False, # https://github.com/casadi/casadi/wiki/FAQ:-Why-am-I-getting-%22NaN-detected%22in-my-optimization%3F
}
self.opti.solver("ipopt", opts)
self.init_data = np.zeros(self.reduced_robot.model.nq)
self.smooth_filter = WeightedMovingFilter(np.array([0.4, 0.3, 0.2, 0.1]), 14)
self.vis = None
if self.visualization:
# Initialize the Meshcat visualizer for visualization
self.vis = MeshcatVisualizer(
self.reduced_robot.model, self.reduced_robot.collision_model, self.reduced_robot.visual_model
)
self.vis.initViewer(open=True)
self.vis.loadViewerModel("pinocchio")
self.vis.displayFrames(True, frame_ids=[101, 102], axis_length=0.15, axis_width=5)
self.vis.display(pin.neutral(self.reduced_robot.model))
# Enable the display of end effector target frames with short axis lengths and greater width.
frame_viz_names = ["L_ee_target", "R_ee_target"]
frame_axis_positions = (
np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]]).astype(np.float32).T
)
frame_axis_colors = (
np.array([[1, 0, 0], [1, 0.6, 0], [0, 1, 0], [0.6, 1, 0], [0, 0, 1], [0, 0.6, 1]]).astype(np.float32).T
)
axis_length = 0.1
axis_width = 10
for frame_viz_name in frame_viz_names:
self.vis.viewer[frame_viz_name].set_object(
mg.LineSegments(
mg.PointsGeometry(
position=axis_length * frame_axis_positions,
color=frame_axis_colors,
),
mg.LineBasicMaterial(
linewidth=axis_width,
vertexColors=True,
),
)
)
# If the robot arm is not the same size as your arm :)
def scale_arms(self, human_left_pose, human_right_pose, human_arm_length=0.60, robot_arm_length=0.75):
scale_factor = robot_arm_length / human_arm_length
robot_left_pose = human_left_pose.copy()
robot_right_pose = human_right_pose.copy()
robot_left_pose[:3, 3] *= scale_factor
robot_right_pose[:3, 3] *= scale_factor
return robot_left_pose, robot_right_pose
def solve_ik(self, left_wrist, right_wrist, current_lr_arm_motor_q=None, current_lr_arm_motor_dq=None):
if current_lr_arm_motor_q is not None:
self.init_data = current_lr_arm_motor_q
self.opti.set_initial(self.var_q, self.init_data)
# left_wrist, right_wrist = self.scale_arms(left_wrist, right_wrist)
if self.visualization:
self.vis.viewer["L_ee_target"].set_transform(left_wrist) # for visualization
self.vis.viewer["R_ee_target"].set_transform(right_wrist) # for visualization
self.opti.set_value(self.param_tf_l, left_wrist)
self.opti.set_value(self.param_tf_r, right_wrist)
self.opti.set_value(self.var_q_last, self.init_data) # for smooth
try:
self.opti.solve()
# sol = self.opti.solve_limited()
sol_q = self.opti.value(self.var_q)
self.smooth_filter.add_data(sol_q)
sol_q = self.smooth_filter.filtered_data
v = current_lr_arm_motor_dq * 0.0 if current_lr_arm_motor_dq is not None else (sol_q - self.init_data) * 0.0
self.init_data = sol_q
sol_tauff = pin.rnea(
self.reduced_robot.model,
self.reduced_robot.data,
sol_q,
v,
np.zeros(self.reduced_robot.model.nv),
)
if self.visualization:
self.vis.display(sol_q) # for visualization
return sol_q, sol_tauff
except Exception as e:
            print(f"ERROR in convergence, printing debug info: {e}")
sol_q = self.opti.debug.value(self.var_q)
self.smooth_filter.add_data(sol_q)
sol_q = self.smooth_filter.filtered_data
v = current_lr_arm_motor_dq * 0.0 if current_lr_arm_motor_dq is not None else (sol_q - self.init_data) * 0.0
self.init_data = sol_q
sol_tauff = pin.rnea(
self.reduced_robot.model,
self.reduced_robot.data,
sol_q,
v,
np.zeros(self.reduced_robot.model.nv),
)
print(
f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}"
)
if self.visualization:
self.vis.display(sol_q) # for visualization
# return sol_q, sol_tauff
return current_lr_arm_motor_q, np.zeros(self.reduced_robot.model.nv)
def solve_tau(self, current_lr_arm_motor_q=None, current_lr_arm_motor_dq=None):
try:
sol_tauff = pin.rnea(
self.reduced_robot.model,
self.reduced_robot.data,
current_lr_arm_motor_q,
np.zeros(14),
np.zeros(self.reduced_robot.model.nv),
)
return sol_tauff
except Exception as e:
print(f"ERROR in solve_tau: {e}")
return np.zeros(self.reduced_robot.model.nv)


@@ -0,0 +1,63 @@
from typing import Protocol
from unitree_deploy.robot_devices.arm.configs import ArmConfig, G1ArmConfig, Z1ArmConfig, Z1DualArmConfig
class Arm(Protocol):
def connect(self): ...
def disconnect(self): ...
def motor_names(self): ...
def read_current_motor_q(self): ...
def read_current_arm_q(self): ...
def read_current_arm_dq(self): ...
def write_arm(self): ...
def arm_ik(self): ...
def arm_fk(self): ...
def go_start(self): ...
def go_home(self): ...
def make_arm_motors_buses_from_configs(armconfig: dict[str, ArmConfig]) -> list[Arm]:
arm_motors_buses = {}
for key, cfg in armconfig.items():
if cfg.type == "z1":
from unitree_deploy.robot_devices.arm.z1_arm import Z1ArmController
arm_motors_buses[key] = Z1ArmController(cfg)
elif cfg.type == "g1":
from unitree_deploy.robot_devices.arm.g1_arm import G1_29_ArmController
arm_motors_buses[key] = G1_29_ArmController(cfg)
elif cfg.type == "z1_dual":
from unitree_deploy.robot_devices.arm.z1_dual_arm import Z1_12_ArmController
arm_motors_buses[key] = Z1_12_ArmController(cfg)
else:
raise ValueError(f"The motor type '{cfg.type}' is not valid.")
return arm_motors_buses
def make_arm_motors_bus(arm_type: str, **kwargs) -> Arm:
if arm_type == "z1":
from unitree_deploy.robot_devices.arm.z1_arm import Z1ArmController
config = Z1ArmConfig(**kwargs)
return Z1ArmController(config)
elif arm_type == "z1_dual":
from unitree_deploy.robot_devices.arm.z1_dual_arm import Z1_12_ArmController
config = Z1DualArmConfig(**kwargs)
return Z1_12_ArmController(config)
elif arm_type == "g1":
from unitree_deploy.robot_devices.arm.g1_arm import G1_29_ArmController
config = G1ArmConfig(**kwargs)
return G1_29_ArmController(config)
else:
raise ValueError(f"The motor type '{arm_type}' is not valid.")
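Both factories above dispatch on a type string and import the matching controller lazily, so unused hardware stacks never get imported. A dependency-free miniature of the same registry pattern (all names here — `DummyArm`, `make_bus` — are illustrative, not part of `unitree_deploy`):

```python
# Minimal sketch of the lazy-dispatch factory pattern used by
# make_arm_motors_bus: validate the type string, build the config,
# and only then construct the controller.
from dataclasses import dataclass


@dataclass
class DummyArmConfig:
    type: str
    port: str = "eth0"


class DummyArm:
    def __init__(self, config: DummyArmConfig) -> None:
        self.config = config

    def connect(self) -> str:
        return f"connected via {self.config.port}"


def make_bus(arm_type: str, **kwargs) -> DummyArm:
    if arm_type == "dummy":
        return DummyArm(DummyArmConfig(type=arm_type, **kwargs))
    raise ValueError(f"The motor type '{arm_type}' is not valid.")


arm = make_bus("dummy", port="eth1")
print(arm.connect())  # connected via eth1
```

The real factories follow the same shape, with the import of each controller deferred into its branch.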


@@ -0,0 +1,294 @@
import os
import sys
import threading
import time
from typing import Callable
import numpy as np
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
import unitree_arm_interface as unitree_z1 # type: ignore
from unitree_deploy.robot_devices.arm.arm_indexs import Z1GripperArmJointIndex
from unitree_deploy.robot_devices.arm.configs import Z1ArmConfig
from unitree_deploy.robot_devices.arm.z1_arm_ik import Z1_Arm_IK
from unitree_deploy.robot_devices.robots_devices_utils import (
DataBuffer,
MotorState,
Robot_Num_Motors,
RobotDeviceAlreadyConnectedError,
)
from unitree_deploy.utils.joint_trajcetory_inter import JointTrajectoryInterpolator
from unitree_deploy.utils.rich_logger import RichLogger
class Z1LowState:
def __init__(self) -> None:
self.motor_state: list[MotorState] = [MotorState() for _ in range(Robot_Num_Motors.Z1_7_Num_Motors)]
class Z1ArmController:
def __init__(self, config: Z1ArmConfig):
self.motors = config.motors
self.init_pose = config.init_pose
self.unit_test = config.unit_test
self.control_dt = config.control_dt
self.robot_kp = config.robot_kp
self.robot_kd = config.robot_kd
self.max_pos_speed = config.max_pos_speed
self.log_level = config.log_level
self.q_target = self.init_pose
self.dq_target = np.zeros(len(Z1GripperArmJointIndex) - 1, dtype=np.float16)
self.ddq_target = np.zeros(len(Z1GripperArmJointIndex) - 1, dtype=np.float16)
self.tauff_target = np.zeros(len(Z1GripperArmJointIndex) - 1, dtype=np.float16)
self.ftip_target = np.zeros(len(Z1GripperArmJointIndex) - 1, dtype=np.float16)
self.time_target = time.monotonic()
self.DELTA_GRIPPER_CMD = 5.0 / 20.0 / 25.6
self.arm_cmd = "schedule_waypoint"
self.ctrl_lock = threading.Lock()
self.lowstate_buffer = DataBuffer()
self.z1_arm_ik = Z1_Arm_IK(unit_test=self.unit_test, visualization=False)
self.logger = RichLogger(self.log_level)
self.is_connected = False
self.grasped = False
@property
def motor_names(self) -> list[str]:
return list(self.motors.keys())
@property
def motor_models(self) -> list[str]:
return [model for _, model in self.motors.values()]
@property
def motor_indices(self) -> list[int]:
return [idx for idx, _ in self.motors.values()]
def _start_daemon_thread(self, target_fn: Callable[[], None], name: str | None = None) -> threading.Thread:
thread = threading.Thread(target=target_fn, name=name)
thread.daemon = True
thread.start()
return thread
def connect(self):
try:
if self.is_connected:
raise RobotDeviceAlreadyConnectedError(
"Z1_Arm is already connected. Do not run `robot.connect()` twice."
)
# Initialize arms
self.z1 = unitree_z1.ArmInterface()
self.z1_model = self.z1._ctrlComp.armModel
self.z1.setFsmLowcmd()
self.z1.lowcmd.setControlGain(self.robot_kp, self.robot_kd)
self.z1.sendRecv()
self.subscribe_thread = self._start_daemon_thread(
self._subscribe_motor_state, name="z1._subscribe_motor_state"
)
while not self.lowstate_buffer.get_data():
time.sleep(0.01)
self.logger.warning("[Z1_ArmController] Waiting for low-state data...")
self.publish_thread = self._start_daemon_thread(self._ctrl_motor_state, name="z1._ctrl_motor_state")
self.is_connected = True
except Exception as e:
self.disconnect()
self.logger.error(f"❌ Error in Z1ArmController.connect: {e}")
def _subscribe_motor_state(self):
try:
while True:
lowstate = Z1LowState()
for motor_id in range(Robot_Num_Motors.Z1_7_Num_Motors - 1):
lowstate.motor_state[motor_id].q = self.z1.lowstate.getQ()[motor_id]
lowstate.motor_state[motor_id].dq = self.z1.lowstate.getQd()[motor_id]
gripper_q = self.z1.lowstate.getGripperQ()
lowstate.motor_state[Robot_Num_Motors.Z1_7_Num_Motors - 1].q = gripper_q
lowstate.motor_state[Robot_Num_Motors.Z1_7_Num_Motors - 1].dq = 0.0
self.lowstate_buffer.set_data(lowstate)
time.sleep(self.control_dt)
except Exception as e:
self.disconnect()
self.logger.error(f"❌ Error in Z1ArmController._subscribe_motor_state: {e}")
def _update_z1_arm(
self,
q: np.ndarray,
qd: np.ndarray | None = None,
qdd: np.ndarray | None = None,
ftip: np.ndarray | None = None,
tau: np.ndarray | None = None,
):
"""Update the state and command of a given robotic arm."""
current_gripper_q = self.read_current_gripper_q()
self.z1.q = q[: len(Z1GripperArmJointIndex) - 1]
self.z1.qd = self.dq_target if qd is None else qd
qdd = self.ddq_target if qdd is None else qdd
ftip = self.ftip_target if ftip is None else ftip
self.z1.tau = self.z1_model.inverseDynamics(self.z1.q, self.z1.qd, qdd, ftip) if tau is None else tau
self.z1.setArmCmd(self.z1.q, self.z1.qd, self.z1.tau)
gripper_q = q[len(Z1GripperArmJointIndex) - 1]
self.z1.gripperQ = np.clip(
gripper_q,
current_gripper_q - self.DELTA_GRIPPER_CMD * 3,
current_gripper_q + self.DELTA_GRIPPER_CMD * 3,
)
# self.z1.gripperQ = np.clip(gripper_q, current_gripper_q - self.DELTA_GRIPPER_CMD, current_gripper_q + self.DELTA_GRIPPER_CMD) if self.grasped else np.clip(gripper_q, current_gripper_q - self.DELTA_GRIPPER_CMD*4, current_gripper_q + self.DELTA_GRIPPER_CMD*4) # np.clip(gripper_q, current_gripper_q - self.DELTA_GRIPPER_CMD*3, current_gripper_q + self.DELTA_GRIPPER_CMD*3)
self.z1.setGripperCmd(self.z1.gripperQ, self.z1.gripperQd, self.z1.gripperTau)
self.z1.sendRecv()
time.sleep(self.control_dt)
self.grasped = abs(self.read_current_gripper_q() - current_gripper_q) < self.DELTA_GRIPPER_CMD / 12.0
def _drive_to_waypoint(self, target_pose: np.ndarray, t_insert_time: float):
curr_time = time.monotonic() + self.control_dt
t_insert = curr_time + t_insert_time
self.pose_interp = self.pose_interp.drive_to_waypoint(
pose=target_pose,
time=t_insert,
curr_time=curr_time,
max_pos_speed=self.max_pos_speed,
)
while time.monotonic() < t_insert:
self._update_z1_arm(self.pose_interp(time.monotonic()))
def _schedule_waypoint(
self,
arm_q_target: np.ndarray,
arm_time_target: float,
t_now: float,
start_time: float,
last_waypoint_time: float,
arm_tauff_target: np.ndarray | None = None,
) -> float:
target_time = time.monotonic() - time.perf_counter() + arm_time_target
curr_time = t_now + self.control_dt
target_time = max(target_time, curr_time + self.control_dt)
self.pose_interp = self.pose_interp.schedule_waypoint(
pose=arm_q_target,
time=target_time,
max_pos_speed=self.max_pos_speed,
curr_time=curr_time,
last_waypoint_time=last_waypoint_time,
)
last_waypoint_time = target_time
self._update_z1_arm(q=self.pose_interp(t_now), tau=arm_tauff_target)
return last_waypoint_time
def _ctrl_motor_state(self):
try:
self.pose_interp = JointTrajectoryInterpolator(
times=[time.monotonic()],
joint_positions=[self.read_current_arm_q()],
)
self.go_start()
arm_q_target = self.read_current_arm_q()
arm_tauff_target = self.tauff_target
arm_time_target = time.monotonic()
arm_cmd = "schedule_waypoint"
last_waypoint_time = time.monotonic()
while True:
start_time = time.perf_counter()
t_now = time.monotonic()
with self.ctrl_lock:
arm_q_target = self.q_target
arm_tauff_target = self.tauff_target
arm_time_target = self.time_target
arm_cmd = self.arm_cmd
if arm_cmd == "drive_to_waypoint":
self._drive_to_waypoint(target_pose=arm_q_target, t_insert_time=0.8)
elif arm_cmd == "schedule_waypoint":
self._schedule_waypoint(
arm_q_target=arm_q_target,
arm_time_target=arm_time_target,
t_now=t_now,
start_time=start_time,
last_waypoint_time=last_waypoint_time,
arm_tauff_target=arm_tauff_target,
)
except Exception as e:
self.disconnect()
self.logger.error(f"❌ Error in Z1ArmController._ctrl_motor_state: {e}")
def read_current_arm_q(self) -> np.ndarray:
"""Return the current joint positions q of the arm, including the gripper."""
return np.array([self.lowstate_buffer.get_data().motor_state[id].q for id in Z1GripperArmJointIndex])
def read_current_arm_q_without_gripper(self) -> np.ndarray:
"""Return the current joint positions q of the arm, excluding the gripper."""
return np.array([self.lowstate_buffer.get_data().motor_state[id].q for id in list(Z1GripperArmJointIndex)[:-1]])
def read_current_gripper_q(self) -> np.ndarray:
"""Return the current gripper position q."""
return np.array([self.lowstate_buffer.get_data().motor_state[list(Z1GripperArmJointIndex)[-1].value].q])
def read_current_arm_dq(self) -> np.ndarray:
"""Return the current joint velocities dq of the arm, including the gripper."""
return np.array([self.lowstate_buffer.get_data().motor_state[id].dq for id in Z1GripperArmJointIndex])
def read_current_arm_dq_without_gripper(self) -> np.ndarray:
"""Return the current joint velocities dq of the arm, excluding the gripper."""
return np.array(
[self.lowstate_buffer.get_data().motor_state[id].dq for id in list(Z1GripperArmJointIndex)[:-1]]
)
def write_arm(
self,
q_target: list[float] | np.ndarray,
tauff_target: list[float] | np.ndarray | None = None,
time_target: float | None = None,
cmd_target: str | None = None,
):
"""Set control target values q & tau of the arm motors."""
with self.ctrl_lock:
self.q_target = q_target
self.tauff_target = tauff_target
self.time_target = time_target
self.arm_cmd = cmd_target
def arm_ik(self, ee_target: list[float] | np.ndarray) -> tuple[np.ndarray, np.ndarray] | None:
return self.z1_arm_ik.solve_ik(
ee_target, self.read_current_arm_q_without_gripper(), self.read_current_arm_dq_without_gripper()
)
def arm_fk(self, q: np.ndarray | None = None) -> np.ndarray | None:
return self.z1_model.forwardKinematics(
q if q is not None else self.read_current_arm_q(), len(Z1GripperArmJointIndex)
)
def go_start(self):
self._drive_to_waypoint(target_pose=self.init_pose, t_insert_time=1.0)
self.logger.success("Go Start OK!\n")
def go_home(self):
self.z1.loopOn()
self.z1.backToStart()
self.z1.loopOff()
time.sleep(0.5)
self.logger.success("Go Home OK!\n")
def disconnect(self):
self.is_connected = False
self.go_home()
def __del__(self):
if getattr(self, "is_connected", False):
self.disconnect()
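In `_update_z1_arm` above, the gripper command is rate-limited by clipping each new target to within ±3·`DELTA_GRIPPER_CMD` of the measured gripper position, so a large jump is spread over several control cycles. A standalone sketch of just that clipping (the helper name `rate_limited_cmd` is illustrative):

```python
import numpy as np

# Same constant as Z1ArmController.DELTA_GRIPPER_CMD (~0.0098 rad per step).
DELTA_GRIPPER_CMD = 5.0 / 20.0 / 25.6


def rate_limited_cmd(target_q: float, current_q: float) -> float:
    # Clip the command to within +/- 3 steps of the measured position,
    # mirroring the np.clip call in _update_z1_arm.
    step = 3 * DELTA_GRIPPER_CMD
    return float(np.clip(target_q, current_q - step, current_q + step))


current = 0.0
target = 0.5  # far beyond one step
cmd = rate_limited_cmd(target, current)
print(cmd)  # equals current + 3 * DELTA_GRIPPER_CMD
```

Calling this once per control cycle with the latest measured position walks the gripper toward the target at a bounded speed.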


@@ -0,0 +1,253 @@
import time
import casadi
import meshcat.geometry as mg
import numpy as np
import pinocchio as pin
from pinocchio import casadi as cpin
from pinocchio.visualize import MeshcatVisualizer
from unitree_deploy.utils.weighted_moving_filter import WeightedMovingFilter
class Z1_Arm_IK:
def __init__(self, unit_test=False, visualization=False):
np.set_printoptions(precision=5, suppress=True, linewidth=200)
self.unit_test = unit_test
self.visualization = visualization
self.robot = pin.RobotWrapper.BuildFromURDF(
"unitree_deploy/robot_devices/assets/z1/z1.urdf", "unitree_deploy/robot_devices/assets/z1/"
)
self.mixed_jointsToLockIDs = ["base_static_joint"]
self.reduced_robot = self.robot.buildReducedRobot(
list_of_joints_to_lock=self.mixed_jointsToLockIDs,
reference_configuration=np.array([0.0] * self.robot.model.nq),
)
self.reduced_robot.model.addFrame(
pin.Frame(
"ee",
self.reduced_robot.model.getJointId("joint6"),
pin.SE3(np.eye(3), np.array([0.15, 0, 0]).T),
pin.FrameType.OP_FRAME,
)
)
# for i in range(self.reduced_robot.model.nframes):
# frame = self.reduced_robot.model.frames[i]
# frame_id = self.reduced_robot.model.getFrameId(frame.name)
# print(f"Frame ID: {frame_id}, Name: {frame.name}")
# Creating Casadi models and data for symbolic computing
self.cmodel = cpin.Model(self.reduced_robot.model)
self.cdata = self.cmodel.createData()
self.cq = casadi.SX.sym("q", self.reduced_robot.model.nq, 1)
self.cTf = casadi.SX.sym("tf", 4, 4)
cpin.framesForwardKinematics(self.cmodel, self.cdata, self.cq)
self.EE_ID = self.reduced_robot.model.getFrameId("link06")
self.translational_error = casadi.Function(
"translational_error",
[self.cq, self.cTf],
[
casadi.vertcat(
self.cdata.oMf[self.EE_ID].translation - self.cTf[:3, 3],
)
],
)
self.rotational_error = casadi.Function(
"rotational_error",
[self.cq, self.cTf],
[
casadi.vertcat(
cpin.log3(self.cdata.oMf[self.EE_ID].rotation @ self.cTf[:3, :3].T),
)
],
)
self.opti = casadi.Opti()
self.var_q = self.opti.variable(self.reduced_robot.model.nq)
self.var_q_last = self.opti.parameter(self.reduced_robot.model.nq) # for smooth
self.param_tf = self.opti.parameter(4, 4)
self.translational_cost = casadi.sumsqr(self.translational_error(self.var_q, self.param_tf))
self.rotation_cost = casadi.sumsqr(self.rotational_error(self.var_q, self.param_tf))
self.regularization_cost = casadi.sumsqr(self.var_q)
self.smooth_cost = casadi.sumsqr(self.var_q - self.var_q_last)
# Setting optimization constraints and goals
self.opti.subject_to(
self.opti.bounded(
self.reduced_robot.model.lowerPositionLimit,
self.var_q,
self.reduced_robot.model.upperPositionLimit,
)
)
self.opti.minimize(
50 * self.translational_cost
+ self.rotation_cost
+ 0.02 * self.regularization_cost
+ 0.1 * self.smooth_cost
)
# self.opti.minimize(20 * self.cost + self.regularization_cost)
opts = {
"ipopt": {"print_level": 0, "max_iter": 50, "tol": 1e-6},
"print_time": False, # print or not
"calc_lam_p": False, # https://github.com/casadi/casadi/wiki/FAQ:-Why-am-I-getting-%22NaN-detected%22in-my-optimization%3F
}
self.opti.solver("ipopt", opts)
self.init_data = np.zeros(self.reduced_robot.model.nq)
self.smooth_filter = WeightedMovingFilter(np.array([0.4, 0.3, 0.2, 0.1]), 6)
self.vis = None
if self.visualization:
# Initialize the Meshcat visualizer for visualization
self.vis = MeshcatVisualizer(
self.reduced_robot.model, self.reduced_robot.collision_model, self.reduced_robot.visual_model
)
self.vis.initViewer(open=True)
self.vis.loadViewerModel("pinocchio")
self.vis.displayFrames(True, frame_ids=[101, 102], axis_length=0.15, axis_width=5)
self.vis.display(pin.neutral(self.reduced_robot.model))
# Enable the display of end effector target frames with short axis lengths and greater width.
frame_viz_names = ["ee_target"]
frame_axis_positions = (
np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]])
.astype(np.float32)
.T
)
frame_axis_colors = (
np.array([[1, 0, 0], [1, 0.6, 0], [0, 1, 0], [0.6, 1, 0], [0, 0, 1], [0, 0.6, 1]])
.astype(np.float32)
.T
)
axis_length = 0.1
axis_width = 10
for frame_viz_name in frame_viz_names:
self.vis.viewer[frame_viz_name].set_object(
mg.LineSegments(
mg.PointsGeometry(
position=axis_length * frame_axis_positions,
color=frame_axis_colors,
),
mg.LineBasicMaterial(
linewidth=axis_width,
vertexColors=True,
),
)
)
def solve_ik(self, wrist, current_lr_arm_motor_q=None, current_lr_arm_motor_dq=None):
if current_lr_arm_motor_q is not None:
self.init_data = current_lr_arm_motor_q
self.opti.set_initial(self.var_q, self.init_data)
# left_wrist, right_wrist = self.scale_arms(left_wrist, right_wrist)
if self.visualization:
self.vis.viewer["ee_target"].set_transform(wrist) # for visualization
self.opti.set_value(self.param_tf, wrist)
self.opti.set_value(self.var_q_last, self.init_data) # for smooth
try:
self.opti.solve()
# sol = self.opti.solve_limited()
sol_q = self.opti.value(self.var_q)
self.smooth_filter.add_data(sol_q)
sol_q = self.smooth_filter.filtered_data
v = current_lr_arm_motor_dq * 0.0 if current_lr_arm_motor_dq is not None else (sol_q - self.init_data) * 0.0
self.init_data = sol_q
sol_tauff = pin.rnea(
self.reduced_robot.model,
self.reduced_robot.data,
sol_q,
v,
np.zeros(self.reduced_robot.model.nv),
)
if self.visualization:
self.vis.display(sol_q) # for visualization
return sol_q, sol_tauff
except Exception as e:
print(f"ERROR in convergence, dumping debug info: {e}")
sol_q = self.opti.debug.value(self.var_q)
self.smooth_filter.add_data(sol_q)
sol_q = self.smooth_filter.filtered_data
v = current_lr_arm_motor_dq * 0.0 if current_lr_arm_motor_dq is not None else (sol_q - self.init_data) * 0.0
self.init_data = sol_q
sol_tauff = pin.rnea(
self.reduced_robot.model,
self.reduced_robot.data,
sol_q,
v,
np.zeros(self.reduced_robot.model.nv),
)
if self.visualization:
self.vis.display(sol_q) # for visualization
# return sol_q, sol_tauff
return current_lr_arm_motor_q, np.zeros(self.reduced_robot.model.nv)
if __name__ == "__main__":
arm_ik = Z1_Arm_IK(unit_test=True, visualization=True)
# initial position
L_tf_target = pin.SE3(
pin.Quaternion(1, 0, 0, 0),
np.array([0.25, 0, 0.2]),
)
rotation_speed = 0.02
noise_amplitude_translation = 0.002
noise_amplitude_rotation = 0.1
user_input = input("Please enter the start signal (enter 's' to start the subsequent program):\n")
if user_input.lower() == "s":
step = 0
while True:
# Apply rotation noise with bias towards y and z axes
rotation_noise_l = pin.Quaternion(
np.cos(np.random.normal(0, noise_amplitude_rotation) / 2),
0,
np.random.normal(0, noise_amplitude_rotation / 2),
0,
).normalized() # y bias
if step <= 120:
angle = rotation_speed * step
L_tf_target.rotation = (
rotation_noise_l * pin.Quaternion(np.cos(angle / 2), 0, np.sin(angle / 2), 0)
).toRotationMatrix() # y axis
L_tf_target.translation += np.array([0.001, 0.001, 0.001]) + np.random.normal(
0, noise_amplitude_translation, 3
)
else:
angle = rotation_speed * (240 - step)
L_tf_target.rotation = (
rotation_noise_l * pin.Quaternion(np.cos(angle / 2), 0, np.sin(angle / 2), 0)
).toRotationMatrix() # y axis
L_tf_target.translation -= np.array([0.001, 0.001, 0.001]) + np.random.normal(
0, noise_amplitude_translation, 3
)
sol_q, _ = arm_ik.solve_ik(L_tf_target.homogeneous)
step += 1
if step > 240:
step = 0
time.sleep(0.01)


@@ -0,0 +1,347 @@
import os
import sys
import threading
import time
from typing import Callable
import numpy as np
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
import unitree_arm_interface as unitree_z1 # type: ignore
from unitree_deploy.robot_devices.arm.arm_indexs import Z1_12_JointArmIndex
from unitree_deploy.robot_devices.arm.configs import Z1DualArmConfig
from unitree_deploy.robot_devices.arm.z1_arm_ik import Z1_Arm_IK
from unitree_deploy.robot_devices.robots_devices_utils import (
DataBuffer,
MotorState,
Robot_Num_Motors,
RobotDeviceAlreadyConnectedError,
)
from unitree_deploy.utils.joint_trajcetory_inter import JointTrajectoryInterpolator
from unitree_deploy.utils.rich_logger import log_error, log_info, log_success, log_warning
class Z1_12_LowState:
def __init__(self):
self.motor_state = [MotorState() for _ in range(Robot_Num_Motors.Z1_12_Num_Motors)]
class Z1_12_ArmController:
def __init__(self, config: Z1DualArmConfig):
log_info("Initialize Z1_12_ArmController...")
self.left_robot_ip = config.left_robot_ip
self.left_robot_port1 = config.left_robot_port1
self.left_robot_port2 = config.left_robot_port2
self.right_robot_ip = config.right_robot_ip
self.right_robot_port1 = config.right_robot_port1
self.right_robot_port2 = config.right_robot_port2
self.robot_kp = config.robot_kp
self.robot_kd = config.robot_kd
self.max_pos_speed = config.max_pos_speed
self.init_pose_left = np.array(config.init_pose_left)
self.init_pose_right = np.array(config.init_pose_right)
self.init_pose = np.concatenate((self.init_pose_left, self.init_pose_right), axis=0)
self.unit_test = config.unit_test
self.control_dt = config.control_dt
self.motors = config.motors
self.mock = config.mock
self.q_target = np.concatenate((self.init_pose_left, self.init_pose_right))
self.tauff_target = np.zeros(len(Z1_12_JointArmIndex), dtype=np.float64)
self.dq_target = np.zeros(len(Z1_12_JointArmIndex) // 2, dtype=np.float64)
self.ddq_target = np.zeros(len(Z1_12_JointArmIndex) // 2, dtype=np.float64)
self.ftip_target = np.zeros(len(Z1_12_JointArmIndex) // 2, dtype=np.float64)
self.time_target = time.monotonic()
self.arm_cmd = "schedule_waypoint"
self.ctrl_lock = threading.Lock()
self.lowstate_buffer = DataBuffer()
self.stop_event = threading.Event()
self.z1_left_arm_ik = Z1_Arm_IK(unit_test=self.unit_test, visualization=False)
self.z1_right_arm_ik = Z1_Arm_IK(unit_test=self.unit_test, visualization=False)
self.arm_indices_len = len(Z1_12_JointArmIndex) // 2
self.is_connected = False
@property
def motor_names(self) -> list[str]:
return list(self.motors.keys())
@property
def motor_models(self) -> list[str]:
return [model for _, model in self.motors.values()]
@property
def motor_indices(self) -> list[int]:
return [idx for idx, _ in self.motors.values()]
def _start_daemon_thread(self, target_fn: Callable[[], None], name: str | None = None) -> threading.Thread:
thread = threading.Thread(target=target_fn, name=name)
thread.daemon = True
thread.start()
return thread
def initialize_arm(self, ip: str, port1: int, port2: int, name: str):
"""Create a Z1 arm interface and put it into low-level command mode."""
arm = unitree_z1.ArmInterface(ip, port1, port2)
arm_model = arm._ctrlComp.armModel
arm.setFsmLowcmd()
return arm, arm_model
def set_control_gains(self, kp, kd):
"""Set the kp/kd control gains on both arms."""
for arm in [self.z1_left, self.z1_right]:
arm.lowcmd.setControlGain(kp, kd)
arm.sendRecv()
def connect(self):
try:
if self.is_connected:
raise RobotDeviceAlreadyConnectedError(
"Z1_Dual_Arm is already connected. Do not run `robot.connect()` twice."
)
# Initialize arms
self.z1_left, self.z1_left_model = self.initialize_arm(
self.left_robot_ip, self.left_robot_port1, self.left_robot_port2, "left"
)
self.z1_right, self.z1_right_model = self.initialize_arm(
self.right_robot_ip, self.right_robot_port1, self.right_robot_port2, "right"
)
# Set control gains
self.set_control_gains(self.robot_kp, self.robot_kd)
# initialize subscribe thread
self.subscribe_thread = self._start_daemon_thread(self._subscribe_motor_state, name="z1._subscribe_motor_state")
while not self.lowstate_buffer.get_data():
time.sleep(0.01)
log_warning("[Z1_12_ArmController] Waiting for low-state data...")
self.publish_thread = self._start_daemon_thread(self._ctrl_motor_state, name="z1_dual._ctrl_motor_state")
self.is_connected = True
except Exception as e:
self.disconnect()
log_error(f"❌ Error in Z1_12_ArmController.connect: {e}")
def _subscribe_motor_state(self):
while True:
msg = {
"q": np.concatenate([self.z1_left.lowstate.getQ(), self.z1_right.lowstate.getQ()], axis=0),
"dq": np.concatenate([self.z1_left.lowstate.getQd(), self.z1_right.lowstate.getQd()], axis=0),
}
if msg is not None:
lowstate = Z1_12_LowState()
for id in range(Robot_Num_Motors.Z1_12_Num_Motors):
lowstate.motor_state[id].q = msg["q"][id]
lowstate.motor_state[id].dq = msg["dq"][id]
self.lowstate_buffer.set_data(lowstate)
time.sleep(self.control_dt)
def _update_z1_arm(
self,
arm,
arm_model,
q: np.ndarray,
qd: np.ndarray | None = None,
qdd: np.ndarray | None = None,
ftip: np.ndarray | None = None,
tau: np.ndarray | None = None,
):
"""Update the state and command of a given robotic arm."""
arm.q = q
arm.qd = self.dq_target if qd is None else qd
qdd = self.ddq_target if qdd is None else qdd
ftip = self.ftip_target if ftip is None else ftip
arm.tau = arm_model.inverseDynamics(arm.q, arm.qd, qdd, ftip) if tau is None else tau
arm.setArmCmd(arm.q, arm.qd, arm.tau)
arm.sendRecv()
def _drive_to_waypoint(self, target_pose: np.ndarray, t_insert_time: float):
curr_time = time.monotonic() + self.control_dt
t_insert = curr_time + t_insert_time
self.pose_interp = self.pose_interp.drive_to_waypoint(
pose=target_pose,
time=t_insert,
curr_time=curr_time,
max_pos_speed=self.max_pos_speed,
)
while time.monotonic() < t_insert:
self._update_z1_arm(
self.z1_left, self.z1_left_model, self.pose_interp(time.monotonic())[: self.arm_indices_len]
)
self._update_z1_arm(
self.z1_right, self.z1_right_model, self.pose_interp(time.monotonic())[self.arm_indices_len :]
)
time.sleep(self.control_dt)
def _schedule_waypoint(
self,
arm_q_target: np.ndarray,
arm_time_target: float,
t_now: float,
start_time: float,
last_waypoint_time: float,
arm_tauff_target: np.ndarray | None = None,
) -> float:
target_time = time.monotonic() - time.perf_counter() + arm_time_target
curr_time = t_now + self.control_dt
target_time = max(target_time, curr_time + self.control_dt)
self.pose_interp = self.pose_interp.schedule_waypoint(
pose=arm_q_target,
time=target_time,
max_pos_speed=self.max_pos_speed,
curr_time=curr_time,
last_waypoint_time=last_waypoint_time,
)
last_waypoint_time = target_time
self._update_z1_arm(
arm=self.z1_left,
arm_model=self.z1_left_model,
q=self.pose_interp(t_now)[: self.arm_indices_len],
tau=arm_tauff_target[: self.arm_indices_len] if arm_tauff_target is not None else arm_tauff_target,
)
self._update_z1_arm(
arm=self.z1_right,
arm_model=self.z1_right_model,
q=self.pose_interp(t_now)[self.arm_indices_len :],
tau=arm_tauff_target[self.arm_indices_len :] if arm_tauff_target is not None else arm_tauff_target,
)
time.sleep(max(0, self.control_dt - (time.perf_counter() - start_time)))
return last_waypoint_time
def _ctrl_motor_state(self):
try:
self.pose_interp = JointTrajectoryInterpolator(
times=[time.monotonic()],
joint_positions=[self.read_current_arm_q()],
)
self.go_start()
arm_q_target = self.read_current_arm_q()
arm_tauff_target = self.tauff_target
arm_time_target = time.monotonic()
arm_cmd = "schedule_waypoint"
last_waypoint_time = time.monotonic()
while True:
start_time = time.perf_counter()
t_now = time.monotonic()
with self.ctrl_lock:
arm_q_target = self.q_target
arm_tauff_target = self.tauff_target
arm_time_target = self.time_target
arm_cmd = self.arm_cmd
if arm_cmd == "drive_to_waypoint":
self._drive_to_waypoint(target_pose=arm_q_target, t_insert_time=0.8)
elif arm_cmd == "schedule_waypoint":
self._schedule_waypoint(
arm_q_target=arm_q_target,
arm_time_target=arm_time_target,
t_now=t_now,
start_time=start_time,
last_waypoint_time=last_waypoint_time,
arm_tauff_target=arm_tauff_target,
)
except Exception as e:
self.disconnect()
log_error(f"❌ Error in Z1_12_ArmController._ctrl_motor_state: {e}")
def write_arm(
self,
q_target: list[float] | np.ndarray,
tauff_target: list[float] | np.ndarray | None = None,
time_target: float | None = None,
cmd_target: str | None = None,
):
"""Set control target values q & tau of the left and right arm motors."""
with self.ctrl_lock:
self.q_target = q_target
self.tauff_target = tauff_target
self.time_target = time_target
self.arm_cmd = cmd_target
def read_current_arm_q(self) -> np.ndarray | None:
"""Return current state q of the left and right arm motors."""
return np.array([self.lowstate_buffer.get_data().motor_state[id].q for id in Z1_12_JointArmIndex])
def read_current_arm_dq(self) -> np.ndarray | None:
"""Return current state dq of the left and right arm motors."""
return np.array([self.lowstate_buffer.get_data().motor_state[id].dq for id in Z1_12_JointArmIndex])
def arm_ik(self, l_tf_target, r_tf_target) -> np.ndarray | None:
current_lr_arm_q = self.read_current_arm_q()
current_lr_arm_dq = self.read_current_arm_dq()
left_sol_q, left_sol_tauff = self.z1_left_arm_ik.solve_ik(
l_tf_target,
current_lr_arm_q[: self.arm_indices_len],
current_lr_arm_dq[: self.arm_indices_len],
)
right_sol_q, right_sol_tauff = self.z1_right_arm_ik.solve_ik(
r_tf_target,
current_lr_arm_q[self.arm_indices_len :],
current_lr_arm_dq[self.arm_indices_len :],
)
sol_q = np.concatenate([left_sol_q, right_sol_q], axis=0)
sol_tauff = np.concatenate([left_sol_tauff, right_sol_tauff], axis=0)
return sol_q, sol_tauff
def arm_fk(self, left_q: np.ndarray | None = None, right_q: np.ndarray | None = None) -> np.ndarray | None:
left = self.z1_left_arm_ik.solve_fk(
left_q if left_q is not None else self.read_current_arm_q()[: self.arm_indices_len]
)
right = self.z1_right_arm_ik.solve_fk(
right_q if right_q is not None else self.read_current_arm_q()[self.arm_indices_len :]
)
return left, right
def go_start(self):
self._drive_to_waypoint(target_pose=self.init_pose, t_insert_time=1.0)
log_success("Go Start OK!\n")
def go_home(self):
if self.mock:
self.stop_event.set()
# self.subscribe_thread.join()
# self.publish_thread.join()
time.sleep(1)
else:
self.is_connected = False
self.z1_right.loopOn()
self.z1_right.backToStart()
self.z1_right.loopOff()
self.z1_left.loopOn()
self.z1_left.backToStart()
self.z1_left.loopOff()
time.sleep(0.5)
log_success("Go Home OK!\n")
def disconnect(self):
self.is_connected = False
self.go_home()
def __del__(self):
if getattr(self, "is_connected", False):
self.disconnect()


@@ -0,0 +1,2 @@
*.gv
*.pdf


@@ -0,0 +1,33 @@
# Unitree G1 Description (URDF & MJCF)
## Overview
This package includes a universal humanoid robot description (URDF & MJCF) for the [Unitree G1](https://www.unitree.com/g1/), developed by [Unitree Robotics](https://www.unitree.com/).
MJCF/URDF for the G1 robot:
| MJCF/URDF file name | `mode_machine` | Hip roll reduction ratio | Update status | dof#leg | dof#waist | dof#arm | dof#hand |
| ----------------------------- | :------------: | :----------------------: | ------------- | :-----: | :-------: | :-----: | :------: |
| `g1_23dof` | 1 | 14.5 | Beta | 6*2 | 1 | 5*2 | 0 |
| `g1_29dof` | 2 | 14.5 | Beta | 6*2 | 3 | 7*2 | 0 |
| `g1_29dof_with_hand` | 2 | 14.5 | Beta | 6*2 | 3 | 7*2 | 7*2 |
| `g1_29dof_lock_waist` | 3 | 14.5 | Beta | 6*2 | 1 | 7*2 | 0 |
| `g1_23dof_rev_1_0` | 4 | 22.5 | Up-to-date | 6*2 | 1 | 5*2 | 0 |
| `g1_29dof_rev_1_0` | 5 | 22.5 | Up-to-date | 6*2 | 3 | 7*2 | 0 |
| `g1_29dof_with_hand_rev_1_0` | 5 | 22.5 | Up-to-date | 6*2 | 3 | 7*2 | 7*2 |
| `g1_29dof_lock_waist_rev_1_0` | 6 | 22.5 | Up-to-date | 6*2 | 1 | 7*2 | 0 |
| `g1_dual_arm` | 9 | null | Up-to-date | 0 | 0 | 7*2 | 0 |
## Visualization with [MuJoCo](https://github.com/google-deepmind/mujoco)
1. Open MuJoCo Viewer
```bash
pip install mujoco
python -m mujoco.viewer
```
2. Drag and drop the MJCF/URDF model file (`g1_XXX.xml`/`g1_XXX.urdf`) onto the MuJoCo Viewer window.
## Note for teleoperation
`g1_body29_hand14` is modified from [g1_29dof_with_hand_rev_1_0](https://github.com/unitreerobotics/unitree_ros/blob/master/robots/g1_description/g1_29dof_with_hand_rev_1_0.urdf).
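As a quick consistency check on the table, the leg/waist/arm/hand columns of the base models sum to the DOF number in the model name, and the `with_hand` variant adds the 7*2 hand joints on top of the 29 body DOF (a standalone sketch; the `lock_waist` variants keep the 29-DOF name while exposing only one waist joint, so they are excluded here):

```python
# Cross-check DOF columns against the model-name suffix for the rows
# where they are expected to match (values transcribed from the table).
rows = {
    "g1_23dof": (6 * 2, 1, 5 * 2, 0, 23),
    "g1_29dof": (6 * 2, 3, 7 * 2, 0, 29),
    "g1_29dof_with_hand": (6 * 2, 3, 7 * 2, 7 * 2, 29 + 14),
}
for name, (leg, waist, arm, hand, expected) in rows.items():
    assert leg + waist + arm + hand == expected, name
print("DOF columns consistent with model names")
```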


@@ -0,0 +1,381 @@
<mujoco model="g1_29dof_rev_1_0">
<!-- <compiler angle="radian" meshdir="assets"/> -->
<compiler angle="radian" meshdir="meshes"/>
<option integrator="implicitfast"/>
<default>
<default class="g1">
<site rgba="1 0 0 1" size="0.01" group="5"/>
<joint armature="0.01" frictionloss="0.3"/>
<position kp="500" dampratio="1" inheritrange="1"/>
<default class="visual">
<geom group="2" type="mesh" contype="0" conaffinity="0" density="0" material="metal"/>
</default>
<default class="collision">
<geom group="3" type="mesh"/>
<default class="foot">
<geom type="sphere" size="0.005" priority="1" friction="0.6" condim="3"/>
</default>
</default>
</default>
</default>
<asset>
<material name="black" rgba="0.2 0.2 0.2 1"/>
<material name="metal" rgba="0.7 0.7 0.7 1"/>
<mesh file="pelvis.STL"/>
<mesh file="pelvis_contour_link.STL"/>
<mesh file="left_hip_pitch_link.STL"/>
<mesh file="left_hip_roll_link.STL"/>
<mesh file="left_hip_yaw_link.STL"/>
<mesh file="left_knee_link.STL"/>
<mesh file="left_ankle_pitch_link.STL"/>
<mesh file="left_ankle_roll_link.STL"/>
<mesh file="right_hip_pitch_link.STL"/>
<mesh file="right_hip_roll_link.STL"/>
<mesh file="right_hip_yaw_link.STL"/>
<mesh file="right_knee_link.STL"/>
<mesh file="right_ankle_pitch_link.STL"/>
<mesh file="right_ankle_roll_link.STL"/>
<mesh name="waist_yaw_link" file="waist_yaw_link_rev_1_0.STL"/>
<mesh name="waist_roll_link" file="waist_roll_link_rev_1_0.STL"/>
<mesh name="torso_link" file="torso_link_rev_1_0.STL"/>
<mesh file="logo_link.STL"/>
<mesh file="head_link.STL"/>
<mesh file="left_shoulder_pitch_link.STL"/>
<mesh file="left_shoulder_roll_link.STL"/>
<mesh file="left_shoulder_yaw_link.STL"/>
<mesh file="left_elbow_link.STL"/>
<mesh file="left_wrist_roll_link.STL"/>
<mesh file="left_wrist_pitch_link.STL"/>
<mesh file="left_wrist_yaw_link.STL"/>
<mesh file="left_rubber_hand.STL"/>
<mesh file="right_shoulder_pitch_link.STL"/>
<mesh file="right_shoulder_roll_link.STL"/>
<mesh file="right_shoulder_yaw_link.STL"/>
<mesh file="right_elbow_link.STL"/>
<mesh file="right_wrist_roll_link.STL"/>
<mesh file="right_wrist_pitch_link.STL"/>
<mesh file="right_wrist_yaw_link.STL"/>
<mesh file="right_rubber_hand.STL"/>
</asset>
<worldbody>
<light pos="1 0 3.5" dir="0 0 -1" directional="true"/>
<body name="pelvis" pos="0 0 0.793" childclass="g1">
<inertial pos="0 0 -0.07605" quat="1 0 -0.000399148 0" mass="3.813" diaginertia="0.010549 0.0093089 0.0079184"/>
<freejoint name="floating_base_joint"/>
<geom class="visual" material="black" mesh="pelvis"/>
<geom class="visual" mesh="pelvis_contour_link"/>
<geom class="collision" mesh="pelvis_contour_link"/>
<site name="imu_in_pelvis" pos="0.04525 0 -0.08339" size="0.01"/>
<body name="left_hip_pitch_link" pos="0 0.064452 -0.1027">
<inertial pos="0.002741 0.047791 -0.02606" quat="0.954862 0.293964 0.0302556 0.030122" mass="1.35"
diaginertia="0.00181517 0.00153422 0.00116212"/>
<joint name="left_hip_pitch_joint" axis="0 1 0" range="-2.5307 2.8798" actuatorfrcrange="-88 88"/>
<geom class="visual" material="black" mesh="left_hip_pitch_link"/>
<geom class="collision" material="black" mesh="left_hip_pitch_link"/>
<body name="left_hip_roll_link" pos="0 0.052 -0.030465" quat="0.996179 0 -0.0873386 0">
<inertial pos="0.029812 -0.001045 -0.087934" quat="0.977808 -1.97119e-05 0.205576 -0.0403793" mass="1.52"
diaginertia="0.00254986 0.00241169 0.00148755"/>
<joint name="left_hip_roll_joint" axis="1 0 0" range="-0.5236 2.9671" actuatorfrcrange="-139 139"/>
<geom class="visual" mesh="left_hip_roll_link"/>
<geom class="collision" mesh="left_hip_roll_link"/>
<body name="left_hip_yaw_link" pos="0.025001 0 -0.12412">
<inertial pos="-0.057709 -0.010981 -0.15078" quat="0.600598 0.15832 0.223482 0.751181" mass="1.702"
diaginertia="0.00776166 0.00717575 0.00160139"/>
<joint name="left_hip_yaw_joint" axis="0 0 1" range="-2.7576 2.7576" actuatorfrcrange="-88 88"/>
<geom class="visual" mesh="left_hip_yaw_link"/>
<geom class="collision" mesh="left_hip_yaw_link"/>
<body name="left_knee_link" pos="-0.078273 0.0021489 -0.17734" quat="0.996179 0 0.0873386 0">
<inertial pos="0.005457 0.003964 -0.12074" quat="0.923418 -0.0327699 0.0158246 0.382067" mass="1.932"
diaginertia="0.0113804 0.0112778 0.00146458"/>
<joint name="left_knee_joint" axis="0 1 0" range="-0.087267 2.8798" actuatorfrcrange="-139 139"/>
<geom class="visual" mesh="left_knee_link"/>
<geom class="collision" mesh="left_knee_link"/>
<body name="left_ankle_pitch_link" pos="0 -9.4445e-05 -0.30001">
<inertial pos="-0.007269 0 0.011137" quat="0.603053 0.369225 0.369225 0.603053" mass="0.074"
diaginertia="1.89e-05 1.40805e-05 6.9195e-06"/>
<joint name="left_ankle_pitch_joint" axis="0 1 0" range="-0.87267 0.5236" actuatorfrcrange="-50 50"/>
<geom class="visual" mesh="left_ankle_pitch_link"/>
<geom class="collision" mesh="left_ankle_pitch_link"/>
<body name="left_ankle_roll_link" pos="0 0 -0.017558">
<inertial pos="0.026505 0 -0.016425" quat="-0.000481092 0.728482 -0.000618967 0.685065" mass="0.608"
diaginertia="0.00167218 0.0016161 0.000217621"/>
<joint name="left_ankle_roll_joint" axis="1 0 0" range="-0.2618 0.2618" actuatorfrcrange="-50 50"/>
<geom class="visual" material="black" mesh="left_ankle_roll_link"/>
<geom class="foot" pos="-0.05 0.025 -0.03"/>
<geom class="foot" pos="-0.05 -0.025 -0.03"/>
<geom class="foot" pos="0.12 0.03 -0.03"/>
<geom class="foot" pos="0.12 -0.03 -0.03"/>
<site name="left_foot"/>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="right_hip_pitch_link" pos="0 -0.064452 -0.1027">
<inertial pos="0.002741 -0.047791 -0.02606" quat="0.954862 -0.293964 0.0302556 -0.030122" mass="1.35"
diaginertia="0.00181517 0.00153422 0.00116212"/>
<joint name="right_hip_pitch_joint" axis="0 1 0" range="-2.5307 2.8798" actuatorfrcrange="-88 88"/>
<geom class="visual" material="black" mesh="right_hip_pitch_link"/>
<geom class="collision" material="black" mesh="right_hip_pitch_link"/>
<body name="right_hip_roll_link" pos="0 -0.052 -0.030465" quat="0.996179 0 -0.0873386 0">
<inertial pos="0.029812 0.001045 -0.087934" quat="0.977808 1.97119e-05 0.205576 0.0403793" mass="1.52"
diaginertia="0.00254986 0.00241169 0.00148755"/>
<joint name="right_hip_roll_joint" axis="1 0 0" range="-2.9671 0.5236" actuatorfrcrange="-139 139"/>
<geom class="visual" mesh="right_hip_roll_link"/>
<geom class="collision" mesh="right_hip_roll_link"/>
<body name="right_hip_yaw_link" pos="0.025001 0 -0.12412">
<inertial pos="-0.057709 0.010981 -0.15078" quat="0.751181 0.223482 0.15832 0.600598" mass="1.702"
diaginertia="0.00776166 0.00717575 0.00160139"/>
<joint name="right_hip_yaw_joint" axis="0 0 1" range="-2.7576 2.7576" actuatorfrcrange="-88 88"/>
<geom class="visual" mesh="right_hip_yaw_link"/>
<geom class="collision" mesh="right_hip_yaw_link"/>
<body name="right_knee_link" pos="-0.078273 -0.0021489 -0.17734" quat="0.996179 0 0.0873386 0">
<inertial pos="0.005457 -0.003964 -0.12074" quat="0.923439 0.0345276 0.0116333 -0.382012" mass="1.932"
diaginertia="0.011374 0.0112843 0.00146452"/>
<joint name="right_knee_joint" axis="0 1 0" range="-0.087267 2.8798" actuatorfrcrange="-139 139"/>
<geom class="visual" mesh="right_knee_link"/>
<geom class="collision" mesh="right_knee_link"/>
<body name="right_ankle_pitch_link" pos="0 9.4445e-05 -0.30001">
<inertial pos="-0.007269 0 0.011137" quat="0.603053 0.369225 0.369225 0.603053" mass="0.074"
diaginertia="1.89e-05 1.40805e-05 6.9195e-06"/>
<joint name="right_ankle_pitch_joint" axis="0 1 0" range="-0.87267 0.5236" actuatorfrcrange="-50 50"/>
<geom class="visual" mesh="right_ankle_pitch_link"/>
<geom class="collision" mesh="right_ankle_pitch_link"/>
<body name="right_ankle_roll_link" pos="0 0 -0.017558">
<inertial pos="0.026505 0 -0.016425" quat="0.000481092 0.728482 0.000618967 0.685065" mass="0.608"
diaginertia="0.00167218 0.0016161 0.000217621"/>
<joint name="right_ankle_roll_joint" axis="1 0 0" range="-0.2618 0.2618" actuatorfrcrange="-50 50"/>
<geom class="visual" material="black" mesh="right_ankle_roll_link"/>
<geom class="foot" pos="-0.05 0.025 -0.03"/>
<geom class="foot" pos="-0.05 -0.025 -0.03"/>
<geom class="foot" pos="0.12 0.03 -0.03"/>
<geom class="foot" pos="0.12 -0.03 -0.03"/>
<site name="right_foot"/>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="waist_yaw_link">
<inertial pos="0.003494 0.000233 0.018034" quat="0.289697 0.591001 -0.337795 0.672821" mass="0.214"
diaginertia="0.000163531 0.000107714 0.000102205"/>
<joint name="waist_yaw_joint" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-88 88"/>
<geom class="visual" mesh="waist_yaw_link"/>
<body name="waist_roll_link" pos="-0.0039635 0 0.044">
<inertial pos="0 2.3e-05 0" quat="0.5 0.5 -0.5 0.5" mass="0.086" diaginertia="8.245e-06 7.079e-06 6.339e-06"/>
<joint name="waist_roll_joint" axis="1 0 0" range="-0.52 0.52" actuatorfrcrange="-50 50"/>
<geom class="visual" mesh="waist_roll_link"/>
<body name="torso_link">
<inertial pos="0.00203158 0.000339683 0.184568" quat="0.999803 -6.03319e-05 0.0198256 0.00131986"
mass="7.818" diaginertia="0.121847 0.109825 0.0273735"/>
<joint name="waist_pitch_joint" axis="0 1 0" range="-0.52 0.52" actuatorfrcrange="-50 50"/>
<geom class="visual" mesh="torso_link"/>
<geom class="collision" mesh="torso_link"/>
<geom pos="0.0039635 0 -0.044" quat="1 0 0 0" class="visual" material="black" mesh="logo_link"/>
<geom pos="0.0039635 0 -0.044" quat="1 0 0 0" class="collision" material="black" mesh="logo_link"/>
<geom pos="0.0039635 0 -0.044" quat="1 0 0 0" class="visual" material="black" mesh="head_link"/>
<geom pos="0.0039635 0 -0.044" quat="1 0 0 0" class="collision" material="black" mesh="head_link"/>
<site name="imu_in_torso" pos="-0.03959 -0.00224 0.14792" size="0.01"/>
<body name="left_shoulder_pitch_link" pos="0.0039563 0.10022 0.24778"
quat="0.990264 0.139201 1.38722e-05 -9.86868e-05">
<inertial pos="0 0.035892 -0.011628" quat="0.654152 0.0130458 -0.326267 0.68225" mass="0.718"
diaginertia="0.000465864 0.000432842 0.000406394"/>
<joint name="left_shoulder_pitch_joint" axis="0 1 0" range="-3.0892 2.6704" actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="left_shoulder_pitch_link"/>
<geom size="0.03 0.025" pos="0 0.04 -0.01" quat="0.707107 0 0.707107 0" type="cylinder"
rgba="0.7 0.7 0.7 1" class="collision"/>
<body name="left_shoulder_roll_link" pos="0 0.038 -0.013831" quat="0.990268 -0.139172 0 0">
<inertial pos="-0.000227 0.00727 -0.063243" quat="0.701256 -0.0196223 -0.00710317 0.712604" mass="0.643"
diaginertia="0.000691311 0.000618011 0.000388977"/>
<joint name="left_shoulder_roll_joint" axis="1 0 0" range="-1.5882 2.2515" actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="left_shoulder_roll_link"/>
<geom size="0.03 0.015" pos="-0.004 0.006 -0.053" type="cylinder" rgba="0.7 0.7 0.7 1" class="collision"/>
<body name="left_shoulder_yaw_link" pos="0 0.00624 -0.1032">
<inertial pos="0.010773 -0.002949 -0.072009" quat="0.716879 -0.0964829 -0.0679942 0.687134"
mass="0.734" diaginertia="0.00106187 0.00103217 0.000400661"/>
<joint name="left_shoulder_yaw_joint" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="left_shoulder_yaw_link"/>
<geom class="collision" mesh="left_shoulder_yaw_link"/>
<body name="left_elbow_link" pos="0.015783 0 -0.080518">
<inertial pos="0.064956 0.004454 -0.010062" quat="0.541765 0.636132 0.388821 0.388129" mass="0.6"
diaginertia="0.000443035 0.000421612 0.000259353"/>
<joint name="left_elbow_joint" axis="0 1 0" range="-1.0472 2.0944" actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="left_elbow_link"/>
<geom class="collision" mesh="left_elbow_link"/>
<body name="left_wrist_roll_link" pos="0.1 0.00188791 -0.01">
<inertial pos="0.0171394 0.000537591 4.8864e-07" quat="0.575338 0.411667 -0.574906 0.411094"
mass="0.085445" diaginertia="5.48211e-05 4.96646e-05 3.57798e-05"/>
<joint name="left_wrist_roll_joint" axis="1 0 0" range="-1.97222 1.97222"
actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="left_wrist_roll_link"/>
<geom class="collision" mesh="left_wrist_roll_link"/>
<body name="left_wrist_pitch_link" pos="0.038 0 0">
<inertial pos="0.0229999 -0.00111685 -0.00111658" quat="0.249998 0.661363 0.293036 0.643608"
mass="0.48405" diaginertia="0.000430353 0.000429873 0.000164648"/>
<joint name="left_wrist_pitch_joint" axis="0 1 0" range="-1.61443 1.61443"
actuatorfrcrange="-5 5"/>
<geom class="visual" mesh="left_wrist_pitch_link"/>
<geom class="collision" mesh="left_wrist_pitch_link"/>
<body name="left_wrist_yaw_link" pos="0.046 0 0">
<inertial pos="0.0708244 0.000191745 0.00161742" quat="0.510571 0.526295 0.468078 0.493188"
mass="0.254576" diaginertia="0.000646113 0.000559993 0.000147566"/>
<joint name="left_wrist_yaw_joint" axis="0 0 1" range="-1.61443 1.61443"
actuatorfrcrange="-5 5"/>
<geom class="visual" mesh="left_wrist_yaw_link"/>
<geom class="collision" mesh="left_wrist_yaw_link"/>
<geom pos="0.0415 0.003 0" quat="1 0 0 0" class="visual" mesh="left_rubber_hand"/>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="right_shoulder_pitch_link" pos="0.0039563 -0.10021 0.24778"
quat="0.990264 -0.139201 1.38722e-05 9.86868e-05">
<inertial pos="0 -0.035892 -0.011628" quat="0.68225 -0.326267 0.0130458 0.654152" mass="0.718"
diaginertia="0.000465864 0.000432842 0.000406394"/>
<joint name="right_shoulder_pitch_joint" axis="0 1 0" range="-3.0892 2.6704" actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="right_shoulder_pitch_link"/>
<geom size="0.03 0.025" pos="0 -0.04 -0.01" quat="0.707107 0 0.707107 0" type="cylinder"
rgba="0.7 0.7 0.7 1" class="collision"/>
<body name="right_shoulder_roll_link" pos="0 -0.038 -0.013831" quat="0.990268 0.139172 0 0">
<inertial pos="-0.000227 -0.00727 -0.063243" quat="0.712604 -0.00710317 -0.0196223 0.701256"
mass="0.643" diaginertia="0.000691311 0.000618011 0.000388977"/>
<joint name="right_shoulder_roll_joint" axis="1 0 0" range="-2.2515 1.5882" actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="right_shoulder_roll_link"/>
<geom size="0.03 0.015" pos="-0.004 -0.006 -0.053" type="cylinder" rgba="0.7 0.7 0.7 1"
class="collision"/>
<body name="right_shoulder_yaw_link" pos="0 -0.00624 -0.1032">
<inertial pos="0.010773 0.002949 -0.072009" quat="0.687134 -0.0679942 -0.0964829 0.716879"
mass="0.734" diaginertia="0.00106187 0.00103217 0.000400661"/>
<joint name="right_shoulder_yaw_joint" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="right_shoulder_yaw_link"/>
<geom class="collision" mesh="right_shoulder_yaw_link"/>
<body name="right_elbow_link" pos="0.015783 0 -0.080518">
<inertial pos="0.064956 -0.004454 -0.010062" quat="0.388129 0.388821 0.636132 0.541765" mass="0.6"
diaginertia="0.000443035 0.000421612 0.000259353"/>
<joint name="right_elbow_joint" axis="0 1 0" range="-1.0472 2.0944" actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="right_elbow_link"/>
<geom class="collision" mesh="right_elbow_link"/>
<body name="right_wrist_roll_link" pos="0.1 -0.00188791 -0.01">
<inertial pos="0.0171394 -0.000537591 4.8864e-07" quat="0.411667 0.575338 -0.411094 0.574906"
mass="0.085445" diaginertia="5.48211e-05 4.96646e-05 3.57798e-05"/>
<joint name="right_wrist_roll_joint" axis="1 0 0" range="-1.97222 1.97222"
actuatorfrcrange="-25 25"/>
<geom class="visual" mesh="right_wrist_roll_link"/>
<geom class="collision" mesh="right_wrist_roll_link"/>
<body name="right_wrist_pitch_link" pos="0.038 0 0">
<inertial pos="0.0229999 0.00111685 -0.00111658" quat="0.643608 0.293036 0.661363 0.249998"
mass="0.48405" diaginertia="0.000430353 0.000429873 0.000164648"/>
<joint name="right_wrist_pitch_joint" axis="0 1 0" range="-1.61443 1.61443"
actuatorfrcrange="-5 5"/>
<geom class="visual" mesh="right_wrist_pitch_link"/>
<geom class="collision" mesh="right_wrist_pitch_link"/>
<body name="right_wrist_yaw_link" pos="0.046 0 0">
<inertial pos="0.0708244 -0.000191745 0.00161742" quat="0.493188 0.468078 0.526295 0.510571"
mass="0.254576" diaginertia="0.000646113 0.000559993 0.000147566"/>
<joint name="right_wrist_yaw_joint" axis="0 0 1" range="-1.61443 1.61443"
actuatorfrcrange="-5 5"/>
<geom class="visual" mesh="right_wrist_yaw_link"/>
<geom class="collision" mesh="right_wrist_yaw_link"/>
<geom pos="0.0415 -0.003 0" quat="1 0 0 0" class="visual" mesh="right_rubber_hand"/>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</worldbody>
<actuator>
<position class="g1" name="left_hip_pitch_joint" joint="left_hip_pitch_joint"/>
<position class="g1" name="left_hip_roll_joint" joint="left_hip_roll_joint"/>
<position class="g1" name="left_hip_yaw_joint" joint="left_hip_yaw_joint"/>
<position class="g1" name="left_knee_joint" joint="left_knee_joint"/>
<position class="g1" name="left_ankle_pitch_joint" joint="left_ankle_pitch_joint"/>
<position class="g1" name="left_ankle_roll_joint" joint="left_ankle_roll_joint"/>
<position class="g1" name="right_hip_pitch_joint" joint="right_hip_pitch_joint"/>
<position class="g1" name="right_hip_roll_joint" joint="right_hip_roll_joint"/>
<position class="g1" name="right_hip_yaw_joint" joint="right_hip_yaw_joint"/>
<position class="g1" name="right_knee_joint" joint="right_knee_joint"/>
<position class="g1" name="right_ankle_pitch_joint" joint="right_ankle_pitch_joint"/>
<position class="g1" name="right_ankle_roll_joint" joint="right_ankle_roll_joint"/>
<position class="g1" name="waist_yaw_joint" joint="waist_yaw_joint"/>
<position class="g1" name="waist_roll_joint" joint="waist_roll_joint"/>
<position class="g1" name="waist_pitch_joint" joint="waist_pitch_joint"/>
<position class="g1" name="left_shoulder_pitch_joint" joint="left_shoulder_pitch_joint"/>
<position class="g1" name="left_shoulder_roll_joint" joint="left_shoulder_roll_joint"/>
<position class="g1" name="left_shoulder_yaw_joint" joint="left_shoulder_yaw_joint"/>
<position class="g1" name="left_elbow_joint" joint="left_elbow_joint"/>
<position class="g1" name="left_wrist_roll_joint" joint="left_wrist_roll_joint"/>
<position class="g1" name="left_wrist_pitch_joint" joint="left_wrist_pitch_joint"/>
<position class="g1" name="left_wrist_yaw_joint" joint="left_wrist_yaw_joint"/>
<position class="g1" name="right_shoulder_pitch_joint" joint="right_shoulder_pitch_joint"/>
<position class="g1" name="right_shoulder_roll_joint" joint="right_shoulder_roll_joint"/>
<position class="g1" name="right_shoulder_yaw_joint" joint="right_shoulder_yaw_joint"/>
<position class="g1" name="right_elbow_joint" joint="right_elbow_joint"/>
<position class="g1" name="right_wrist_roll_joint" joint="right_wrist_roll_joint"/>
<position class="g1" name="right_wrist_pitch_joint" joint="right_wrist_pitch_joint"/>
<position class="g1" name="right_wrist_yaw_joint" joint="right_wrist_yaw_joint"/>
</actuator>
<sensor>
<gyro site="imu_in_torso" name="imu-torso-angular-velocity" cutoff="34.9" noise="0.0005"/>
<accelerometer site="imu_in_torso" name="imu-torso-linear-acceleration" cutoff="157" noise="0.01"/>
<gyro site="imu_in_pelvis" name="imu-pelvis-angular-velocity" cutoff="34.9" noise="0.0005"/>
<accelerometer site="imu_in_pelvis" name="imu-pelvis-linear-acceleration" cutoff="157" noise="0.01"/>
</sensor>
<statistic center="1.0 0.7 1.0" extent="0.8"/>
<visual>
<headlight diffuse="0.6 0.6 0.6" ambient="0.1 0.1 0.1" specular="0.9 0.9 0.9"/>
<rgba haze="0.15 0.25 0.35 1"/>
<global azimuth="-140" elevation="-20"/>
</visual>
<asset>
<texture type="skybox" builtin="flat" rgb1="0 0 0" rgb2="0 0 0" width="512" height="3072"/>
<texture type="2d" name="groundplane" builtin="checker" mark="edge" rgb1="0.2 0.3 0.4" rgb2="0.1 0.2 0.3" markrgb="0.8 0.8 0.8" width="300" height="300"/>
<material name="groundplane" texture="groundplane" texuniform="true" texrepeat="5 5" reflectance="0.2"/>
</asset>
<worldbody>
<light pos="1 0 3.5" dir="0 0 -1" directional="true"/>
<geom name="floor" size="0 0 0.05" type="plane" material="groundplane"/>
</worldbody>
<keyframe>
<key name="stand"
qpos="
0 0 0.79
1 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0
0.2 0.2 0 1.28 0 0 0
0.2 -0.2 0 1.28 0 0 0
"
ctrl="
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0
0.2 0.2 0 1.28 0 0 0
0.2 -0.2 0 1.28 0 0 0
"/>
</keyframe>
</mujoco>
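Deployment code often needs the joint limits encoded in the model above. Since MJCF is plain XML, they can be extracted with the Python standard library alone. A minimal sketch over a hypothetical two-joint excerpt of the model; the same loop works unchanged on the full `g1_29dof_rev_1_0.xml`:

```python
import xml.etree.ElementTree as ET

# Trimmed two-joint excerpt of the MJCF above, inlined so the sketch is
# self-contained; parse the real file with ET.parse("g1_29dof_rev_1_0.xml").
MJCF = """
<mujoco model="g1_29dof_rev_1_0">
  <worldbody>
    <body name="pelvis">
      <body name="left_hip_pitch_link">
        <joint name="left_hip_pitch_joint" axis="0 1 0" range="-2.5307 2.8798" actuatorfrcrange="-88 88"/>
        <body name="left_hip_roll_link">
          <joint name="left_hip_roll_joint" axis="1 0 0" range="-0.5236 2.9671" actuatorfrcrange="-139 139"/>
        </body>
      </body>
    </body>
  </worldbody>
</mujoco>
"""

root = ET.fromstring(MJCF)
limits = {}
for joint in root.iter("joint"):  # walks nested bodies in document order
    lo, hi = map(float, joint.attrib["range"].split())
    limits[joint.attrib["name"]] = (lo, hi)

print(limits["left_hip_pitch_joint"])  # (-2.5307, 2.8798)
```

Clamping commanded positions to these ranges before sending them to the robot is a cheap safety check.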
<mujoco model="g1">
<compiler angle="radian" meshdir="meshes"/>
<asset>
<mesh name="pelvis" file="pelvis.STL"/>
<mesh name="pelvis_contour_link" file="pelvis_contour_link.STL"/>
<mesh name="left_hip_pitch_link" file="left_hip_pitch_link.STL"/>
<mesh name="left_hip_roll_link" file="left_hip_roll_link.STL"/>
<mesh name="left_hip_yaw_link" file="left_hip_yaw_link.STL"/>
<mesh name="left_knee_link" file="left_knee_link.STL"/>
<mesh name="left_ankle_pitch_link" file="left_ankle_pitch_link.STL"/>
<mesh name="left_ankle_roll_link" file="left_ankle_roll_link.STL"/>
<mesh name="right_hip_pitch_link" file="right_hip_pitch_link.STL"/>
<mesh name="right_hip_roll_link" file="right_hip_roll_link.STL"/>
<mesh name="right_hip_yaw_link" file="right_hip_yaw_link.STL"/>
<mesh name="right_knee_link" file="right_knee_link.STL"/>
<mesh name="right_ankle_pitch_link" file="right_ankle_pitch_link.STL"/>
<mesh name="right_ankle_roll_link" file="right_ankle_roll_link.STL"/>
<mesh name="waist_yaw_link" file="waist_yaw_link_rev_1_0.STL"/>
<mesh name="waist_roll_link" file="waist_roll_link_rev_1_0.STL"/>
<mesh name="torso_link" file="torso_link_rev_1_0.STL"/>
<mesh name="logo_link" file="logo_link.STL"/>
<mesh name="head_link" file="head_link.STL"/>
<mesh name="left_shoulder_pitch_link" file="left_shoulder_pitch_link.STL"/>
<mesh name="left_shoulder_roll_link" file="left_shoulder_roll_link.STL"/>
<mesh name="left_shoulder_yaw_link" file="left_shoulder_yaw_link.STL"/>
<mesh name="left_elbow_link" file="left_elbow_link.STL"/>
<mesh name="left_wrist_roll_link" file="left_wrist_roll_link.STL"/>
<mesh name="left_wrist_pitch_link" file="left_wrist_pitch_link.STL"/>
<mesh name="left_wrist_yaw_link" file="left_wrist_yaw_link.STL"/>
<mesh name="left_hand_palm_link" file="left_hand_palm_link.STL"/>
<mesh name="left_hand_thumb_0_link" file="left_hand_thumb_0_link.STL"/>
<mesh name="left_hand_thumb_1_link" file="left_hand_thumb_1_link.STL"/>
<mesh name="left_hand_thumb_2_link" file="left_hand_thumb_2_link.STL"/>
<mesh name="left_hand_middle_0_link" file="left_hand_middle_0_link.STL"/>
<mesh name="left_hand_middle_1_link" file="left_hand_middle_1_link.STL"/>
<mesh name="left_hand_index_0_link" file="left_hand_index_0_link.STL"/>
<mesh name="left_hand_index_1_link" file="left_hand_index_1_link.STL"/>
<mesh name="right_shoulder_pitch_link" file="right_shoulder_pitch_link.STL"/>
<mesh name="right_shoulder_roll_link" file="right_shoulder_roll_link.STL"/>
<mesh name="right_shoulder_yaw_link" file="right_shoulder_yaw_link.STL"/>
<mesh name="right_elbow_link" file="right_elbow_link.STL"/>
<mesh name="right_wrist_roll_link" file="right_wrist_roll_link.STL"/>
<mesh name="right_wrist_pitch_link" file="right_wrist_pitch_link.STL"/>
<mesh name="right_wrist_yaw_link" file="right_wrist_yaw_link.STL"/>
<mesh name="right_hand_palm_link" file="right_hand_palm_link.STL"/>
<mesh name="right_hand_thumb_0_link" file="right_hand_thumb_0_link.STL"/>
<mesh name="right_hand_thumb_1_link" file="right_hand_thumb_1_link.STL"/>
<mesh name="right_hand_thumb_2_link" file="right_hand_thumb_2_link.STL"/>
<mesh name="right_hand_middle_0_link" file="right_hand_middle_0_link.STL"/>
<mesh name="right_hand_middle_1_link" file="right_hand_middle_1_link.STL"/>
<mesh name="right_hand_index_0_link" file="right_hand_index_0_link.STL"/>
<mesh name="right_hand_index_1_link" file="right_hand_index_1_link.STL"/>
</asset>
<worldbody>
<body name="pelvis" pos="0 0 0.793">
<inertial pos="0 0 -0.07605" quat="1 0 -0.000399148 0" mass="3.813" diaginertia="0.010549 0.0093089 0.0079184"/>
<joint name="floating_base_joint" type="free" limited="false" actuatorfrclimited="false"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="pelvis"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="pelvis_contour_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="pelvis_contour_link"/>
<site name="imu_in_pelvis" size="0.01" pos="0.04525 0 -0.08339"/>
<body name="left_hip_pitch_link" pos="0 0.064452 -0.1027">
<inertial pos="0.002741 0.047791 -0.02606" quat="0.954862 0.293964 0.0302556 0.030122" mass="1.35" diaginertia="0.00181517 0.00153422 0.00116212"/>
<joint name="left_hip_pitch_joint" pos="0 0 0" axis="0 1 0" range="-2.5307 2.8798" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="left_hip_pitch_link"/>
<geom type="mesh" rgba="0.2 0.2 0.2 1" mesh="left_hip_pitch_link"/>
<body name="left_hip_roll_link" pos="0 0.052 -0.030465" quat="0.996179 0 -0.0873386 0">
<inertial pos="0.029812 -0.001045 -0.087934" quat="0.977808 -1.97119e-05 0.205576 -0.0403793" mass="1.52" diaginertia="0.00254986 0.00241169 0.00148755"/>
<joint name="left_hip_roll_joint" pos="0 0 0" axis="1 0 0" range="-0.5236 2.9671" actuatorfrcrange="-139 139"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hip_roll_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hip_roll_link"/>
<body name="left_hip_yaw_link" pos="0.025001 0 -0.12412">
<inertial pos="-0.057709 -0.010981 -0.15078" quat="0.600598 0.15832 0.223482 0.751181" mass="1.702" diaginertia="0.00776166 0.00717575 0.00160139"/>
<joint name="left_hip_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.7576 2.7576" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hip_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hip_yaw_link"/>
<body name="left_knee_link" pos="-0.078273 0.0021489 -0.17734" quat="0.996179 0 0.0873386 0">
<inertial pos="0.005457 0.003964 -0.12074" quat="0.923418 -0.0327699 0.0158246 0.382067" mass="1.932" diaginertia="0.0113804 0.0112778 0.00146458"/>
<joint name="left_knee_joint" pos="0 0 0" axis="0 1 0" range="-0.087267 2.8798" actuatorfrcrange="-139 139"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_knee_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_knee_link"/>
<body name="left_ankle_pitch_link" pos="0 -9.4445e-05 -0.30001">
<inertial pos="-0.007269 0 0.011137" quat="0.603053 0.369225 0.369225 0.603053" mass="0.074" diaginertia="1.89e-05 1.40805e-05 6.9195e-06"/>
<joint name="left_ankle_pitch_joint" pos="0 0 0" axis="0 1 0" range="-0.87267 0.5236" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_ankle_pitch_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_ankle_pitch_link"/>
<body name="left_ankle_roll_link" pos="0 0 -0.017558">
<inertial pos="0.026505 0 -0.016425" quat="-0.000481092 0.728482 -0.000618967 0.685065" mass="0.608" diaginertia="0.00167218 0.0016161 0.000217621"/>
<joint name="left_ankle_roll_joint" pos="0 0 0" axis="1 0 0" range="-0.2618 0.2618" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="left_ankle_roll_link"/>
<geom size="0.005" pos="-0.05 0.025 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="-0.05 -0.025 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="0.12 0.03 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="0.12 -0.03 -0.03" rgba="0.2 0.2 0.2 1"/>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="right_hip_pitch_link" pos="0 -0.064452 -0.1027">
<inertial pos="0.002741 -0.047791 -0.02606" quat="0.954862 -0.293964 0.0302556 -0.030122" mass="1.35" diaginertia="0.00181517 0.00153422 0.00116212"/>
<joint name="right_hip_pitch_joint" pos="0 0 0" axis="0 1 0" range="-2.5307 2.8798" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="right_hip_pitch_link"/>
<geom type="mesh" rgba="0.2 0.2 0.2 1" mesh="right_hip_pitch_link"/>
<body name="right_hip_roll_link" pos="0 -0.052 -0.030465" quat="0.996179 0 -0.0873386 0">
<inertial pos="0.029812 0.001045 -0.087934" quat="0.977808 1.97119e-05 0.205576 0.0403793" mass="1.52" diaginertia="0.00254986 0.00241169 0.00148755"/>
<joint name="right_hip_roll_joint" pos="0 0 0" axis="1 0 0" range="-2.9671 0.5236" actuatorfrcrange="-139 139"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hip_roll_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hip_roll_link"/>
<body name="right_hip_yaw_link" pos="0.025001 0 -0.12412">
<inertial pos="-0.057709 0.010981 -0.15078" quat="0.751181 0.223482 0.15832 0.600598" mass="1.702" diaginertia="0.00776166 0.00717575 0.00160139"/>
<joint name="right_hip_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.7576 2.7576" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hip_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hip_yaw_link"/>
<body name="right_knee_link" pos="-0.078273 -0.0021489 -0.17734" quat="0.996179 0 0.0873386 0">
<inertial pos="0.005457 -0.003964 -0.12074" quat="0.923439 0.0345276 0.0116333 -0.382012" mass="1.932" diaginertia="0.011374 0.0112843 0.00146452"/>
<joint name="right_knee_joint" pos="0 0 0" axis="0 1 0" range="-0.087267 2.8798" actuatorfrcrange="-139 139"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_knee_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_knee_link"/>
<body name="right_ankle_pitch_link" pos="0 9.4445e-05 -0.30001">
<inertial pos="-0.007269 0 0.011137" quat="0.603053 0.369225 0.369225 0.603053" mass="0.074" diaginertia="1.89e-05 1.40805e-05 6.9195e-06"/>
<joint name="right_ankle_pitch_joint" pos="0 0 0" axis="0 1 0" range="-0.87267 0.5236" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_ankle_pitch_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_ankle_pitch_link"/>
<body name="right_ankle_roll_link" pos="0 0 -0.017558">
<inertial pos="0.026505 0 -0.016425" quat="0.000481092 0.728482 0.000618967 0.685065" mass="0.608" diaginertia="0.00167218 0.0016161 0.000217621"/>
<joint name="right_ankle_roll_joint" pos="0 0 0" axis="1 0 0" range="-0.2618 0.2618" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="right_ankle_roll_link"/>
<geom size="0.005" pos="-0.05 0.025 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="-0.05 -0.025 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="0.12 0.03 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="0.12 -0.03 -0.03" rgba="0.2 0.2 0.2 1"/>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="waist_yaw_link">
<inertial pos="0.003494 0.000233 0.018034" quat="0.289697 0.591001 -0.337795 0.672821" mass="0.214" diaginertia="0.000163531 0.000107714 0.000102205"/>
<joint name="waist_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="waist_yaw_link"/>
<body name="waist_roll_link" pos="-0.0039635 0 0.044">
<inertial pos="0 2.3e-05 0" quat="0.5 0.5 -0.5 0.5" mass="0.086" diaginertia="8.245e-06 7.079e-06 6.339e-06"/>
<joint name="waist_roll_joint" pos="0 0 0" axis="1 0 0" range="-0.52 0.52" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="waist_roll_link"/>
<body name="torso_link">
<inertial pos="0.00203158 0.000339683 0.184568" quat="0.999803 -6.03319e-05 0.0198256 0.00131986" mass="7.818" diaginertia="0.121847 0.109825 0.0273735"/>
<joint name="waist_pitch_joint" pos="0 0 0" axis="0 1 0" range="-0.52 0.52" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="torso_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="torso_link"/>
<geom pos="0.0039635 0 -0.044" quat="1 0 0 0" type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="logo_link"/>
<geom pos="0.0039635 0 -0.044" quat="1 0 0 0" type="mesh" rgba="0.2 0.2 0.2 1" mesh="logo_link"/>
<geom pos="0.0039635 0 -0.044" type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="head_link"/>
<geom pos="0.0039635 0 -0.044" type="mesh" rgba="0.2 0.2 0.2 1" mesh="head_link"/>
<site name="imu_in_torso" size="0.01" pos="-0.03959 -0.00224 0.14792"/>
<body name="left_shoulder_pitch_link" pos="0.0039563 0.10022 0.24778" quat="0.990264 0.139201 1.38722e-05 -9.86868e-05">
<inertial pos="0 0.035892 -0.011628" quat="0.654152 0.0130458 -0.326267 0.68225" mass="0.718" diaginertia="0.000465864 0.000432842 0.000406394"/>
<joint name="left_shoulder_pitch_joint" pos="0 0 0" axis="0 1 0" range="-3.0892 2.6704" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_shoulder_pitch_link"/>
<geom size="0.03 0.025" pos="0 0.04 -0.01" quat="0.707107 0 0.707107 0" type="cylinder" rgba="0.7 0.7 0.7 1"/>
<body name="left_shoulder_roll_link" pos="0 0.038 -0.013831" quat="0.990268 -0.139172 0 0">
<inertial pos="-0.000227 0.00727 -0.063243" quat="0.701256 -0.0196223 -0.00710317 0.712604" mass="0.643" diaginertia="0.000691311 0.000618011 0.000388977"/>
<joint name="left_shoulder_roll_joint" pos="0 0 0" axis="1 0 0" range="-1.5882 2.2515" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_shoulder_roll_link"/>
<geom size="0.03 0.015" pos="-0.004 0.006 -0.053" type="cylinder" rgba="0.7 0.7 0.7 1"/>
<body name="left_shoulder_yaw_link" pos="0 0.00624 -0.1032">
<inertial pos="0.010773 -0.002949 -0.072009" quat="0.716879 -0.0964829 -0.0679942 0.687134" mass="0.734" diaginertia="0.00106187 0.00103217 0.000400661"/>
<joint name="left_shoulder_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_shoulder_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_shoulder_yaw_link"/>
<body name="left_elbow_link" pos="0.015783 0 -0.080518">
<inertial pos="0.064956 0.004454 -0.010062" quat="0.541765 0.636132 0.388821 0.388129" mass="0.6" diaginertia="0.000443035 0.000421612 0.000259353"/>
<joint name="left_elbow_joint" pos="0 0 0" axis="0 1 0" range="-1.0472 2.0944" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_elbow_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_elbow_link"/>
<body name="left_wrist_roll_link" pos="0.1 0.00188791 -0.01">
<inertial pos="0.0171394 0.000537591 4.8864e-07" quat="0.575338 0.411667 -0.574906 0.411094" mass="0.085445" diaginertia="5.48211e-05 4.96646e-05 3.57798e-05"/>
<joint name="left_wrist_roll_joint" pos="0 0 0" axis="1 0 0" range="-1.97222 1.97222" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_wrist_roll_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_wrist_roll_link"/>
<body name="left_wrist_pitch_link" pos="0.038 0 0">
<inertial pos="0.0229999 -0.00111685 -0.00111658" quat="0.249998 0.661363 0.293036 0.643608" mass="0.48405" diaginertia="0.000430353 0.000429873 0.000164648"/>
<joint name="left_wrist_pitch_joint" pos="0 0 0" axis="0 1 0" range="-1.61443 1.61443" actuatorfrcrange="-5 5"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_wrist_pitch_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_wrist_pitch_link"/>
<body name="left_wrist_yaw_link" pos="0.046 0 0">
<inertial pos="0.0885506 0.00212216 -0.000374562" quat="0.487149 0.493844 0.513241 0.505358" mass="0.457415" diaginertia="0.00105989 0.000895419 0.000323842"/>
<joint name="left_wrist_yaw_joint" pos="0 0 0" axis="0 0 1" range="-1.61443 1.61443" actuatorfrcrange="-5 5"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_wrist_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_wrist_yaw_link"/>
<geom pos="0.0415 0.003 0" quat="1 0 0 0" type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_palm_link"/>
<geom pos="0.0415 0.003 0" quat="1 0 0 0" type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_palm_link"/>
<body name="left_hand_thumb_0_link" pos="0.067 0.003 0">
<inertial pos="-0.000884246 -0.00863407 0.000944293" quat="0.462991 0.643965 -0.460173 0.398986" mass="0.0862366" diaginertia="1.6546e-05 1.60058e-05 1.43741e-05"/>
<joint name="left_hand_thumb_0_joint" pos="0 0 0" axis="0 1 0" range="-1.0472 1.0472" actuatorfrcrange="-2.45 2.45"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_0_link"/>
<body name="left_hand_thumb_1_link" pos="-0.0025 -0.0193 0">
<inertial pos="-0.000827888 -0.0354744 -0.0003809" quat="0.685598 0.705471 -0.15207 0.0956069" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="left_hand_thumb_1_joint" pos="0 0 0" axis="0 0 1" range="-0.724312 1.0472" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_1_link"/>
<geom size="0.01 0.015 0.01" pos="-0.001 -0.032 0" type="box" rgba="0.7 0.7 0.7 1"/>
<body name="left_hand_thumb_2_link" pos="0 -0.0458 0">
<inertial pos="-0.00171735 -0.0262819 0.000107789" quat="0.703174 0.710977 -0.00017564 -0.00766553" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="left_hand_thumb_2_joint" pos="0 0 0" axis="0 0 1" range="0 1.74533" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_2_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_2_link"/>
</body>
</body>
</body>
<body name="left_hand_middle_0_link" pos="0.1192 0.0046 -0.0285">
<inertial pos="0.0354744 0.000827888 0.0003809" quat="0.391313 0.552395 0.417187 0.606373" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="left_hand_middle_0_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_middle_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_middle_0_link"/>
<body name="left_hand_middle_1_link" pos="0.0458 0 0">
<inertial pos="0.0262819 0.00171735 -0.000107789" quat="0.502612 0.491799 0.502639 0.502861" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="left_hand_middle_1_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_middle_1_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_middle_1_link"/>
</body>
</body>
<body name="left_hand_index_0_link" pos="0.1192 0.0046 0.0285">
<inertial pos="0.0354744 0.000827888 0.0003809" quat="0.391313 0.552395 0.417187 0.606373" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="left_hand_index_0_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_index_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_index_0_link"/>
<body name="left_hand_index_1_link" pos="0.0458 0 0">
<inertial pos="0.0262819 0.00171735 -0.000107789" quat="0.502612 0.491799 0.502639 0.502861" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="left_hand_index_1_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_index_1_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_index_1_link"/>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="right_shoulder_pitch_link" pos="0.0039563 -0.10021 0.24778" quat="0.990264 -0.139201 1.38722e-05 9.86868e-05">
<inertial pos="0 -0.035892 -0.011628" quat="0.68225 -0.326267 0.0130458 0.654152" mass="0.718" diaginertia="0.000465864 0.000432842 0.000406394"/>
<joint name="right_shoulder_pitch_joint" pos="0 0 0" axis="0 1 0" range="-3.0892 2.6704" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_shoulder_pitch_link"/>
<geom size="0.03 0.025" pos="0 -0.04 -0.01" quat="0.707107 0 0.707107 0" type="cylinder" rgba="0.7 0.7 0.7 1"/>
<body name="right_shoulder_roll_link" pos="0 -0.038 -0.013831" quat="0.990268 0.139172 0 0">
<inertial pos="-0.000227 -0.00727 -0.063243" quat="0.712604 -0.00710317 -0.0196223 0.701256" mass="0.643" diaginertia="0.000691311 0.000618011 0.000388977"/>
<joint name="right_shoulder_roll_joint" pos="0 0 0" axis="1 0 0" range="-2.2515 1.5882" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_shoulder_roll_link"/>
<geom size="0.03 0.015" pos="-0.004 -0.006 -0.053" type="cylinder" rgba="0.7 0.7 0.7 1"/>
<body name="right_shoulder_yaw_link" pos="0 -0.00624 -0.1032">
<inertial pos="0.010773 0.002949 -0.072009" quat="0.687134 -0.0679942 -0.0964829 0.716879" mass="0.734" diaginertia="0.00106187 0.00103217 0.000400661"/>
<joint name="right_shoulder_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_shoulder_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_shoulder_yaw_link"/>
<body name="right_elbow_link" pos="0.015783 0 -0.080518">
<inertial pos="0.064956 -0.004454 -0.010062" quat="0.388129 0.388821 0.636132 0.541765" mass="0.6" diaginertia="0.000443035 0.000421612 0.000259353"/>
<joint name="right_elbow_joint" pos="0 0 0" axis="0 1 0" range="-1.0472 2.0944" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_elbow_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_elbow_link"/>
<body name="right_wrist_roll_link" pos="0.1 -0.00188791 -0.01">
<inertial pos="0.0171394 -0.000537591 4.8864e-07" quat="0.411667 0.575338 -0.411094 0.574906" mass="0.085445" diaginertia="5.48211e-05 4.96646e-05 3.57798e-05"/>
<joint name="right_wrist_roll_joint" pos="0 0 0" axis="1 0 0" range="-1.97222 1.97222" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_wrist_roll_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_wrist_roll_link"/>
<body name="right_wrist_pitch_link" pos="0.038 0 0">
<inertial pos="0.0229999 0.00111685 -0.00111658" quat="0.643608 0.293036 0.661363 0.249998" mass="0.48405" diaginertia="0.000430353 0.000429873 0.000164648"/>
<joint name="right_wrist_pitch_joint" pos="0 0 0" axis="0 1 0" range="-1.61443 1.61443" actuatorfrcrange="-5 5"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_wrist_pitch_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_wrist_pitch_link"/>
<body name="right_wrist_yaw_link" pos="0.046 0 0">
<inertial pos="0.0885506 -0.00212216 -0.000374562" quat="0.505358 0.513241 0.493844 0.487149" mass="0.457415" diaginertia="0.00105989 0.000895419 0.000323842"/>
<joint name="right_wrist_yaw_joint" pos="0 0 0" axis="0 0 1" range="-1.61443 1.61443" actuatorfrcrange="-5 5"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_wrist_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_wrist_yaw_link"/>
<geom pos="0.0415 -0.003 0" quat="1 0 0 0" type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_palm_link"/>
<geom pos="0.0415 -0.003 0" quat="1 0 0 0" type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_palm_link"/>
<body name="right_hand_thumb_0_link" pos="0.067 -0.003 0">
<inertial pos="-0.000884246 0.00863407 0.000944293" quat="0.643965 0.462991 -0.398986 0.460173" mass="0.0862366" diaginertia="1.6546e-05 1.60058e-05 1.43741e-05"/>
<joint name="right_hand_thumb_0_joint" pos="0 0 0" axis="0 1 0" range="-1.0472 1.0472" actuatorfrcrange="-2.45 2.45"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_0_link"/>
<body name="right_hand_thumb_1_link" pos="-0.0025 0.0193 0">
<inertial pos="-0.000827888 0.0354744 -0.0003809" quat="0.705471 0.685598 -0.0956069 0.15207" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="right_hand_thumb_1_joint" pos="0 0 0" axis="0 0 1" range="-1.0472 0.724312" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_1_link"/>
<geom size="0.01 0.015 0.01" pos="-0.001 0.032 0" type="box" rgba="0.7 0.7 0.7 1"/>
<body name="right_hand_thumb_2_link" pos="0 0.0458 0">
<inertial pos="-0.00171735 0.0262819 0.000107789" quat="0.710977 0.703174 0.00766553 0.00017564" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="right_hand_thumb_2_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_2_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_2_link"/>
</body>
</body>
</body>
<body name="right_hand_middle_0_link" pos="0.1192 -0.0046 -0.0285">
<inertial pos="0.0354744 -0.000827888 0.0003809" quat="0.606373 0.417187 0.552395 0.391313" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="right_hand_middle_0_joint" pos="0 0 0" axis="0 0 1" range="0 1.5708" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_middle_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_middle_0_link"/>
<body name="right_hand_middle_1_link" pos="0.0458 0 0">
<inertial pos="0.0262819 -0.00171735 -0.000107789" quat="0.502861 0.502639 0.491799 0.502612" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="right_hand_middle_1_joint" pos="0 0 0" axis="0 0 1" range="0 1.74533" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_middle_1_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_middle_1_link"/>
</body>
</body>
<body name="right_hand_index_0_link" pos="0.1192 -0.0046 0.0285">
<inertial pos="0.0354744 -0.000827888 0.0003809" quat="0.606373 0.417187 0.552395 0.391313" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="right_hand_index_0_joint" pos="0 0 0" axis="0 0 1" range="0 1.5708" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_index_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_index_0_link"/>
<body name="right_hand_index_1_link" pos="0.0458 0 0">
<inertial pos="0.0262819 -0.00171735 -0.000107789" quat="0.502861 0.502639 0.491799 0.502612" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="right_hand_index_1_joint" pos="0 0 0" axis="0 0 1" range="0 1.74533" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_index_1_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_index_1_link"/>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</worldbody>
<actuator>
<motor name="left_hip_pitch_joint" joint="left_hip_pitch_joint"/>
<motor name="left_hip_roll_joint" joint="left_hip_roll_joint"/>
<motor name="left_hip_yaw_joint" joint="left_hip_yaw_joint"/>
<motor name="left_knee_joint" joint="left_knee_joint"/>
<motor name="left_ankle_pitch_joint" joint="left_ankle_pitch_joint"/>
<motor name="left_ankle_roll_joint" joint="left_ankle_roll_joint"/>
<motor name="right_hip_pitch_joint" joint="right_hip_pitch_joint"/>
<motor name="right_hip_roll_joint" joint="right_hip_roll_joint"/>
<motor name="right_hip_yaw_joint" joint="right_hip_yaw_joint"/>
<motor name="right_knee_joint" joint="right_knee_joint"/>
<motor name="right_ankle_pitch_joint" joint="right_ankle_pitch_joint"/>
<motor name="right_ankle_roll_joint" joint="right_ankle_roll_joint"/>
<motor name="waist_yaw_joint" joint="waist_yaw_joint"/>
<motor name="waist_roll_joint" joint="waist_roll_joint"/>
<motor name="waist_pitch_joint" joint="waist_pitch_joint"/>
<motor name="left_shoulder_pitch_joint" joint="left_shoulder_pitch_joint"/>
<motor name="left_shoulder_roll_joint" joint="left_shoulder_roll_joint"/>
<motor name="left_shoulder_yaw_joint" joint="left_shoulder_yaw_joint"/>
<motor name="left_elbow_joint" joint="left_elbow_joint"/>
<motor name="left_wrist_roll_joint" joint="left_wrist_roll_joint"/>
<motor name="left_wrist_pitch_joint" joint="left_wrist_pitch_joint"/>
<motor name="left_wrist_yaw_joint" joint="left_wrist_yaw_joint"/>
<motor name="left_hand_thumb_0_joint" joint="left_hand_thumb_0_joint"/>
<motor name="left_hand_thumb_1_joint" joint="left_hand_thumb_1_joint"/>
<motor name="left_hand_thumb_2_joint" joint="left_hand_thumb_2_joint"/>
<motor name="left_hand_middle_0_joint" joint="left_hand_middle_0_joint"/>
<motor name="left_hand_middle_1_joint" joint="left_hand_middle_1_joint"/>
<motor name="left_hand_index_0_joint" joint="left_hand_index_0_joint"/>
<motor name="left_hand_index_1_joint" joint="left_hand_index_1_joint"/>
<motor name="right_shoulder_pitch_joint" joint="right_shoulder_pitch_joint"/>
<motor name="right_shoulder_roll_joint" joint="right_shoulder_roll_joint"/>
<motor name="right_shoulder_yaw_joint" joint="right_shoulder_yaw_joint"/>
<motor name="right_elbow_joint" joint="right_elbow_joint"/>
<motor name="right_wrist_roll_joint" joint="right_wrist_roll_joint"/>
<motor name="right_wrist_pitch_joint" joint="right_wrist_pitch_joint"/>
<motor name="right_wrist_yaw_joint" joint="right_wrist_yaw_joint"/>
<motor name="right_hand_thumb_0_joint" joint="right_hand_thumb_0_joint"/>
<motor name="right_hand_thumb_1_joint" joint="right_hand_thumb_1_joint"/>
<motor name="right_hand_thumb_2_joint" joint="right_hand_thumb_2_joint"/>
<motor name="right_hand_index_0_joint" joint="right_hand_index_0_joint"/>
<motor name="right_hand_index_1_joint" joint="right_hand_index_1_joint"/>
<motor name="right_hand_middle_0_joint" joint="right_hand_middle_0_joint"/>
<motor name="right_hand_middle_1_joint" joint="right_hand_middle_1_joint"/>
</actuator>
<sensor>
<gyro name="imu-torso-angular-velocity" site="imu_in_torso" noise="5e-4" cutoff="34.9"/>
<accelerometer name="imu-torso-linear-acceleration" site="imu_in_torso" noise="1e-2" cutoff="157"/>
<gyro name="imu-pelvis-angular-velocity" site="imu_in_pelvis" noise="5e-4" cutoff="34.9"/>
<accelerometer name="imu-pelvis-linear-acceleration" site="imu_in_pelvis" noise="1e-2" cutoff="157"/>
</sensor>
<!-- setup scene -->
<statistic center="1.0 0.7 1.0" extent="0.8"/>
<visual>
<headlight diffuse="0.6 0.6 0.6" ambient="0.1 0.1 0.1" specular="0.9 0.9 0.9"/>
<rgba haze="0.15 0.25 0.35 1"/>
<global azimuth="-140" elevation="-20"/>
</visual>
<asset>
<texture type="skybox" builtin="flat" rgb1="0 0 0" rgb2="0 0 0" width="512" height="3072"/>
<texture type="2d" name="groundplane" builtin="checker" mark="edge" rgb1="0.2 0.3 0.4" rgb2="0.1 0.2 0.3" markrgb="0.8 0.8 0.8" width="300" height="300"/>
<material name="groundplane" texture="groundplane" texuniform="true" texrepeat="5 5" reflectance="0.2"/>
</asset>
<worldbody>
<light pos="1 0 3.5" dir="0 0 -1" directional="true"/>
<geom name="floor" size="0 0 0.05" type="plane" material="groundplane"/>
</worldbody>
</mujoco>
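Every `<motor>` in the `<actuator>` block above must reference a joint declared in the kinematic tree. A standalone consistency check for that invariant (standard library only; the inline `MODEL_FRAGMENT` below is a one-joint slice written here for illustration, not the full model):

```python
import xml.etree.ElementTree as ET

# A minimal fragment mirroring the structure of the full G1 model above.
MODEL_FRAGMENT = """
<mujoco>
  <worldbody>
    <body name="waist_yaw_link">
      <joint name="waist_yaw_joint" axis="0 0 1" range="-2.618 2.618"/>
    </body>
  </worldbody>
  <actuator>
    <motor name="waist_yaw_joint" joint="waist_yaw_joint"/>
  </actuator>
</mujoco>
"""

root = ET.fromstring(MODEL_FRAGMENT)
joints = {j.get("name") for j in root.iter("joint")}          # all declared joints
motors = [m.get("joint") for m in root.find("actuator")]       # joints driven by motors

# Every actuator must drive a declared joint.
missing = [m for m in motors if m not in joints]
print(missing)  # []
```

Running the same check against the full model should likewise return an empty list, since each of the `<motor>` entries above names an existing joint.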


@@ -0,0 +1,261 @@
<robot name="z1">
<mujoco>
<compiler meshdir="meshes" discardvisual="false"/>
</mujoco>
<link name="world"/>
<joint name="base_static_joint" type="fixed">
<origin rpy="0 0 0" xyz="0 0 0"/>
<parent link="world"/>
<child link="link00"/>
</joint>
<link name="link00">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="meshes/z1_Link00.STL" scale="1 1 1"/>
</geometry>
</visual>
<collision>
<geometry>
<cylinder length="0.051" radius="0.0325"/>
</geometry>
<origin rpy="0 0 0" xyz="0 0 0.0255"/>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.00334984 -0.00013615 0.02495843"/>
<mass value="0.47247481"/>
<inertia ixx="0.00037937" ixy="-3.5e-07" ixz="-1.037e-05" iyy="0.00041521" iyz="-9.9e-07" izz="0.00053066"/>
</inertial>
</link>
<joint name="joint1" type="revolute">
<origin rpy="0 0 0" xyz="0 0 0.0585"/>
<parent link="link00"/>
<child link="link01"/>
<axis xyz="0 0 1"/>
<dynamics damping="1.0" friction="1.0"/>
<limit effort="30.0" lower="-2.6179938779914944" upper="2.6179938779914944" velocity="3.1415"/>
</joint>
<link name="link01">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="meshes/z1_Link01.STL" scale="1 1 1"/>
</geometry>
</visual>
<inertial>
<origin rpy="0 0 0" xyz="2.47e-06 -0.00025198 0.02317169"/>
<mass value="0.67332551"/>
<inertia ixx="0.00128328" ixy="-6e-08" ixz="-4e-07" iyy="0.00071931" iyz="5e-07" izz="0.00083936"/>
</inertial>
</link>
<joint name="joint2" type="revolute">
<origin rpy="0 0 0" xyz="0 0 0.045"/>
<parent link="link01"/>
<child link="link02"/>
<axis xyz="0 1 0"/>
<dynamics damping="2.0" friction="2.0"/>
<limit effort="60.0" lower="0.0" upper="2.9670597283903604" velocity="3.1415"/>
</joint>
<link name="link02">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="meshes/z1_Link02.STL" scale="1 1 1"/>
</geometry>
</visual>
<collision>
<geometry>
<cylinder length="0.102" radius="0.0325"/>
</geometry>
<origin rpy="1.5707963267948966 0 0" xyz="0 0 0"/>
</collision>
<collision>
<geometry>
<cylinder length="0.235" radius="0.0225"/>
</geometry>
<origin rpy="0 1.5707963267948966 0" xyz="-0.16249999999999998 0 0"/>
</collision>
<collision>
<geometry>
<cylinder length="0.051" radius="0.0325"/>
</geometry>
<origin rpy="1.5707963267948966 0 0" xyz="-0.35 0 0"/>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.11012601 0.00240029 0.00158266"/>
<mass value="1.19132258"/>
<inertia ixx="0.00102138" ixy="0.00062358" ixz="5.13e-06" iyy="0.02429457" iyz="-2.1e-06" izz="0.02466114"/>
</inertial>
</link>
<joint name="joint3" type="revolute">
<origin rpy="0 0 0" xyz="-0.35 0 0"/>
<parent link="link02"/>
<child link="link03"/>
<axis xyz="0 1 0"/>
<dynamics damping="1.0" friction="1.0"/>
<limit effort="30.0" lower="-2.8797932657906435" upper="0.0" velocity="3.1415"/>
</joint>
<link name="link03">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="meshes/z1_Link03.STL" scale="1 1 1"/>
</geometry>
</visual>
<collision>
<geometry>
<cylinder length="0.116" radius="0.02"/>
</geometry>
<origin rpy="0 1.5707963267948966 0" xyz="0.128 0 0.055"/>
</collision>
<collision>
<geometry>
<cylinder length="0.059" radius="0.0325"/>
</geometry>
<origin rpy="0 1.5707963267948966 1.5707963267948966" xyz="0.2205 0 0.055"/>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.10609208 -0.00541815 0.03476383"/>
<mass value="0.83940874"/>
<inertia ixx="0.00108061" ixy="-8.669e-05" ixz="-0.00208102" iyy="0.00954238" iyz="-1.332e-05" izz="0.00886621"/>
</inertial>
</link>
<joint name="joint4" type="revolute">
<origin rpy="0 0 0" xyz="0.218 0 0.057"/>
<parent link="link03"/>
<child link="link04"/>
<axis xyz="0 1 0"/>
<dynamics damping="1.0" friction="1.0"/>
<limit effort="30.0" lower="-1.5184364492350666" upper="1.5184364492350666" velocity="3.1415"/>
</joint>
<link name="link04">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="meshes/z1_Link04.STL" scale="1 1 1"/>
</geometry>
</visual>
<collision>
<geometry>
<cylinder length="0.067" radius="0.0325"/>
</geometry>
<origin rpy="0 0 0" xyz="0.072 0 0"/>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.04366681 0.00364738 -0.00170192"/>
<mass value="0.56404563"/>
<inertia ixx="0.00031576" ixy="8.13e-05" ixz="4.091e-05" iyy="0.00092996" iyz="-5.96e-06" izz="0.00097912"/>
</inertial>
</link>
<joint name="joint5" type="revolute">
<origin rpy="0 0 0" xyz="0.07 0.0 0.0"/>
<parent link="link04"/>
<child link="link05"/>
<axis xyz="0 0 1"/>
<dynamics damping="1.0" friction="1.0"/>
<limit effort="30.0" lower="-1.3439035240356338" upper="1.3439035240356338" velocity="3.1415"/>
</joint>
<link name="link05">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="meshes/z1_Link05.STL" scale="1 1 1"/>
</geometry>
</visual>
<inertial>
<origin rpy="0 0 0" xyz="0.03121533 0.0 0.00646316"/>
<mass value="0.38938492"/>
<inertia ixx="0.00017605" ixy="4e-07" ixz="5.689e-05" iyy="0.00055896" iyz="-1.3e-07" izz="0.0005386"/>
</inertial>
</link>
<joint name="joint6" type="revolute">
<origin rpy="0 0 0" xyz="0.0492 0.0 0.0"/>
<parent link="link05"/>
<child link="link06"/>
<axis xyz="1 0 0"/>
<dynamics damping="1.0" friction="1.0"/>
<limit effort="30.0" lower="-2.792526803190927" upper="2.792526803190927" velocity="3.1415"/>
</joint>
<link name="link06">
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="meshes/z1_Link06.STL" scale="1 1 1"/>
</geometry>
</visual>
<collision>
<geometry>
<cylinder length="0.051" radius="0.0325"/>
</geometry>
<origin rpy="0 1.5707963267948966 0" xyz="0.0255 0 0"/>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.0241569 -0.00017355 -0.00143876"/>
<mass value="0.28875807"/>
<inertia ixx="0.00018328" ixy="1.22e-06" ixz="5.4e-07" iyy="0.0001475" iyz="8e-08" izz="0.0001468"/>
</inertial>
</link>
<transmission name="JointTrans1">
<type>transmission_interface/SimpleTransmission</type>
<joint name="joint1">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
</joint>
<actuator name="Actuator1">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
<transmission name="JointTrans2">
<type>transmission_interface/SimpleTransmission</type>
<joint name="joint2">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
</joint>
<actuator name="Actuator2">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
<transmission name="JointTrans3">
<type>transmission_interface/SimpleTransmission</type>
<joint name="joint3">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
</joint>
<actuator name="Actuator3">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
<transmission name="JointTrans4">
<type>transmission_interface/SimpleTransmission</type>
<joint name="joint4">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
</joint>
<actuator name="Actuator4">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
<transmission name="JointTrans5">
<type>transmission_interface/SimpleTransmission</type>
<joint name="joint5">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
</joint>
<actuator name="Actuator5">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
<transmission name="JointTrans6">
<type>transmission_interface/SimpleTransmission</type>
<joint name="joint6">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
</joint>
<actuator name="Actuator6">
<hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
</robot>
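The z1 URDF above forms a serial chain `world → link00 → link01 → … → link06` through its `<parent>`/`<child>` pairs. A sketch of how that chain order can be recovered programmatically (standard library only; `URDF_FRAGMENT` below is a trimmed two-joint slice written for illustration, not the full file):

```python
import xml.etree.ElementTree as ET

# A trimmed fragment of the z1 URDF above (first two joints only).
URDF_FRAGMENT = """
<robot name="z1">
  <link name="world"/>
  <link name="link00"/>
  <link name="link01"/>
  <joint name="base_static_joint" type="fixed">
    <parent link="world"/>
    <child link="link00"/>
  </joint>
  <joint name="joint1" type="revolute">
    <parent link="link00"/>
    <child link="link01"/>
    <limit effort="30.0" lower="-2.618" upper="2.618" velocity="3.1415"/>
  </joint>
</robot>
"""

root = ET.fromstring(URDF_FRAGMENT)
# Map parent link -> (joint name, child link), then walk the serial chain.
children = {
    j.find("parent").get("link"): (j.get("name"), j.find("child").get("link"))
    for j in root.iter("joint")
}

chain, link = [], "world"
while link in children:
    joint, link = children[link]
    chain.append(joint)
print(chain)  # ['base_static_joint', 'joint1']
```

The same walk over the full URDF yields all seven joints in order, which is useful when building a joint-index mapping for a controller.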


@@ -0,0 +1,125 @@
"""
@misc{cadene2024lerobot,
author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Wolf, Thomas},
title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in PyTorch},
howpublished = {Available at: https://github.com/huggingface/lerobot},
year = {2024},
}
"""
import abc
from dataclasses import dataclass

import draccus


@dataclass
class CameraConfig(draccus.ChoiceRegistry, abc.ABC):
    @property
    def type(self) -> str:
        return self.get_choice_name(self.__class__)


@CameraConfig.register_subclass("opencv")
@dataclass
class OpenCVCameraConfig(CameraConfig):
    """
    Example of tested options for Intel Real Sense D405:
    ```python
    OpenCVCameraConfig(0, 30, 640, 480)
    OpenCVCameraConfig(0, 60, 640, 480)
    OpenCVCameraConfig(0, 90, 640, 480)
    OpenCVCameraConfig(0, 30, 1280, 720)
    ```
    """

    camera_index: int
    fps: int | None = None
    width: int | None = None
    height: int | None = None
    color_mode: str = "rgb"
    channels: int | None = None
    rotation: int | None = None
    mock: bool = False

    def __post_init__(self):
        if self.color_mode not in ["rgb", "bgr"]:
            raise ValueError(
                f"`color_mode` is expected to be 'rgb' or 'bgr', but {self.color_mode} is provided."
            )

        self.channels = 3

        if self.rotation not in [-90, None, 90, 180]:
            raise ValueError(f"`rotation` must be in [-90, None, 90, 180] (got {self.rotation})")


@CameraConfig.register_subclass("intelrealsense")
@dataclass
class IntelRealSenseCameraConfig(CameraConfig):
    """
    Example of tested options for Intel Real Sense D405:
    ```python
    IntelRealSenseCameraConfig(128422271347, 30, 640, 480)
    IntelRealSenseCameraConfig(128422271347, 60, 640, 480)
    IntelRealSenseCameraConfig(128422271347, 90, 640, 480)
    IntelRealSenseCameraConfig(128422271347, 30, 1280, 720)
    IntelRealSenseCameraConfig(128422271347, 30, 640, 480, use_depth=True)
    IntelRealSenseCameraConfig(128422271347, 30, 640, 480, rotation=90)
    ```
    """

    name: str | None = None
    serial_number: int | None = None
    fps: int | None = None
    width: int | None = None
    height: int | None = None
    color_mode: str = "rgb"
    channels: int | None = None
    use_depth: bool = False
    force_hardware_reset: bool = True
    rotation: int | None = None
    mock: bool = False

    def __post_init__(self):
        # bool() is stricter than `is None` here, since it also rejects empty strings.
        if bool(self.name) and bool(self.serial_number):
            raise ValueError(
                f"Only one of name or serial_number may be set, but {self.name=} and {self.serial_number=} were provided."
            )

        if self.color_mode not in ["rgb", "bgr"]:
            raise ValueError(
                f"`color_mode` is expected to be 'rgb' or 'bgr', but {self.color_mode} is provided."
            )

        self.channels = 3

        at_least_one_is_not_none = self.fps is not None or self.width is not None or self.height is not None
        at_least_one_is_none = self.fps is None or self.width is None or self.height is None
        if at_least_one_is_not_none and at_least_one_is_none:
            raise ValueError(
                "For `fps`, `width` and `height`, either all of them need to be set, or none of them, "
                f"but {self.fps=}, {self.width=}, {self.height=} were provided."
            )

        if self.rotation not in [-90, None, 90, 180]:
            raise ValueError(f"`rotation` must be in [-90, None, 90, 180] (got {self.rotation})")


@CameraConfig.register_subclass("imageclient")
@dataclass
class ImageClientCameraConfig(CameraConfig):
    head_camera_type: str
    head_camera_id_numbers: list[int]
    head_camera_image_shape: list[int]
    wrist_camera_type: str | None = None
    wrist_camera_id_numbers: list[int] | None = None
    wrist_camera_image_shape: list[int] | None = None
    aspect_ratio_threshold: float = 2.0
    fps: int = 30
    mock: bool = False
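The `__post_init__` hooks above reject invalid `color_mode` and `rotation` values at construction time. A minimal standalone sketch of that same validation pattern (re-implemented here without `draccus` so it runs in isolation; `_CameraConfigSketch` is illustrative and not part of the repository):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class _CameraConfigSketch:
    # Mirrors the OpenCVCameraConfig fields that participate in validation.
    camera_index: int
    fps: int | None = None
    width: int | None = None
    height: int | None = None
    color_mode: str = "rgb"
    rotation: int | None = None

    def __post_init__(self):
        # Same checks as the __post_init__ methods above.
        if self.color_mode not in ("rgb", "bgr"):
            raise ValueError(f"`color_mode` must be 'rgb' or 'bgr', got {self.color_mode}")
        if self.rotation not in (-90, None, 90, 180):
            raise ValueError(f"`rotation` must be in [-90, None, 90, 180] (got {self.rotation})")


ok = _CameraConfigSketch(0, 30, 640, 480)
print(ok.color_mode)  # rgb

# An unsupported rotation is rejected as soon as the dataclass is built.
try:
    _CameraConfigSketch(0, rotation=45)
    rejected = False
except ValueError:
    rejected = True
print("rejected:", rejected)  # rejected: True
```

Doing the checks in `__post_init__` means a config object can never exist in an invalid state, which is why the repository's camera classes can trust their config fields without re-validating them.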


@@ -0,0 +1,312 @@
"""
This file contains utilities for recording frames from cameras. For more info look at `OpenCVCamera` docstring.
"""
import struct
import threading
import time
from collections import deque
from multiprocessing import shared_memory
import cv2
import numpy as np
import zmq
from unitree_deploy.robot_devices.cameras.configs import ImageClientCameraConfig
from unitree_deploy.robot_devices.robots_devices_utils import (
RobotDeviceAlreadyConnectedError,
RobotDeviceNotConnectedError,
)
from unitree_deploy.utils.rich_logger import log_error, log_info, log_success, log_warning
class ImageClient:
def __init__(
self,
tv_img_shape=None,
tv_img_shm_name=None,
wrist_img_shape=None,
wrist_img_shm_name=None,
image_show=False,
server_address="192.168.123.164",
port=5555,
unit_test=False,
):
"""
tv_img_shape: User's expected head camera resolution shape (H, W, C). It should match the output of the image service terminal.
tv_img_shm_name: Shared memory is used to easily transfer images across processes to the Vuer.
wrist_img_shape: User's expected wrist camera resolution shape (H, W, C). It should maintain the same shape as tv_img_shape.
wrist_img_shm_name: Shared memory is used to easily transfer images.
image_show: Whether to display received images in real time.
server_address: The IP address of the machine running the image server script.
port: The port number to bind to. It should be the same as the image server.
unit_test: When both the server and the client enable this flag, it can be used to measure image transfer latency, \
network jitter, frame loss rate and other statistics.
"""
self.running = True
self._image_show = image_show
self._server_address = server_address
self._port = port
self.tv_img_shape = tv_img_shape
self.wrist_img_shape = wrist_img_shape
self.tv_enable_shm = False
if self.tv_img_shape is not None and tv_img_shm_name is not None:
self.tv_image_shm = shared_memory.SharedMemory(name=tv_img_shm_name)
self.tv_img_array = np.ndarray(tv_img_shape, dtype=np.uint8, buffer=self.tv_image_shm.buf)
self.tv_enable_shm = True
self.wrist_enable_shm = False
if self.wrist_img_shape is not None and wrist_img_shm_name is not None:
self.wrist_image_shm = shared_memory.SharedMemory(name=wrist_img_shm_name)
self.wrist_img_array = np.ndarray(wrist_img_shape, dtype=np.uint8, buffer=self.wrist_image_shm.buf)
self.wrist_enable_shm = True
# Performance evaluation parameters
self._enable_performance_eval = unit_test
if self._enable_performance_eval:
self._init_performance_metrics()
def _init_performance_metrics(self):
self._frame_count = 0 # Total frames received
self._last_frame_id = -1 # Last received frame ID
# Real-time FPS calculation using a time window
self._time_window = 1.0 # Time window size (in seconds)
self._frame_times = deque() # Timestamps of frames received within the time window
# Data transmission quality metrics
self._latencies = deque() # Latencies of frames within the time window
self._lost_frames = 0 # Total lost frames
self._total_frames = 0 # Expected total frames based on frame IDs
def _update_performance_metrics(self, timestamp, frame_id, receive_time):
# Update latency
latency = receive_time - timestamp
self._latencies.append(latency)
# Remove latencies outside the time window
while self._latencies and self._frame_times and self._latencies[0] < receive_time - self._time_window:
self._latencies.popleft()
# Update frame times
self._frame_times.append(receive_time)
# Remove timestamps outside the time window
while self._frame_times and self._frame_times[0] < receive_time - self._time_window:
self._frame_times.popleft()
# Update frame counts for lost frame calculation
expected_frame_id = self._last_frame_id + 1 if self._last_frame_id != -1 else frame_id
if frame_id != expected_frame_id:
lost = frame_id - expected_frame_id
if lost < 0:
log_info(f"[Image Client] Received out-of-order frame ID: {frame_id}")
else:
self._lost_frames += lost
log_info(
f"[Image Client] Detected lost frames: {lost}, Expected frame ID: {expected_frame_id}, Received frame ID: {frame_id}"
)
self._last_frame_id = frame_id
self._total_frames = frame_id + 1
self._frame_count += 1
def _print_performance_metrics(self, receive_time):
if self._frame_count % 30 == 0:
# Calculate real-time FPS
real_time_fps = len(self._frame_times) / self._time_window if self._time_window > 0 else 0
# Calculate latency metrics
if self._latencies:
avg_latency = sum(self._latencies) / len(self._latencies)
max_latency = max(self._latencies)
min_latency = min(self._latencies)
jitter = max_latency - min_latency
else:
avg_latency = max_latency = min_latency = jitter = 0
# Calculate lost frame rate
lost_frame_rate = (self._lost_frames / self._total_frames) * 100 if self._total_frames > 0 else 0
log_info(
f"[Image Client] Real-time FPS: {real_time_fps:.2f}, Avg Latency: {avg_latency * 1000:.2f} ms, Max Latency: {max_latency * 1000:.2f} ms, \
Min Latency: {min_latency * 1000:.2f} ms, Jitter: {jitter * 1000:.2f} ms, Lost Frame Rate: {lost_frame_rate:.2f}%"
)
def _close(self):
self._socket.close()
self._context.term()
if self._image_show:
cv2.destroyAllWindows()
log_success("Image client has been closed.")
def receive_process(self):
# Set up ZeroMQ context and socket
self._context = zmq.Context()
self._socket = self._context.socket(zmq.SUB)
self._socket.connect(f"tcp://{self._server_address}:{self._port}")
self._socket.setsockopt_string(zmq.SUBSCRIBE, "")
log_warning("\nImage client has started, waiting to receive data...")
try:
while self.running:
# Receive message
message = self._socket.recv()
receive_time = time.time()
if self._enable_performance_eval:
header_size = struct.calcsize("dI")
try:
# Attempt to extract header and image data
header = message[:header_size]
jpg_bytes = message[header_size:]
timestamp, frame_id = struct.unpack("dI", header)
except struct.error as e:
log_error(f"[Image Client] Error unpacking header: {e}, discarding message.")
continue
else:
# No header, entire message is image data
jpg_bytes = message
# Decode image
np_img = np.frombuffer(jpg_bytes, dtype=np.uint8)
current_image = cv2.imdecode(np_img, cv2.IMREAD_COLOR)
if current_image is None:
log_error("[Image Client] Failed to decode image.")
continue
if self.tv_enable_shm:
np.copyto(self.tv_img_array, np.array(current_image[:, : self.tv_img_shape[1]]))
if self.wrist_enable_shm:
np.copyto(self.wrist_img_array, np.array(current_image[:, -self.wrist_img_shape[1] :]))
if self._image_show:
height, width = current_image.shape[:2]
resized_image = cv2.resize(current_image, (width // 2, height // 2))
cv2.imshow("Image Client Stream", resized_image)
if cv2.waitKey(1) & 0xFF == ord("q"):
self.running = False
if self._enable_performance_eval:
self._update_performance_metrics(timestamp, frame_id, receive_time)
self._print_performance_metrics(receive_time)
except KeyboardInterrupt:
log_error("Image client interrupted by user.")
except Exception as e:
log_error(f"[Image Client] An error occurred while receiving data: {e}")
finally:
self._close()
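In unit-test mode, `receive_process` above assumes each ZMQ message carries a `struct.pack("dI", ...)` header (a float64 send timestamp plus a uint32 frame id) in front of the JPEG payload. A minimal sketch of that framing, with a dummy payload standing in for the JPEG bytes:

```python
import struct
import time

HEADER_FMT = "dI"  # float64 send-timestamp, uint32 frame id
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def frame_message(jpg_bytes: bytes, frame_id: int) -> bytes:
    """Prepend the timing header that the client strips off before decoding."""
    return struct.pack(HEADER_FMT, time.time(), frame_id) + jpg_bytes

def parse_message(message: bytes) -> tuple[float, int, bytes]:
    """Split a message back into (timestamp, frame_id, payload)."""
    timestamp, frame_id = struct.unpack(HEADER_FMT, message[:HEADER_SIZE])
    return timestamp, frame_id, message[HEADER_SIZE:]

msg = frame_message(b"\xff\xd8fake-jpeg", frame_id=7)
ts, fid, payload = parse_message(msg)
assert fid == 7 and payload == b"\xff\xd8fake-jpeg"
```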
class ImageClientCamera:
def __init__(self, config: ImageClientCameraConfig):
self.config = config
self.fps = config.fps
self.head_camera_type = config.head_camera_type
self.head_camera_image_shape = config.head_camera_image_shape
self.head_camera_id_numbers = config.head_camera_id_numbers
self.wrist_camera_type = config.wrist_camera_type
self.wrist_camera_image_shape = config.wrist_camera_image_shape
self.wrist_camera_id_numbers = config.wrist_camera_id_numbers
self.aspect_ratio_threshold = config.aspect_ratio_threshold
self.mock = config.mock
self.is_binocular = (
len(self.head_camera_id_numbers) > 1
or self.head_camera_image_shape[1] / self.head_camera_image_shape[0] > self.aspect_ratio_threshold
)  # a wide single frame or multiple camera ids means a stereo (binocular) head
self.has_wrist_camera = self.wrist_camera_type is not None
self.tv_img_shape = (
(self.head_camera_image_shape[0], self.head_camera_image_shape[1] * 2, 3)
if self.is_binocular
and not (self.head_camera_image_shape[1] / self.head_camera_image_shape[0] > self.aspect_ratio_threshold)
else (self.head_camera_image_shape[0], self.head_camera_image_shape[1], 3)
)
self.tv_img_shm = shared_memory.SharedMemory(create=True, size=np.prod(self.tv_img_shape) * np.uint8().itemsize)
self.tv_img_array = np.ndarray(self.tv_img_shape, dtype=np.uint8, buffer=self.tv_img_shm.buf)
self.wrist_img_shape = None
self.wrist_img_shm = None
if self.has_wrist_camera:
self.wrist_img_shape = (self.wrist_camera_image_shape[0], self.wrist_camera_image_shape[1] * 2, 3)
self.wrist_img_shm = shared_memory.SharedMemory(
create=True, size=np.prod(self.wrist_img_shape) * np.uint8().itemsize
)
self.wrist_img_array = np.ndarray(self.wrist_img_shape, dtype=np.uint8, buffer=self.wrist_img_shm.buf)
self.img_shm_name = self.tv_img_shm.name
self.is_connected = False
def connect(self):
try:
if self.is_connected:
raise RobotDeviceAlreadyConnectedError("ImageClientCamera is already connected.")
self.img_client = ImageClient(
tv_img_shape=self.tv_img_shape,
tv_img_shm_name=self.tv_img_shm.name,
wrist_img_shape=self.wrist_img_shape,
wrist_img_shm_name=self.wrist_img_shm.name if self.wrist_img_shm else None,
)
image_receive_thread = threading.Thread(target=self.img_client.receive_process, daemon=True)
image_receive_thread.start()
self.is_connected = True
except Exception as e:
self.disconnect()
log_error(f"❌ Error in ImageClientCamera.connect: {e}")
def read(self) -> np.ndarray:
raise NotImplementedError("ImageClientCamera only supports async_read(); frames arrive via the background ZMQ thread.")
def async_read(self):
try:
if not self.is_connected:
raise RobotDeviceNotConnectedError(
"ImageClient is not connected. Try running `camera.connect()` first."
)
current_tv_image = self.tv_img_array.copy()
current_wrist_image = self.wrist_img_array.copy() if self.has_wrist_camera else None
colors = {}
if self.is_binocular:
colors["cam_left_high"] = current_tv_image[:, : self.tv_img_shape[1] // 2]
colors["cam_right_high"] = current_tv_image[:, self.tv_img_shape[1] // 2 :]
if self.has_wrist_camera:
colors["cam_left_wrist"] = current_wrist_image[:, : self.wrist_img_shape[1] // 2]
colors["cam_right_wrist"] = current_wrist_image[:, self.wrist_img_shape[1] // 2 :]
else:
colors["cam_high"] = current_tv_image
if self.has_wrist_camera:
colors["cam_left_wrist"] = current_wrist_image[:, : self.wrist_img_shape[1] // 2]
colors["cam_right_wrist"] = current_wrist_image[:, self.wrist_img_shape[1] // 2 :]
return colors
except Exception as e:
self.disconnect()
log_error(f"❌ Error in ImageClientCamera.async_read: {e}")
def disconnect(self):
if not self.is_connected:
raise RobotDeviceNotConnectedError(
"ImageClientCamera is not connected. Try running `camera.connect()` first."
)
# Close the local mapping first, then unlink the underlying shared-memory block.
self.tv_img_shm.close()
self.tv_img_shm.unlink()
if self.has_wrist_camera:
self.wrist_img_shm.close()
self.wrist_img_shm.unlink()
self.is_connected = False
def __del__(self):
if getattr(self, "is_connected", False):
self.disconnect()
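`ImageClientCamera` publishes the side-by-side stereo frame through `multiprocessing.shared_memory`, and `async_read` splits it down the middle into per-eye views. The same pattern in isolation, with a synthetic frame instead of a camera:

```python
import numpy as np
from multiprocessing import shared_memory

h, w = 4, 8  # tiny side-by-side frame: left half + right half
shm = shared_memory.SharedMemory(create=True, size=h * w * 3)
frame = np.ndarray((h, w, 3), dtype=np.uint8, buffer=shm.buf)

# A producer process would np.copyto() into the buffer; simulate it here.
synthetic = np.zeros((h, w, 3), dtype=np.uint8)
synthetic[:, : w // 2] = 10  # "left eye"
synthetic[:, w // 2 :] = 20  # "right eye"
np.copyto(frame, synthetic)

# Consumer side: snapshot the buffer, then split into per-eye halves.
snapshot = frame.copy()
left, right = snapshot[:, : w // 2], snapshot[:, w // 2 :]
assert (left == 10).all() and (right == 20).all()

shm.close()
shm.unlink()
```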


@@ -0,0 +1,504 @@
"""
@misc{cadene2024lerobot,
author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Wolf, Thomas},
title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in PyTorch},
howpublished = {Available at: https://github.com/huggingface/lerobot},
year = {2024},
}
This file contains utilities for recording frames from Intel Realsense cameras.
"""
import argparse
import concurrent.futures
import logging
import math
import shutil
import threading
import time
import traceback
from collections import Counter
from pathlib import Path
from threading import Thread
import cv2
import numpy as np
import pyrealsense2 as rs
from PIL import Image
from unitree_deploy.robot_devices.cameras.configs import IntelRealSenseCameraConfig
from unitree_deploy.robot_devices.robots_devices_utils import (
RobotDeviceAlreadyConnectedError,
RobotDeviceNotConnectedError,
busy_wait,
capture_timestamp_utc,
)
SERIAL_NUMBER_INDEX = 1
def find_cameras(raise_when_empty=True, mock=False) -> list[dict]:
"""
Find the names and the serial numbers of the Intel RealSense cameras
connected to the computer.
"""
cameras = []
for device in rs.context().query_devices():
serial_number = device.get_info(rs.camera_info(SERIAL_NUMBER_INDEX))
name = device.get_info(rs.camera_info.name)
cameras.append(
{
"serial_number": serial_number,
"name": name,
}
)
if raise_when_empty and len(cameras) == 0:
raise OSError(
"Not a single camera was detected. Try re-plugging, or re-installing `librealsense` and its python wrapper `pyrealsense2`, or updating the firmware."
)
return cameras
def save_image(img_array, serial_number, frame_index, images_dir):
try:
img = Image.fromarray(img_array)
path = images_dir / f"camera_{serial_number}_frame_{frame_index:06d}.png"
path.parent.mkdir(parents=True, exist_ok=True)
img.save(str(path), quality=100)
logging.info(f"Saved image: {path}")
except Exception as e:
logging.error(f"Failed to save image for camera {serial_number} frame {frame_index}: {e}")
def save_images_from_cameras(
images_dir: Path,
serial_numbers: list[int] | None = None,
fps=None,
width=None,
height=None,
record_time_s=2,
mock=False,
):
"""
Initializes all the cameras and saves images to the directory. Useful to visually identify the camera
associated to a given serial number.
"""
if serial_numbers is None or len(serial_numbers) == 0:
camera_infos = find_cameras(mock=mock)
serial_numbers = [cam["serial_number"] for cam in camera_infos]
print("Connecting cameras")
cameras = []
for cam_sn in serial_numbers:
print(f"{cam_sn=}")
config = IntelRealSenseCameraConfig(
serial_number=cam_sn, fps=fps, width=width, height=height, mock=mock
)
camera = IntelRealSenseCamera(config)
camera.connect()
print(
f"IntelRealSenseCamera({camera.serial_number}, fps={camera.fps}, width={camera.width}, height={camera.height}, color_mode={camera.color_mode})"
)
cameras.append(camera)
images_dir = Path(images_dir)
if images_dir.exists():
shutil.rmtree(
images_dir,
)
images_dir.mkdir(parents=True, exist_ok=True)
print(f"Saving images to {images_dir}")
frame_index = 0
start_time = time.perf_counter()
try:
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
while True:
now = time.perf_counter()
for camera in cameras:
# If we use async_read when fps is None, the loop will go full speed, and we will end up
# saving the same images from the cameras multiple times until the RAM/disk is full.
image = camera.read() if fps is None else camera.async_read()
if image is None:
print("No Frame")
continue
bgr_converted_image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
executor.submit(
save_image,
bgr_converted_image,
camera.serial_number,
frame_index,
images_dir,
)
if fps is not None:
dt_s = time.perf_counter() - now
busy_wait(1 / fps - dt_s)
if time.perf_counter() - start_time > record_time_s:
break
print(f"Frame: {frame_index:04d}\tLatency (ms): {(time.perf_counter() - now) * 1000:.2f}")
frame_index += 1
finally:
print(f"Images have been saved to {images_dir}")
for camera in cameras:
camera.disconnect()
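`save_images_from_cameras` paces its capture loop with `busy_wait` from `robots_devices_utils`. Assuming it is the usual spin-wait on `time.perf_counter` (as in lerobot), the pacing logic looks like this standalone sketch:

```python
import time

def busy_wait(seconds: float) -> None:
    """Spin until `seconds` have elapsed; more precise than time.sleep for short waits."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

fps = 100
start = time.perf_counter()
for _ in range(5):
    t0 = time.perf_counter()
    # ... grab and save a frame here ...
    busy_wait(1 / fps - (time.perf_counter() - t0))  # sleep off the rest of the frame budget
elapsed = time.perf_counter() - start
assert elapsed >= 5 / fps * 0.9  # the loop cannot run faster than the target rate
```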
class IntelRealSenseCamera:
"""
The IntelRealSenseCamera class is similar to the OpenCVCamera class but adds additional features for Intel RealSense cameras:
- it is instantiated with the serial number of the camera, which won't randomly change the way OpenCVCamera indices can on Linux,
- it can also be instantiated with the camera's name, if that name is unique, using IntelRealSenseCamera.init_from_name(),
- a depth map can be returned.
To find the camera indices of your cameras, you can run our utility script that will save a few frames for each camera:
```bash
python lerobot/common/robot_devices/cameras/intelrealsense.py --images-dir outputs/images_from_intelrealsense_cameras
```
When an IntelRealSenseCamera is instantiated, if no specific config is provided, the default fps, width, height and color_mode
of the given camera will be used.
Example of instantiating with a serial number:
```python
from lerobot.common.robot_devices.cameras.configs import IntelRealSenseCameraConfig
config = IntelRealSenseCameraConfig(serial_number=128422271347)
camera = IntelRealSenseCamera(config)
camera.connect()
color_image = camera.read()
# when done using the camera, consider disconnecting
camera.disconnect()
```
Example of instantiating with a name if it's unique:
```
config = IntelRealSenseCameraConfig(name="Intel RealSense D405")
```
Example of changing default fps, width, height and color_mode:
```python
config = IntelRealSenseCameraConfig(serial_number=128422271347, fps=30, width=1280, height=720)
config = IntelRealSenseCameraConfig(serial_number=128422271347, fps=90, width=640, height=480)
config = IntelRealSenseCameraConfig(serial_number=128422271347, fps=90, width=640, height=480, color_mode="bgr")
# Note: might error out upon `camera.connect()` if these settings are not compatible with the camera
```
Example of returning depth:
```python
config = IntelRealSenseCameraConfig(serial_number=128422271347, use_depth=True)
camera = IntelRealSenseCamera(config)
camera.connect()
color_image, depth_map = camera.read()
```
"""
def __init__(
self,
config: IntelRealSenseCameraConfig,
):
self.config = config
if config.name is not None:
self.serial_number = self.find_serial_number_from_name(config.name)
else:
self.serial_number = config.serial_number
self.fps = config.fps
self.width = config.width
self.height = config.height
self.channels = config.channels
self.color_mode = config.color_mode
self.use_depth = config.use_depth
self.force_hardware_reset = config.force_hardware_reset
self.mock = config.mock
self.camera = None
self.is_connected = False
self.thread = None
self.stop_event = None
self.color_image = None
self.depth_map = None
self.logs = {}
if self.mock:
import tests.mock_cv2 as cv2
else:
import cv2
self.rotation = None
if config.rotation == -90:
self.rotation = cv2.ROTATE_90_COUNTERCLOCKWISE
elif config.rotation == 90:
self.rotation = cv2.ROTATE_90_CLOCKWISE
elif config.rotation == 180:
self.rotation = cv2.ROTATE_180
def find_serial_number_from_name(self, name):
camera_infos = find_cameras()
camera_names = [cam["name"] for cam in camera_infos]
this_name_count = Counter(camera_names)[name]
if this_name_count > 1:
raise ValueError(
f"Multiple {name} cameras have been detected. Please use their serial number to instantiate them."
)
name_to_serial_dict = {cam["name"]: cam["serial_number"] for cam in camera_infos}
cam_sn = name_to_serial_dict[name]
return cam_sn
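The name-to-serial lookup above only works when the model name is unique; `Counter` guards against duplicates. The same logic against a stubbed camera list (no hardware required):

```python
from collections import Counter

def serial_from_name(camera_infos: list[dict], name: str) -> str:
    """Resolve a camera name to its serial number, refusing ambiguous names."""
    if Counter(cam["name"] for cam in camera_infos)[name] > 1:
        raise ValueError(f"Multiple {name} cameras detected; use the serial number instead.")
    return {cam["name"]: cam["serial_number"] for cam in camera_infos}[name]

cams = [
    {"name": "Intel RealSense D405", "serial_number": "128422271347"},
    {"name": "Intel RealSense D435", "serial_number": "233522074606"},
]
assert serial_from_name(cams, "Intel RealSense D405") == "128422271347"
```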
def connect(self):
if self.is_connected:
raise RobotDeviceAlreadyConnectedError(
f"IntelRealSenseCamera({self.serial_number}) is already connected."
)
if self.mock:
import tests.mock_pyrealsense2 as rs
else:
import pyrealsense2 as rs
config = rs.config()
config.enable_device(str(self.serial_number))
if self.fps and self.width and self.height:
config.enable_stream(rs.stream.color, self.width, self.height, rs.format.rgb8, self.fps)
else:
config.enable_stream(rs.stream.color)
if self.use_depth:
if self.fps and self.width and self.height:
config.enable_stream(rs.stream.depth, self.width, self.height, rs.format.z16, self.fps)
else:
config.enable_stream(rs.stream.depth)
self.camera = rs.pipeline()
try:
profile = self.camera.start(config)
is_camera_open = True
except RuntimeError:
is_camera_open = False
traceback.print_exc()
# If the camera doesn't work, display the camera indices corresponding to
# valid cameras.
if not is_camera_open:
# Verify that the provided `serial_number` is valid before printing the traceback
camera_infos = find_cameras()
serial_numbers = [cam["serial_number"] for cam in camera_infos]
if self.serial_number not in serial_numbers:
raise ValueError(
f"`serial_number` is expected to be one of these available cameras {serial_numbers}, but {self.serial_number} is provided instead. "
"To find the serial number you should use, run `python lerobot/common/robot_devices/cameras/intelrealsense.py`."
)
raise OSError(f"Can't access IntelRealSenseCamera({self.serial_number}).")
color_stream = profile.get_stream(rs.stream.color)
color_profile = color_stream.as_video_stream_profile()
actual_fps = color_profile.fps()
actual_width = color_profile.width()
actual_height = color_profile.height()
# Using `math.isclose` since actual fps can be a float (e.g. 29.9 instead of 30)
if self.fps is not None and not math.isclose(self.fps, actual_fps, rel_tol=1e-3):
# Using `OSError` since it's a broad error class that encompasses issues related to device communication
raise OSError(
f"Can't set {self.fps=} for IntelRealSenseCamera({self.serial_number}). Actual value is {actual_fps}."
)
if self.width is not None and self.width != actual_width:
raise OSError(
f"Can't set {self.width=} for IntelRealSenseCamera({self.serial_number}). Actual value is {actual_width}."
)
if self.height is not None and self.height != actual_height:
raise OSError(
f"Can't set {self.height=} for IntelRealSenseCamera({self.serial_number}). Actual value is {actual_height}."
)
self.fps = round(actual_fps)
self.width = round(actual_width)
self.height = round(actual_height)
self.is_connected = True
def read(self, temporary_color: str | None = None) -> np.ndarray | tuple[np.ndarray, np.ndarray]:
"""Read a frame from the camera returned in the format height x width x channels (e.g. 480 x 640 x 3)
of type `np.uint8`, contrarily to the pytorch format which is float channel first.
When `use_depth=True`, returns a tuple `(color_image, depth_map)` with a depth map in the format
height x width (e.g. 480 x 640) of type np.uint16.
Note: Reading a frame is done every `camera.fps` times per second, and it is blocking.
If you are reading data from other sensors, we advise to use `camera.async_read()` which is non blocking version of `camera.read()`.
"""
if not self.is_connected:
raise RobotDeviceNotConnectedError(
f"IntelRealSenseCamera({self.serial_number}) is not connected. Try running `camera.connect()` first."
)
if self.mock:
import tests.mock_cv2 as cv2
else:
import cv2
start_time = time.perf_counter()
frame = self.camera.wait_for_frames(timeout_ms=5000)
color_frame = frame.get_color_frame()
if not color_frame:
raise OSError(f"Can't capture color image from IntelRealSenseCamera({self.serial_number}).")
color_image = np.asanyarray(color_frame.get_data())
requested_color_mode = self.color_mode if temporary_color is None else temporary_color
if requested_color_mode not in ["rgb", "bgr"]:
raise ValueError(
f"Expected color values are 'rgb' or 'bgr', but {requested_color_mode} is provided."
)
# IntelRealSense uses RGB format as default (red, green, blue).
if requested_color_mode == "bgr":
color_image = cv2.cvtColor(color_image, cv2.COLOR_RGB2BGR)
h, w, _ = color_image.shape
if h != self.height or w != self.width:
raise OSError(
f"Can't capture color image with expected height and width ({self.height} x {self.width}). ({h} x {w}) returned instead."
)
if self.rotation is not None:
color_image = cv2.rotate(color_image, self.rotation)
# log the number of seconds it took to read the image
self.logs["delta_timestamp_s"] = time.perf_counter() - start_time
# log the utc time at which the image was received
self.logs["timestamp_utc"] = capture_timestamp_utc()
if self.use_depth:
depth_frame = frame.get_depth_frame()
if not depth_frame:
raise OSError(f"Can't capture depth image from IntelRealSenseCamera({self.serial_number}).")
depth_map = np.asanyarray(depth_frame.get_data())
h, w = depth_map.shape
if h != self.height or w != self.width:
raise OSError(
f"Can't capture depth map with expected height and width ({self.height} x {self.width}). ({h} x {w}) returned instead."
)
if self.rotation is not None:
depth_map = cv2.rotate(depth_map, self.rotation)
return color_image, depth_map
else:
return color_image
def read_loop(self):
while not self.stop_event.is_set():
if self.use_depth:
self.color_image, self.depth_map = self.read()
else:
self.color_image = self.read()
def async_read(self):
"""Access the latest color image"""
if not self.is_connected:
raise RobotDeviceNotConnectedError(
f"IntelRealSenseCamera({self.serial_number}) is not connected. Try running `camera.connect()` first."
)
if self.thread is None:
self.stop_event = threading.Event()
self.thread = Thread(target=self.read_loop, args=())
self.thread.daemon = True
self.thread.start()
num_tries = 0
while self.color_image is None:
num_tries += 1
time.sleep(1 / self.fps)
if num_tries > self.fps and (self.thread.ident is None or not self.thread.is_alive()):
raise Exception(
"The thread responsible for `self.async_read()` took too much time to start. There might be an issue. Verify that `self.thread.start()` has been called."
)
if self.use_depth:
return self.color_image, self.depth_map
else:
return self.color_image
def disconnect(self):
if not self.is_connected:
raise RobotDeviceNotConnectedError(
f"IntelRealSenseCamera({self.serial_number}) is not connected. Try running `camera.connect()` first."
)
if self.thread is not None and self.thread.is_alive():
# wait for the thread to finish
self.stop_event.set()
self.thread.join()
self.thread = None
self.stop_event = None
self.camera.stop()
self.camera = None
self.is_connected = False
def __del__(self):
if getattr(self, "is_connected", False):
self.disconnect()
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Save a few frames using `IntelRealSenseCamera` for all cameras connected to the computer, or a selected subset."
)
parser.add_argument(
"--serial-numbers",
type=int,
nargs="*",
default=None,
help="List of serial numbers used to instantiate the `IntelRealSenseCamera`. If not provided, find and use all available camera indices.",
)
parser.add_argument(
"--fps",
type=int,
default=30,
help="Set the number of frames recorded per seconds for all cameras. If not provided, use the default fps of each camera.",
)
parser.add_argument(
"--width",
type=int,
default=640,
help="Set the width for all cameras. If not provided, use the default width of each camera.",
)
parser.add_argument(
"--height",
type=int,
default=480,
help="Set the height for all cameras. If not provided, use the default height of each camera.",
)
parser.add_argument(
"--images-dir",
type=Path,
default="outputs/images_from_intelrealsense_cameras",
help="Set directory to save a few frames for each camera.",
)
parser.add_argument(
"--record-time-s",
type=float,
default=2.0,
help="Set the number of seconds used to record the frames. By default, 2 seconds.",
)
args = parser.parse_args()
save_images_from_cameras(**vars(args))


@@ -0,0 +1,483 @@
"""
@misc{cadene2024lerobot,
author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Wolf, Thomas},
title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in PyTorch},
howpublished = {Available at: https://github.com/huggingface/lerobot},
year = {2024},
}
This file contains utilities for recording frames from cameras. For more info look at `OpenCVCamera` docstring.
"""
import argparse
import concurrent.futures
import math
import platform
import shutil
import threading
import time
from pathlib import Path
from threading import Thread
import cv2
import numpy as np
from PIL import Image
from unitree_deploy.robot_devices.cameras.configs import OpenCVCameraConfig
from unitree_deploy.robot_devices.robots_devices_utils import (
RobotDeviceAlreadyConnectedError,
RobotDeviceNotConnectedError,
busy_wait,
capture_timestamp_utc,
)
# The maximum opencv device index depends on your operating system. For instance,
# if you have 3 cameras, they should be associated to index 0, 1, and 2. This is the case
# on MacOS. However, on Ubuntu, the indices are different like 6, 16, 23.
# When you change the USB port or reboot the computer, the operating system might
# treat the same cameras as new devices. Thus we select a higher bound to search indices.
MAX_OPENCV_INDEX = 60
def find_cameras(raise_when_empty=False, max_index_search_range=MAX_OPENCV_INDEX, mock=False) -> list[dict]:
cameras = []
if platform.system() == "Linux":
print("Linux detected. Finding available camera indices through scanning '/dev/video*' ports")
possible_ports = [str(port) for port in Path("/dev").glob("video*")]
ports = _find_cameras(possible_ports, mock=mock)
for port in ports:
cameras.append(
{
"port": port,
"index": int(port.removeprefix("/dev/video")),
}
)
else:
print(
"Mac or Windows detected. Finding available camera indices through "
f"scanning all indices from 0 to {MAX_OPENCV_INDEX}"
)
possible_indices = range(max_index_search_range)
indices = _find_cameras(possible_indices, mock=mock)
for index in indices:
cameras.append(
{
"port": None,
"index": index,
}
)
return cameras
def _find_cameras(
possible_camera_ids: list[int | str], raise_when_empty=False, mock=False
) -> list[int | str]:
camera_ids = []
for camera_idx in possible_camera_ids:
camera = cv2.VideoCapture(camera_idx)
is_open = camera.isOpened()
camera.release()
if is_open:
print(f"Camera found at index {camera_idx}")
camera_ids.append(camera_idx)
if raise_when_empty and len(camera_ids) == 0:
raise OSError(
"Not a single camera was detected. Try re-plugging, or re-installing `opencv-python`, "
"or your camera driver, or make sure your camera is compatible with OpenCV."
)
return camera_ids
def is_valid_unix_path(path: str) -> bool:
"""Note: if 'path' points to a symlink, this will return True only if the target exists"""
p = Path(path)
return p.is_absolute() and p.exists()
def get_camera_index_from_unix_port(port: Path) -> int:
return int(str(port.resolve()).removeprefix("/dev/video"))
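On Linux the OpenCV index is recovered from the resolved `/dev/videoN` path with `str.removeprefix`; `Path.resolve()` follows symlinks (e.g. stable `/dev/v4l/by-id/...` names) first. The string side of that, in isolation and with basic validation added:

```python
def index_from_video_port(port: str) -> int:
    """Extract N from a /dev/videoN device path (assumes the path is already resolved)."""
    suffix = port.removeprefix("/dev/video")
    if suffix == port or not suffix.isdigit():
        raise ValueError(f"Not a /dev/videoN path: {port}")
    return int(suffix)

assert index_from_video_port("/dev/video6") == 6
```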
def save_image(img_array, camera_index, frame_index, images_dir):
img = Image.fromarray(img_array)
path = images_dir / f"camera_{camera_index:02d}_frame_{frame_index:06d}.png"
path.parent.mkdir(parents=True, exist_ok=True)
img.save(str(path), quality=100)
def save_images_from_cameras(
images_dir: Path,
camera_ids: list | None = None,
fps=None,
width=None,
height=None,
record_time_s=2,
mock=False,
):
"""
Initializes all the cameras and saves images to the directory. Useful to visually identify the camera
associated to a given camera index.
"""
if camera_ids is None or len(camera_ids) == 0:
camera_infos = find_cameras(mock=mock)
camera_ids = [cam["index"] for cam in camera_infos]
print("Connecting cameras")
cameras = []
for cam_idx in camera_ids:
config = OpenCVCameraConfig(camera_index=cam_idx, fps=fps, width=width, height=height, mock=mock)
camera = OpenCVCamera(config)
camera.connect()
print(
f"OpenCVCamera({camera.camera_index}, fps={camera.fps}, width={camera.width}, "
f"height={camera.height}, color_mode={camera.color_mode})"
)
cameras.append(camera)
images_dir = Path(images_dir)
if images_dir.exists():
shutil.rmtree(
images_dir,
)
images_dir.mkdir(parents=True, exist_ok=True)
print(f"Saving images to {images_dir}")
frame_index = 0
start_time = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
while True:
now = time.perf_counter()
for camera in cameras:
# If we use async_read when fps is None, the loop will go full speed, and we will end up
# saving the same images from the cameras multiple times until the RAM/disk is full.
image = camera.read() if fps is None else camera.async_read()
executor.submit(
save_image,
image,
camera.camera_index,
frame_index,
images_dir,
)
if fps is not None:
dt_s = time.perf_counter() - now
busy_wait(1 / fps - dt_s)
print(f"Frame: {frame_index:04d}\tLatency (ms): {(time.perf_counter() - now) * 1000:.2f}")
if time.perf_counter() - start_time > record_time_s:
break
frame_index += 1
print(f"Images have been saved to {images_dir}")
class OpenCVCamera:
"""
The OpenCVCamera class allows you to efficiently record images from cameras. It relies on OpenCV to communicate
with the cameras. Most cameras are compatible. For more info, see the [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
An OpenCVCamera instance requires a camera index (e.g. `OpenCVCamera(camera_index=0)`). When you only have one camera,
like a laptop webcam, the camera index is expected to be 0, but it might be very different, and the index
might change if you reboot your computer or re-plug your camera. This behavior depends on your operating system.
To find the camera indices of your cameras, you can run our utility script that will save a few frames for each camera:
```bash
python lerobot/common/robot_devices/cameras/opencv.py --images-dir outputs/images_from_opencv_cameras
```
When an OpenCVCamera is instantiated, if no specific config is provided, the default fps, width, height and color_mode
of the given camera will be used.
Example of usage:
```python
from lerobot.common.robot_devices.cameras.configs import OpenCVCameraConfig
config = OpenCVCameraConfig(camera_index=0)
camera = OpenCVCamera(config)
camera.connect()
color_image = camera.read()
# when done using the camera, consider disconnecting
camera.disconnect()
```
Example of changing default fps, width, height and color_mode:
```python
config = OpenCVCameraConfig(camera_index=0, fps=30, width=1280, height=720)
config = OpenCVCameraConfig(camera_index=0, fps=90, width=640, height=480)
config = OpenCVCameraConfig(camera_index=0, fps=90, width=640, height=480, color_mode="bgr")
# Note: might raise an error on `camera.connect()` if these settings are not compatible with the camera
```
"""
def __init__(self, config: OpenCVCameraConfig):
self.config = config
self.camera_index = config.camera_index
self.port = None
# Linux uses ports for connecting to cameras
if platform.system() == "Linux":
if isinstance(self.camera_index, int):
self.port = Path(f"/dev/video{self.camera_index}")
elif isinstance(self.camera_index, str) and is_valid_unix_path(self.camera_index):
self.port = Path(self.camera_index)
# Retrieve the camera index from a potentially symlinked path
self.camera_index = get_camera_index_from_unix_port(self.port)
else:
raise ValueError(f"Please check the provided camera_index: {self.camera_index}")
self.fps = config.fps
self.width = config.width
self.height = config.height
self.channels = config.channels
self.color_mode = config.color_mode
self.mock = config.mock
self.camera = None
self.is_connected = False
self.thread = None
self.stop_event = None
self.color_image = None
self.logs = {}
if self.mock:
import tests.mock_cv2 as cv2
else:
import cv2
self.rotation = None
if config.rotation == -90:
self.rotation = cv2.ROTATE_90_COUNTERCLOCKWISE
elif config.rotation == 90:
self.rotation = cv2.ROTATE_90_CLOCKWISE
elif config.rotation == 180:
self.rotation = cv2.ROTATE_180
def connect(self):
if self.is_connected:
raise RobotDeviceAlreadyConnectedError(f"OpenCVCamera({self.camera_index}) is already connected.")
if self.mock:
import tests.mock_cv2 as cv2
else:
import cv2
# Use 1 thread to avoid blocking the main thread. Especially useful during data collection
# when other threads are used to save the images.
cv2.setNumThreads(1)
camera_idx = f"/dev/video{self.camera_index}" if platform.system() == "Linux" else self.camera_index
# First create a temporary camera trying to access `camera_index`,
# and verify it is a valid camera by calling `isOpened`.
tmp_camera = cv2.VideoCapture(camera_idx)
is_camera_open = tmp_camera.isOpened()
# Release camera to make it accessible for `find_camera_indices`
tmp_camera.release()
del tmp_camera
# If the camera doesn't work, display the camera indices corresponding to
# valid cameras.
if not is_camera_open:
# Verify that the provided `camera_index` is valid before printing the traceback
cameras_info = find_cameras()
available_cam_ids = [cam["index"] for cam in cameras_info]
if self.camera_index not in available_cam_ids:
raise ValueError(
f"`camera_index` is expected to be one of these available cameras {available_cam_ids}, but {self.camera_index} is provided instead. "
"To find the camera index you should use, run `python lerobot/common/robot_devices/cameras/opencv.py`."
)
raise OSError(f"Can't access OpenCVCamera({camera_idx}).")
# Secondly, create the camera that will be used downstream.
# Note: For some unknown reason, calling `isOpened` blocks the camera which then
# needs to be re-created.
self.camera = cv2.VideoCapture(camera_idx)
if self.fps is not None:
self.camera.set(cv2.CAP_PROP_FPS, self.fps)
if self.width is not None:
self.camera.set(cv2.CAP_PROP_FRAME_WIDTH, self.width)
if self.height is not None:
self.camera.set(cv2.CAP_PROP_FRAME_HEIGHT, self.height)
actual_fps = self.camera.get(cv2.CAP_PROP_FPS)
actual_width = self.camera.get(cv2.CAP_PROP_FRAME_WIDTH)
actual_height = self.camera.get(cv2.CAP_PROP_FRAME_HEIGHT)
# Using `math.isclose` since actual fps can be a float (e.g. 29.9 instead of 30)
if self.fps is not None and not math.isclose(self.fps, actual_fps, rel_tol=1e-3):
# Using `OSError` since it's a broad error type that encompasses issues related to device communication
raise OSError(
f"Can't set {self.fps=} for OpenCVCamera({self.camera_index}). Actual value is {actual_fps}."
)
if self.width is not None and not math.isclose(self.width, actual_width, rel_tol=1e-3):
raise OSError(
f"Can't set {self.width=} for OpenCVCamera({self.camera_index}). Actual value is {actual_width}."
)
if self.height is not None and not math.isclose(self.height, actual_height, rel_tol=1e-3):
raise OSError(
f"Can't set {self.height=} for OpenCVCamera({self.camera_index}). Actual value is {actual_height}."
)
self.fps = round(actual_fps)
self.width = round(actual_width)
self.height = round(actual_height)
self.is_connected = True
def read(self, temporary_color_mode: str | None = None) -> np.ndarray:
"""Read a frame from the camera, returned in (height, width, channels) format
(e.g. 480 x 640 x 3), contrary to the PyTorch format, which is channel first.
Note: Frames are read at `camera.fps` frames per second, and reading is blocking.
If you are also reading data from other sensors, we advise using `camera.async_read()`, the non-blocking version of `camera.read()`.
"""
if not self.is_connected:
raise RobotDeviceNotConnectedError(
f"OpenCVCamera({self.camera_index}) is not connected. Try running `camera.connect()` first."
)
start_time = time.perf_counter()
ret, color_image = self.camera.read()
if not ret:
raise OSError(f"Can't capture color image from camera {self.camera_index}.")
requested_color_mode = self.color_mode if temporary_color_mode is None else temporary_color_mode
if requested_color_mode not in ["rgb", "bgr"]:
raise ValueError(
f"Expected color values are 'rgb' or 'bgr', but {requested_color_mode} is provided."
)
# OpenCV uses the BGR format (blue, green, red) by default for all operations, including displaying images.
# However, deep learning frameworks such as LeRobot use the RGB format by default to train neural networks,
# so we convert the image color from BGR to RGB.
if requested_color_mode == "rgb":
if self.mock:
import tests.mock_cv2 as cv2
else:
import cv2
color_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2RGB)
h, w, _ = color_image.shape
if h != self.height or w != self.width:
raise OSError(
f"Can't capture color image with expected height and width ({self.height} x {self.width}). ({h} x {w}) returned instead."
)
if self.rotation is not None:
color_image = cv2.rotate(color_image, self.rotation)
# log the number of seconds it took to read the image
self.logs["delta_timestamp_s"] = time.perf_counter() - start_time
# log the utc time at which the image was received
self.logs["timestamp_utc"] = capture_timestamp_utc()
self.color_image = color_image
return color_image
def read_loop(self):
while not self.stop_event.is_set():
try:
self.color_image = self.read()
except Exception as e:
print(f"Error reading in thread: {e}")
def async_read(self):
if not self.is_connected:
raise RobotDeviceNotConnectedError(
f"OpenCVCamera({self.camera_index}) is not connected. Try running `camera.connect()` first."
)
if self.thread is None:
self.stop_event = threading.Event()
self.thread = Thread(target=self.read_loop, args=())
self.thread.daemon = True
self.thread.start()
num_tries = 0
while True:
if self.color_image is not None:
return self.color_image
time.sleep(1 / self.fps)
num_tries += 1
if num_tries > self.fps * 2:
raise TimeoutError("Timed out waiting for async_read() to start.")
def disconnect(self):
if not self.is_connected:
raise RobotDeviceNotConnectedError(
f"OpenCVCamera({self.camera_index}) is not connected. Try running `camera.connect()` first."
)
if self.thread is not None:
self.stop_event.set()
self.thread.join() # wait for the thread to finish
self.thread = None
self.stop_event = None
self.camera.release()
self.camera = None
self.is_connected = False
def __del__(self):
if getattr(self, "is_connected", False):
self.disconnect()
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Save a few frames using `OpenCVCamera` for all cameras connected to the computer, or a selected subset."
)
parser.add_argument(
"--camera-ids",
type=int,
nargs="*",
default=None,
help="List of camera indices used to instantiate the `OpenCVCamera`. If not provided, find and use all available camera indices.",
)
parser.add_argument(
"--fps",
type=int,
default=30,
help="Set the number of frames recorded per second for all cameras. If not provided, use the default fps of each camera.",
)
parser.add_argument(
"--width",
type=int,
default=640,
help="Set the width for all cameras. If not provided, use the default width of each camera.",
)
parser.add_argument(
"--height",
type=int,
default=480,
help="Set the height for all cameras. If not provided, use the default height of each camera.",
)
parser.add_argument(
"--images-dir",
type=Path,
default="outputs/images_from_opencv_cameras",
help="Set directory to save a few frames for each camera.",
)
parser.add_argument(
"--record-time-s",
type=float,
default=4.0,
help="Set the number of seconds used to record the frames. By default, 4 seconds.",
)
args = parser.parse_args()
save_images_from_cameras(**vars(args))


@@ -0,0 +1,74 @@
"""
@misc{cadene2024lerobot,
author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Wolf, Thomas},
title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in PyTorch},
howpublished = {Available at: https://github.com/huggingface/lerobot},
year = {2024},
}
"""
from typing import Protocol
import numpy as np
from unitree_deploy.robot_devices.cameras.configs import (
CameraConfig,
ImageClientCameraConfig,
IntelRealSenseCameraConfig,
OpenCVCameraConfig,
)
# Defines a camera type
class Camera(Protocol):
def connect(self): ...
def read(self, temporary_color_mode: str | None = None) -> np.ndarray: ...
def async_read(self) -> np.ndarray: ...
def disconnect(self): ...
def make_cameras_from_configs(camera_configs: dict[str, CameraConfig]) -> dict[str, Camera]:
cameras = {}
for key, cfg in camera_configs.items():
if cfg.type == "opencv":
from unitree_deploy.robot_devices.cameras.opencv import OpenCVCamera
cameras[key] = OpenCVCamera(cfg)
elif cfg.type == "intelrealsense":
from unitree_deploy.robot_devices.cameras.intelrealsense import IntelRealSenseCamera
cameras[key] = IntelRealSenseCamera(cfg)
elif cfg.type == "imageclient":
from unitree_deploy.robot_devices.cameras.imageclient import ImageClientCamera
cameras[key] = ImageClientCamera(cfg)
else:
raise ValueError(f"The camera type '{cfg.type}' is not valid.")
return cameras
def make_camera(camera_type, **kwargs) -> Camera:
if camera_type == "opencv":
from unitree_deploy.robot_devices.cameras.opencv import OpenCVCamera
config = OpenCVCameraConfig(**kwargs)
return OpenCVCamera(config)
elif camera_type == "intelrealsense":
from unitree_deploy.robot_devices.cameras.intelrealsense import IntelRealSenseCamera
config = IntelRealSenseCameraConfig(**kwargs)
return IntelRealSenseCamera(config)
elif camera_type == "imageclient":
from unitree_deploy.robot_devices.cameras.imageclient import ImageClientCamera
config = ImageClientCameraConfig(**kwargs)
return ImageClientCamera(config)
else:
raise ValueError(f"The camera type '{camera_type}' is not valid.")
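Both factory helpers above follow the same dispatch-on-`type` pattern with lazy backend imports. A self-contained sketch of that pattern, with a hypothetical `DummyCamera` standing in for the real backends:

```python
from dataclasses import dataclass
from typing import Protocol

class Camera(Protocol):
    def connect(self) -> None: ...
    def disconnect(self) -> None: ...

@dataclass
class DummyCameraConfig:
    camera_index: int = 0

class DummyCamera:
    def __init__(self, config: DummyCameraConfig):
        self.config = config
        self.is_connected = False
    def connect(self) -> None:
        self.is_connected = True
    def disconnect(self) -> None:
        self.is_connected = False

def make_camera(camera_type: str, **kwargs) -> Camera:
    # Dispatch on the type string; in the real code each branch imports
    # its backend lazily so optional dependencies are only loaded on use.
    if camera_type == "dummy":
        return DummyCamera(DummyCameraConfig(**kwargs))
    raise ValueError(f"The camera type '{camera_type}' is not valid.")
```

The lazy-import-per-branch design keeps `pyrealsense2` and the image-client dependencies optional.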


@@ -0,0 +1,29 @@
import abc
from dataclasses import dataclass
import draccus
import numpy as np
@dataclass
class EndEffectorConfig(draccus.ChoiceRegistry, abc.ABC):
@property
def type(self) -> str:
return self.get_choice_name(self.__class__)
@EndEffectorConfig.register_subclass("dex_1")
@dataclass
class Dex1_GripperConfig(EndEffectorConfig):
motors: dict[str, tuple[int, str]]
unit_test: bool = False
init_pose: list | None = None
control_dt: float = 1 / 200
mock: bool = False
max_pos_speed: float = 180 * (np.pi / 180) * 2
topic_gripper_command: str = "rt/unitree_actuator/cmd"
topic_gripper_state: str = "rt/unitree_actuator/state"
def __post_init__(self):
if self.control_dt < 0.002:
raise ValueError(f"`control_dt` must be >= 1/500 s, i.e. >= 0.002 (got {self.control_dt})")
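The `__post_init__` guard above rejects control periods faster than 500 Hz. A plain-dataclass sketch of the same validation, without the `draccus` choice-registry machinery:

```python
from dataclasses import dataclass

@dataclass
class GripperConfigSketch:
    control_dt: float = 1 / 200  # 200 Hz default control period

    def __post_init__(self):
        # Reject control periods faster than 500 Hz (dt < 2 ms).
        if self.control_dt < 0.002:
            raise ValueError(f"`control_dt` must be >= 1/500 s (got {self.control_dt})")
```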


@@ -0,0 +1,227 @@
# for gripper
import threading
import time
import numpy as np
from unitree_sdk2py.core.channel import ChannelFactoryInitialize, ChannelPublisher, ChannelSubscriber # dds
from unitree_sdk2py.idl.default import unitree_go_msg_dds__MotorCmd_
from unitree_sdk2py.idl.unitree_go.msg.dds_ import MotorCmds_, MotorStates_ # idl
from unitree_deploy.robot_devices.arm.arm_indexs import Gripper_Sigle_JointIndex
from unitree_deploy.robot_devices.endeffector.configs import Dex1_GripperConfig
from unitree_deploy.robot_devices.robots_devices_utils import DataBuffer, MotorState, Robot_Num_Motors
from unitree_deploy.utils.joint_trajcetory_inter import JointTrajectoryInterpolator
from unitree_deploy.utils.rich_logger import log_error, log_info, log_warning
class Gripper_LowState:
def __init__(self):
self.motor_state = [MotorState() for _ in range(Robot_Num_Motors.Dex1_Gripper_Num_Motors)]
class Dex1_Gripper_Controller:
def __init__(self, config: Dex1_GripperConfig):
log_info("Initialize Dex1_Gripper_Controller...")
self.init_pose = np.array(config.init_pose)
self.motors = config.motors
self.mock = config.mock
self.control_dt = config.control_dt
self.unit_test = config.unit_test
self.max_pos_speed = config.max_pos_speed
self.topic_gripper_command = config.topic_gripper_command
self.topic_gripper_state = config.topic_gripper_state
self.q_target = np.zeros(1)
self.tauff_target = np.zeros(1)
self.time_target = time.monotonic()
self.gripper_cmd = "schedule_waypoint"
self.lowstate_buffer = DataBuffer()
self.ctrl_lock = threading.Lock()
self.MAX_DIST = 5.45
self.MIN_DIST = 0.0
self.DELTA_GRIPPER_CMD = 0.18
self.is_connected = False
@property
def motor_names(self) -> list[str]:
return list(self.motors.keys())
@property
def motor_models(self) -> list[str]:
return [model for _, model in self.motors.values()]
@property
def motor_indices(self) -> list[int]:
return [idx for idx, _ in self.motors.values()]
def connect(self):
try:
if self.unit_test:
ChannelFactoryInitialize(0)
dq = 0.0
tau = 0.0
kp = 10.0
kd = 0.05
# initialize gripper cmd msg
self.gripper_msg = MotorCmds_()
self.gripper_msg.cmds = [unitree_go_msg_dds__MotorCmd_() for _ in range(len(Gripper_Sigle_JointIndex))]
for id in Gripper_Sigle_JointIndex:
self.gripper_msg.cmds[id].dq = dq
self.gripper_msg.cmds[id].tau_est = tau
self.gripper_msg.cmds[id].kp = kp
self.gripper_msg.cmds[id].kd = kd
# initialize handcmd publisher and handstate subscriber
self.GripperCmb_publisher = ChannelPublisher(self.topic_gripper_command, MotorCmds_)
self.GripperCmb_publisher.Init()
self.GripperState_subscriber = ChannelSubscriber(self.topic_gripper_state, MotorStates_)
self.GripperState_subscriber.Init()
# initialize subscribe thread
self.subscribe_state_thread = threading.Thread(target=self._subscribe_gripper_motor_state)
self.subscribe_state_thread.daemon = True
self.subscribe_state_thread.start()
while not self.lowstate_buffer.get_data():
time.sleep(0.01)
log_warning("[Dex1_Gripper_Controller] Waiting for DDS state messages...")
self.gripper_control_thread = threading.Thread(target=self._ctrl_gripper_motor)
self.gripper_control_thread.daemon = True
self.gripper_control_thread.start()
self.is_connected = True
except Exception as e:
self.disconnect()
log_error(f"❌ Error in Dex1_Gripper_Controller.connect: {e}")
def _subscribe_gripper_motor_state(self):
try:
while True:
gripper_msg = self.GripperState_subscriber.Read()
if gripper_msg is not None:
lowstate = Gripper_LowState()
for idx, id in enumerate(Gripper_Sigle_JointIndex):
lowstate.motor_state[idx].q = gripper_msg.states[id].q
lowstate.motor_state[idx].dq = gripper_msg.states[id].dq
self.lowstate_buffer.set_data(lowstate)
time.sleep(0.002)
except Exception as e:
self.disconnect()
log_error(f"❌ Error in Dex1_Gripper_Controller._subscribe_gripper_motor_state: {e}")
def _update_gripper(self, gripper_q_target: np.ndarray):
"""Set the target q for the gripper motors, clamped to limit per-step motion."""
current_qs = np.array([self.lowstate_buffer.get_data().motor_state[id].q for id in Gripper_Sigle_JointIndex])
clamped_qs = np.clip(gripper_q_target, current_qs - self.DELTA_GRIPPER_CMD, current_qs + self.DELTA_GRIPPER_CMD)
for idx, id in enumerate(Gripper_Sigle_JointIndex):
self.gripper_msg.cmds[id].q = clamped_qs[idx]
self.GripperCmb_publisher.Write(self.gripper_msg)
def _drive_to_waypoint(self, target_pose: np.ndarray, t_insert_time: float):
curr_time = time.monotonic() + self.control_dt
t_insert = curr_time + t_insert_time
self.pose_interp = self.pose_interp.drive_to_waypoint(
pose=target_pose,
time=t_insert,
curr_time=curr_time,
max_pos_speed=self.max_pos_speed,
)
while time.monotonic() < t_insert:
self._update_gripper(self.pose_interp(time.monotonic()))
time.sleep(self.control_dt)
def _ctrl_gripper_motor(self):
try:
self.pose_interp = JointTrajectoryInterpolator(
times=[time.monotonic()],
joint_positions=[self.read_current_endeffector_q()],
)
gripper_q_target = self.read_current_endeffector_q()
gripper_tauff_target = self.tauff_target
gripper_time_target = time.monotonic()
gripper_cmd = "schedule_waypoint"
last_waypoint_time = time.monotonic()
while True:
start_time = time.perf_counter()
t_now = time.monotonic()
with self.ctrl_lock:
gripper_q_target = self.q_target
gripper_tauff_target = self.tauff_target # noqa: F841
gripper_time_target = self.time_target
gripper_cmd = self.gripper_cmd
if gripper_cmd is None:
self._update_gripper(gripper_q_target)
# time.sleep(max(0, (self.control_dt - (time.perf_counter() - start_time))))
elif gripper_cmd == "drive_to_waypoint":
self._drive_to_waypoint(target_pose=gripper_q_target, t_insert_time=0.8)
elif gripper_cmd == "schedule_waypoint":
target_time = time.monotonic() - time.perf_counter() + gripper_time_target
curr_time = t_now + self.control_dt
target_time = max(target_time, curr_time + self.control_dt)
self.pose_interp = self.pose_interp.schedule_waypoint(
pose=gripper_q_target,
time=target_time,
max_pos_speed=self.max_pos_speed,
curr_time=curr_time,
last_waypoint_time=last_waypoint_time,
)
last_waypoint_time = target_time
self._update_gripper(self.pose_interp(t_now))
time.sleep(max(0, (self.control_dt - (time.perf_counter() - start_time))))
except Exception as e:
self.disconnect()
log_error(f"❌ Error in Dex1_Gripper_Controller._ctrl_gripper_motor: {e}")
def read_current_endeffector_q(self) -> np.ndarray:
# Motor inversion left is 1 and right is 0 TODO(gh): Correct this
motor_states = np.array([self.lowstate_buffer.get_data().motor_state[id].q for id in Gripper_Sigle_JointIndex])
return np.array(motor_states)
def read_current_endeffector_dq(self) -> np.ndarray:
# Motor inversion left is 1 and right is 0 TODO(gh): Correct this
motor_states_dq = np.array(
[self.lowstate_buffer.get_data().motor_state[id].dq for id in Gripper_Sigle_JointIndex]
)
return np.array(motor_states_dq)
def write_endeffector(
self,
q_target: list[float] | np.ndarray,
tauff_target: list[float] | np.ndarray = None,
time_target: float | None = None,
cmd_target: str | None = None,
):
with self.ctrl_lock:
self.q_target = q_target
self.tauff_target = tauff_target
self.time_target = time_target
self.gripper_cmd = cmd_target
def go_start(self):
self._drive_to_waypoint(target_pose=self.init_pose, t_insert_time=0.8)
def go_home(self):
self._drive_to_waypoint(target_pose=self.init_pose, t_insert_time=0.8)
def disconnect(self):
self.is_connected = False
# self.go_home()
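`_update_gripper` limits how far each command may jump from the current motor position by clipping the target into a `DELTA_GRIPPER_CMD` band around the measured q. The clamping step in isolation:

```python
import numpy as np

DELTA_GRIPPER_CMD = 0.18  # max allowed change per command, matching the controller constant

def clamp_gripper_cmd(q_target: np.ndarray, current_q: np.ndarray,
                      delta: float = DELTA_GRIPPER_CMD) -> np.ndarray:
    """Limit each joint target to within +/- delta of the current position,
    so a large commanded jump is turned into a bounded incremental step."""
    return np.clip(q_target, current_q - delta, current_q + delta)
```

Combined with the 200 Hz control loop, this caps the commanded velocity regardless of how aggressive the incoming waypoints are.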


@@ -0,0 +1,50 @@
from typing import Protocol
from unitree_deploy.robot_devices.endeffector.configs import (
Dex1_GripperConfig,
EndEffectorConfig,
)
class EndEffector(Protocol):
def connect(self): ...
def disconnect(self): ...
def motor_names(self): ...
def read_current_endeffector_q(self): ...
def read_current_endeffector_dq(self): ...
def write_endeffector(self): ...
def retarget_to_endeffector(self): ...
def endeffector_ik(self): ...
def go_start(self): ...
def go_home(self): ...
def make_endeffector_motors_buses_from_configs(
endeffector_configs: dict[str, EndEffectorConfig],
) -> dict[str, EndEffector]:
endeffector_motors_buses = {}
for key, cfg in endeffector_configs.items():
if cfg.type == "dex_1":
from unitree_deploy.robot_devices.endeffector.gripper import Dex1_Gripper_Controller
endeffector_motors_buses[key] = Dex1_Gripper_Controller(cfg)
else:
raise ValueError(f"The end-effector type '{cfg.type}' is not valid.")
return endeffector_motors_buses
def make_endeffector_motors_bus(endeffector_type: str, **kwargs) -> EndEffector:
if endeffector_type == "dex_1":
from unitree_deploy.robot_devices.endeffector.gripper import Dex1_Gripper_Controller
config = Dex1_GripperConfig(**kwargs)
return Dex1_Gripper_Controller(config)
else:
raise ValueError(f"The end-effector type '{endeffector_type}' is not valid.")


@@ -0,0 +1,87 @@
import platform
import threading
import time
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import IntEnum
from typing import Optional
class Robot_Num_Motors(IntEnum):
Z1_6_Num_Motors = 6
Z1_7_Num_Motors = 7
Z1_12_Num_Motors = 12
Dex1_Gripper_Num_Motors = 2
G1_29_Num_Motors = 35
@dataclass
class MotorState:
q: Optional[float] = None
dq: Optional[float] = None
tau: Optional[float] = None
class DataBuffer:
def __init__(self) -> None:
self.data = None
self.lock = threading.Lock()
def get_data(self):
with self.lock:
return self.data
def set_data(self, data) -> None:
with self.lock:
self.data = data
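`DataBuffer` is a one-slot, lock-protected mailbox shared between the DDS subscriber thread and the control thread: the subscriber overwrites the slot, the controller reads the latest value. A quick standalone check of that handoff:

```python
import threading

class DataBuffer:
    """Single-slot buffer; the lock makes get/set atomic across threads."""
    def __init__(self) -> None:
        self.data = None
        self.lock = threading.Lock()

    def get_data(self):
        with self.lock:
            return self.data

    def set_data(self, data) -> None:
        with self.lock:
            self.data = data
```

Because only the newest value is kept, a slow consumer never backs up the producer; stale states are simply overwritten.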
class RobotDeviceNotConnectedError(Exception):
"""Exception raised when the robot device is not connected."""
def __init__(
self, message="This robot device is not connected. Try calling `robot_device.connect()` first."
):
self.message = message
super().__init__(self.message)
class RobotDeviceAlreadyConnectedError(Exception):
"""Exception raised when the robot device is already connected."""
def __init__(
self,
message="This robot device is already connected. Try not calling `robot_device.connect()` twice.",
):
self.message = message
super().__init__(self.message)
def capture_timestamp_utc():
return datetime.now(timezone.utc)
def busy_wait(seconds):
if platform.system() == "Darwin":
# On Mac, `time.sleep` is not accurate and we need to use this while loop trick,
# but it consumes CPU cycles.
end_time = time.perf_counter() + seconds
while time.perf_counter() < end_time:
pass
else:
# On Linux time.sleep is accurate
if seconds > 0:
time.sleep(seconds)
def precise_wait(t_end: float, slack_time: float = 0.001, time_func=time.monotonic):
t_start = time_func()
t_wait = t_end - t_start
if t_wait > 0:
t_sleep = t_wait - slack_time
if t_sleep > 0:
time.sleep(t_sleep)
while time_func() < t_end:
pass
return
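`precise_wait` sleeps for most of the interval, then busy-spins for the final `slack_time` to land on the deadline more accurately than `time.sleep` alone. A standalone copy with a timing check:

```python
import time

def precise_wait(t_end: float, slack_time: float = 0.001, time_func=time.monotonic) -> None:
    """Sleep until `slack_time` seconds before t_end, then busy-spin to the deadline."""
    t_sleep = t_end - time_func() - slack_time
    if t_sleep > 0:
        time.sleep(t_sleep)
    # Spin off the last slack_time; cheap enough for ~1 ms, accurate on all platforms.
    while time_func() < t_end:
        pass
```

The spin guarantees the function never returns early, at the cost of burning CPU for up to `slack_time` seconds.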


@@ -0,0 +1,281 @@
import logging
import sys
import time
import traceback
import warnings
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, ClassVar
import cv2
import numpy as np
import pandas as pd
import pyarrow as pa
import requests
import torch
import torchvision
from datasets import load_from_disk
from datasets.features.features import register_feature
from safetensors.torch import load_file
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
class LongConnectionClient:
def __init__(self, base_url):
self.session = requests.Session()
self.base_url = base_url
def send_post(self, endpoint, json_data):
"""Send a POST request to `endpoint`, retrying until the server returns result == "ok"."""
url = f"{self.base_url}{endpoint}"
response = None
while True:
try:
response = self.session.post(url, json=json_data)
if response.status_code == 200:
data = response.json()
if data["result"] == "ok":
response = data
break
else:
logging.info(data["desc"])
time.sleep(1)
except Exception as e:
logging.error(f"An error occurred: {e}")
logging.error(traceback.format_exc())
return response
def close(self):
"""Close the session."""
self.session.close()
def predict_action(self, language_instruction, batch) -> torch.Tensor:
# collect data
data = {
"language_instruction": language_instruction,
"observation.state": torch.stack(list(batch["observation.state"])).tolist(),
"observation.images.top": torch.stack(list(batch["observation.images.top"])).tolist(),
"action": torch.stack(list(batch["action"])).tolist(),
}
# send data
endpoint = "/predict_action"
response = self.send_post(endpoint, data)
# action = torch.tensor(response['action']).unsqueeze(0)
action = torch.tensor(response["action"])
return action
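`send_post` above loops until the server reports `result == "ok"`, logging and retrying on errors. The retry skeleton in isolation, with a stand-in `post` callable instead of the `requests` session:

```python
import time

def retry_until_ok(post, payload, delay: float = 0.0):
    """Call post(payload) until it returns a dict with result == 'ok';
    swallow and log transient errors, optionally pausing between attempts."""
    while True:
        try:
            data = post(payload)
            if data.get("result") == "ok":
                return data
        except Exception as e:  # transient failure: log and retry
            print(f"An error occurred: {e}")
        if delay:
            time.sleep(delay)
```

Note this retries forever by design, matching the long-connection client; a production variant would usually add a retry budget or timeout.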
class ACTTemporalEnsembler:
def __init__(self, temporal_ensemble_coeff: float, chunk_size: int, exe_steps: int) -> None:
"""Temporal ensembling as described in Algorithm 2 of https://arxiv.org/abs/2304.13705.
The weights are calculated as wᵢ = exp(-temporal_ensemble_coeff * i) where w₀ is the oldest action.
They are then normalized to sum to 1 by dividing by Σwᵢ. Here's some intuition around how the
coefficient works:
- Setting it to 0 uniformly weighs all actions.
- Setting it positive gives more weight to older actions.
- Setting it negative gives more weight to newer actions.
NOTE: The default value for `temporal_ensemble_coeff` used by the original ACT work is 0.01. This
results in older actions being weighed more highly than newer actions (the experiments documented in
https://github.com/huggingface/lerobot/pull/319 hint at why highly weighing new actions might be
detrimental: doing so aggressively may diminish the benefits of action chunking).
Here we use an online method for computing the average rather than caching a history of actions in
order to compute the average offline. For a simple 1D sequence it looks something like:
```
import torch
seq = torch.linspace(8, 8.5, 100)
print(seq)
m = 0.01
exp_weights = torch.exp(-m * torch.arange(len(seq)))
print(exp_weights)
# Calculate offline
avg = (exp_weights * seq).sum() / exp_weights.sum()
print("offline", avg)
# Calculate online
for i, item in enumerate(seq):
if i == 0:
avg = item
continue
avg *= exp_weights[:i].sum()
avg += item * exp_weights[i]
avg /= exp_weights[:i+1].sum()
print("online", avg)
```
"""
self.chunk_size = chunk_size
self.ensemble_weights = torch.exp(-temporal_ensemble_coeff * torch.arange(chunk_size))
self.ensemble_weights_cumsum = torch.cumsum(self.ensemble_weights, dim=0)
self.exe_steps = exe_steps
self.reset()
def reset(self):
"""Resets the online computation variables."""
self.ensembled_actions = None
# (chunk_size,) count of how many actions are in the ensemble for each time step in the sequence.
self.ensembled_actions_count = None
def update(self, actions):
"""
Takes a (batch, chunk_size, action_dim) sequence of actions, updates the temporal ensemble for all
time steps, and pops/returns the next batch of actions in the sequence.
"""
self.ensemble_weights = self.ensemble_weights.to(device=actions.device)
self.ensemble_weights_cumsum = self.ensemble_weights_cumsum.to(device=actions.device)
if self.ensembled_actions is None:
# Initializes `self._ensembled_action` to the sequence of actions predicted during the first
# time step of the episode.
self.ensembled_actions = actions.clone()
# Note: The last dimension is unsqueeze to make sure we can broadcast properly for tensor
# operations later.
self.ensembled_actions_count = torch.ones(
(self.chunk_size, 1), dtype=torch.long, device=self.ensembled_actions.device
)
else:
# self.ensembled_actions will have shape (batch_size, chunk_size - 1, action_dim). Compute
# the online update for those entries.
self.ensembled_actions *= self.ensemble_weights_cumsum[self.ensembled_actions_count - 1]
self.ensembled_actions += (
actions[:, : -self.exe_steps] * self.ensemble_weights[self.ensembled_actions_count]
)
self.ensembled_actions /= self.ensemble_weights_cumsum[self.ensembled_actions_count]
self.ensembled_actions_count = torch.clamp(self.ensembled_actions_count + 1, max=self.chunk_size)
# The last action, which has no prior online average, needs to get concatenated onto the end.
self.ensembled_actions = torch.cat([self.ensembled_actions, actions[:, -self.exe_steps :]], dim=1)
self.ensembled_actions_count = torch.cat(
# [self.ensembled_actions_count, torch.ones_like(self.ensembled_actions_count[-self.exe_steps:])]
[
self.ensembled_actions_count,
torch.ones((self.exe_steps, 1), dtype=torch.long, device=self.ensembled_actions_count.device),
]
)
# "Consume" the first action.
actions, self.ensembled_actions, self.ensembled_actions_count = (
self.ensembled_actions[:, : self.exe_steps],
self.ensembled_actions[:, self.exe_steps :],
self.ensembled_actions_count[self.exe_steps :],
)
return actions
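The online/offline equivalence described in the class docstring can be checked numerically. A NumPy sketch of the same exponentially weighted running average, mirroring the torch snippet in the docstring:

```python
import numpy as np

def offline_avg(seq: np.ndarray, coeff: float) -> float:
    # Weights w_i = exp(-coeff * i), w_0 applied to the oldest element.
    w = np.exp(-coeff * np.arange(len(seq)))
    return float((w * seq).sum() / w.sum())

def online_avg(seq: np.ndarray, coeff: float) -> float:
    # Incremental update: re-scale the running average by the old weight sum,
    # add the new weighted item, and divide by the new weight sum.
    w = np.exp(-coeff * np.arange(len(seq)))
    avg = seq[0]
    for i in range(1, len(seq)):
        avg = (avg * w[:i].sum() + seq[i] * w[i]) / w[: i + 1].sum()
    return float(avg)
```

The online form is what lets the ensembler avoid caching a full history of predicted chunks.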
@dataclass
class VideoFrame:
"""
Provides a type for a dataset containing video frames.
Example:
```python
data_dict = [{"image": {"path": "videos/episode_0.mp4", "timestamp": 0.3}}]
features = {"image": VideoFrame()}
Dataset.from_dict(data_dict, features=Features(features))
```
"""
pa_type: ClassVar[Any] = pa.struct({"path": pa.string(), "timestamp": pa.float32()})
_type: str = field(default="VideoFrame", init=False, repr=False)
def __call__(self):
return self.pa_type
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
"'register_feature' is experimental and might be subject to breaking changes in the future.",
category=UserWarning,
)
# to make VideoFrame available in HuggingFace `datasets`
register_feature(VideoFrame, "VideoFrame")
def get_image(cam_list, target_shape=None, save_image=False):
curr_images = []
for cam in cam_list:
color, _ = cam.get_frame()
if save_image:
cv2.imwrite("/home/world-model-x/output.png", color)
color = cv2.cvtColor(color, cv2.COLOR_BGR2RGB)
if target_shape:
color = cv2.resize(color, target_shape)
curr_images.append(color)
curr_images = np.stack(curr_images, axis=0)
return curr_images
def load_action_from_dataset(dataset_dir, episode_id):
data = load_from_disk(dataset_dir + "/train")
episode_data = load_file(dataset_dir + "/meta_data/episode_data_index.safetensors")
start_id = episode_data["from"][episode_id]
end_id = episode_data["to"][episode_id]
actions = torch.FloatTensor(data["action"][start_id:end_id])
return actions
def load_stats_from_prompt_dir(dataset_dir, prompt_dir, subdir=""):
dataset_dir += subdir + "/meta_data"
stats = load_file(dataset_dir + "/stats.safetensors")
return stats
def populate_queues(queues, batch):
for key in batch:
# Ignore keys not in the queues already (leaving the responsibility to the caller to make sure the
# queues have the keys they want).
if key not in queues:
continue
if len(queues[key]) != queues[key].maxlen:
# initialize by copying the first observation several times until the queue is full
while len(queues[key]) != queues[key].maxlen:
queues[key].append(batch[key])
else:
# add latest observation to the queue
queues[key].append(batch[key])
return queues
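`populate_queues` pre-fills each bounded queue with copies of the first observation, then appends normally once full. A deque-based check of that behavior (standalone copy of the function):

```python
from collections import deque

def populate_queues(queues: dict, batch: dict) -> dict:
    """Fill each maxlen-bounded queue: duplicate the first observation until
    the queue is full, then append only the newest one."""
    for key in batch:
        # Ignore keys not already in the queues (caller decides which keys matter).
        if key not in queues:
            continue
        if len(queues[key]) != queues[key].maxlen:
            # Initialize by copying the first observation until the queue is full.
            while len(queues[key]) != queues[key].maxlen:
                queues[key].append(batch[key])
        else:
            # Add the latest observation; the deque drops the oldest automatically.
            queues[key].append(batch[key])
    return queues
```

This gives temporal policies a full observation window on the very first step of an episode.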
def action_safe_checking(action, action_max, action_min, threshold=0.01):
over_max = any(action - threshold > action_max.cpu().numpy())
over_min = any(action + threshold < action_min.cpu().numpy())
return not (over_max or over_min)
def get_init_pose(dataset_dir, start_id=0):
# load all parquet files
dataset_dir_path = Path(dataset_dir) / "data" / "chunk-000"
parquet_files = list(dataset_dir_path.glob("*.parquet"))
parquet_files = sorted([str(f) for f in parquet_files])
first_rows = [pd.read_parquet(f, engine="pyarrow").iloc[[0]] for f in parquet_files]
df = pd.concat(first_rows, ignore_index=True)
action_array = np.stack(df["action"].values)
init_pose = action_array[192:193, ...]
return init_pose
def save_image(obs, num_step=None, output_dir=None):
rgb_image = cv2.cvtColor(obs.observation["images"]["cam_left_high"], cv2.COLOR_BGR2RGB)
cv2.imwrite(f"{output_dir}/top_{num_step:06d}.png", rgb_image)
def log_to_tensorboard(writer, data, tag, fps=10):
if isinstance(data, torch.Tensor) and data.dim() == 5:
video = data
n = video.shape[0]
video = video.permute(2, 0, 1, 3, 4) # t,n,c,h,w
frame_grids = [
torchvision.utils.make_grid(framesheet, nrow=int(n), padding=0) for framesheet in video
] # [3, n*h, 1*w]
grid = torch.stack(frame_grids, dim=0) # stack in temporal dim [t, 3, n*h, w]
grid = (grid + 1.0) / 2.0
grid = grid.unsqueeze(dim=0)
writer.add_video(tag, grid, fps=fps)


@@ -0,0 +1,196 @@
"""The modification is derived from diffusion_policy/common/pose_trajectory_interpolator.py. Thank you for the outstanding contribution."""
import numbers
from typing import Union
import numpy as np
import scipy.interpolate as si
def joint_pose_distance(start_joint_angles, end_joint_angles):
start_joint_angles = np.array(start_joint_angles)
end_joint_angles = np.array(end_joint_angles)
joint_angle_dist = np.linalg.norm(end_joint_angles - start_joint_angles)
return joint_angle_dist
class JointTrajectoryInterpolator:
def __init__(self, times: np.ndarray, joint_positions: np.ndarray):
assert len(times) >= 1
assert len(joint_positions) == len(times)
self.num_joints = len(joint_positions[0])
if not isinstance(times, np.ndarray):
times = np.array(times)
if not isinstance(joint_positions, np.ndarray):
joint_positions = np.array(joint_positions)
if len(times) == 1:
self.single_step = True
self._times = times
self._joint_positions = joint_positions
else:
self.single_step = False
assert np.all(times[1:] >= times[:-1])
self.interpolators = si.interp1d(times, joint_positions, axis=0, assume_sorted=True)
@property
def times(self) -> np.ndarray:
if self.single_step:
return self._times
else:
return self.interpolators.x
@property
def joint_positions(self) -> np.ndarray:
if self.single_step:
return self._joint_positions
        else:
            # interp1d keeps the original sample points in .y
            return self.interpolators.y
def trim(self, start_t: float, end_t: float) -> "JointTrajectoryInterpolator":
assert start_t <= end_t
times = self.times
should_keep = (start_t < times) & (times < end_t)
keep_times = times[should_keep]
all_times = np.concatenate([[start_t], keep_times, [end_t]])
all_times = np.unique(all_times)
all_joint_positions = self(all_times)
return JointTrajectoryInterpolator(times=all_times, joint_positions=all_joint_positions)
def drive_to_waypoint(
self,
pose,
time,
curr_time,
max_pos_speed=np.inf,
) -> "JointTrajectoryInterpolator":
assert max_pos_speed > 0
time = max(time, curr_time)
curr_pose = self(curr_time)
pos_dist = joint_pose_distance(curr_pose, pose)
pos_min_duration = pos_dist / max_pos_speed
duration = time - curr_time
duration = max(duration, pos_min_duration)
assert duration >= 0
last_waypoint_time = curr_time + duration
# insert new pose
trimmed_interp = self.trim(curr_time, curr_time)
times = np.append(trimmed_interp.times, [last_waypoint_time], axis=0)
poses = np.append(trimmed_interp.joint_positions, [pose], axis=0)
# create new interpolator
final_interp = JointTrajectoryInterpolator(times, poses)
return final_interp
def schedule_waypoint(
self, pose, time, max_pos_speed=np.inf, curr_time=None, last_waypoint_time=None
) -> "JointTrajectoryInterpolator":
assert max_pos_speed > 0
if last_waypoint_time is not None:
assert curr_time is not None
# trim current interpolator to between curr_time and last_waypoint_time
start_time = self.times[0]
end_time = self.times[-1]
assert start_time <= end_time
if curr_time is not None:
if time <= curr_time:
# if insert time is earlier than current time
# no effect should be done to the interpolator
return self
# now, curr_time < time
start_time = max(curr_time, start_time)
if last_waypoint_time is not None:
# if last_waypoint_time is earlier than start_time
# use start_time
end_time = curr_time if time <= last_waypoint_time else max(last_waypoint_time, curr_time)
else:
end_time = curr_time
end_time = min(end_time, time)
start_time = min(start_time, end_time)
        # After the two min() operations above the ordering below is guaranteed
        # (proven by zhenjia). Constraints:
        #   start_time <= end_time <= time
        #   curr_time <= start_time
        #   curr_time <= time
assert start_time <= end_time
assert end_time <= time
if last_waypoint_time is not None:
if time <= last_waypoint_time:
assert end_time == curr_time
else:
assert end_time == max(last_waypoint_time, curr_time)
if curr_time is not None:
assert curr_time <= start_time
assert curr_time <= time
trimmed_interp = self.trim(start_time, end_time)
# determine speed
duration = time - end_time
end_pose = trimmed_interp(end_time)
pos_dist = joint_pose_distance(pose, end_pose)
joint_min_duration = pos_dist / max_pos_speed
duration = max(duration, joint_min_duration)
assert duration >= 0
last_waypoint_time = end_time + duration
# insert new pose
times = np.append(trimmed_interp.times, [last_waypoint_time], axis=0)
poses = np.append(trimmed_interp.joint_positions, [pose], axis=0)
# create new interpolator
final_interp = JointTrajectoryInterpolator(times, poses)
return final_interp
def __call__(self, t: Union[numbers.Number, np.ndarray]) -> np.ndarray:
is_single = False
if isinstance(t, numbers.Number):
is_single = True
t = np.array([t])
joint_positions = np.zeros((len(t), self.num_joints))
if self.single_step:
joint_positions[:] = self._joint_positions[0]
else:
start_time = self.times[0]
end_time = self.times[-1]
t = np.clip(t, start_time, end_time)
joint_positions[:, :] = self.interpolators(t)
if is_single:
joint_positions = joint_positions[0]
return joint_positions
def generate_joint_positions(
num_rows: int, num_cols: int, start: float = 0.0, step: float = 0.1, row_offset: float = 0.1
) -> np.ndarray:
base_row = np.arange(start, start + step * num_cols, step)
array = np.vstack([base_row + i * row_offset for i in range(num_rows)])
return array
if __name__ == "__main__":
# Example joint trajectory data (time in seconds, joint positions as an array of NUM_JOINTS joint angles)
times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
joint_positions = generate_joint_positions(num_rows=5, num_cols=7, start=0.0, step=0.1, row_offset=0.1)
interpolator = JointTrajectoryInterpolator(times, joint_positions)
    # Get joint positions at a specific time (e.g., t = 0.1 seconds)
    t = 0.1
joint_pos_at_t = interpolator(t)
print("Joint positions at time", t, ":", joint_pos_at_t)


@@ -0,0 +1,175 @@
from datetime import datetime
from typing import Any, Dict, Optional, Tuple
import rerun as rr
import rerun.blueprint as rrb
import torch
class RerunLogger:
"""
A fully automatic Rerun logger designed to parse and visualize step
dictionaries directly from a LeRobotDataset.
"""
def __init__(
self,
prefix: str = "",
memory_limit: str = "200MB",
idxrangeboundary: Optional[int] = 300,
):
"""Initializes the Rerun logger."""
# Use a descriptive name for the Rerun recording
rr.init(f"Dataset_Log_{datetime.now().strftime('%Y%m%d_%H%M%S')}")
rr.spawn(memory_limit=memory_limit)
self.prefix = prefix
self.blueprint_sent = False
self.idxrangeboundary = idxrangeboundary
# --- Internal cache for discovered keys ---
self._image_keys: Tuple[str, ...] = ()
self._state_key: str = ""
self._action_key: str = ""
self._index_key: str = "index"
self._task_key: str = "task"
self._episode_index_key: str = "episode_index"
self.current_episode = -1
def _initialize_from_data(self, step_data: Dict[str, Any]):
"""Inspects the first data dictionary to discover components and set up the blueprint."""
print("RerunLogger: First data packet received. Auto-configuring...")
image_keys = []
for key, value in step_data.items():
if key.startswith("observation.images.") and isinstance(value, torch.Tensor) and value.ndim > 2:
image_keys.append(key)
elif key == "observation.state":
self._state_key = key
elif key == "action":
self._action_key = key
self._image_keys = tuple(sorted(image_keys))
if "index" in step_data:
self._index_key = "index"
elif "frame_index" in step_data:
self._index_key = "frame_index"
print(f" - Using '{self._index_key}' for time sequence.")
print(f" - Detected State Key: '{self._state_key}'")
print(f" - Detected Action Key: '{self._action_key}'")
print(f" - Detected Image Keys: {self._image_keys}")
if self.idxrangeboundary:
self.setup_blueprint()
def setup_blueprint(self):
"""Sets up and sends the Rerun blueprint based on detected components."""
views = []
for key in self._image_keys:
clean_name = key.replace("observation.images.", "")
entity_path = f"{self.prefix}images/{clean_name}"
views.append(rrb.Spatial2DView(origin=entity_path, name=clean_name))
if self._state_key:
entity_path = f"{self.prefix}state"
views.append(
rrb.TimeSeriesView(
origin=entity_path,
name="Observation State",
time_ranges=[
rrb.VisibleTimeRange(
"frame",
start=rrb.TimeRangeBoundary.cursor_relative(seq=-self.idxrangeboundary),
end=rrb.TimeRangeBoundary.cursor_relative(),
)
],
plot_legend=rrb.PlotLegend(visible=True),
)
)
if self._action_key:
entity_path = f"{self.prefix}action"
views.append(
rrb.TimeSeriesView(
origin=entity_path,
name="Action",
time_ranges=[
rrb.VisibleTimeRange(
"frame",
start=rrb.TimeRangeBoundary.cursor_relative(seq=-self.idxrangeboundary),
end=rrb.TimeRangeBoundary.cursor_relative(),
)
],
plot_legend=rrb.PlotLegend(visible=True),
)
)
if not views:
print("Warning: No visualizable components detected in the data.")
return
grid = rrb.Grid(contents=views)
rr.send_blueprint(grid)
self.blueprint_sent = True
def log_step(self, step_data: Dict[str, Any]):
"""Logs a single step dictionary from your dataset."""
if not self.blueprint_sent:
self._initialize_from_data(step_data)
if self._index_key in step_data:
current_index = step_data[self._index_key].item()
rr.set_time_sequence("frame", current_index)
episode_idx = step_data.get(self._episode_index_key, torch.tensor(-1)).item()
if episode_idx != self.current_episode:
self.current_episode = episode_idx
task_name = step_data.get(self._task_key, "Unknown Task")
log_text = f"Starting Episode {self.current_episode}: {task_name}"
rr.log(f"{self.prefix}info/task", rr.TextLog(log_text, level=rr.TextLogLevel.INFO))
for key in self._image_keys:
if key in step_data:
image_tensor = step_data[key]
if image_tensor.ndim > 2:
clean_name = key.replace("observation.images.", "")
entity_path = f"{self.prefix}images/{clean_name}"
if image_tensor.shape[0] in [1, 3, 4]:
image_tensor = image_tensor.permute(1, 2, 0)
rr.log(entity_path, rr.Image(image_tensor))
if self._state_key in step_data:
state_tensor = step_data[self._state_key]
entity_path = f"{self.prefix}state"
for i, val in enumerate(state_tensor):
rr.log(f"{entity_path}/joint_{i}", rr.Scalar(val.item()))
if self._action_key in step_data:
action_tensor = step_data[self._action_key]
entity_path = f"{self.prefix}action"
for i, val in enumerate(action_tensor):
rr.log(f"{entity_path}/joint_{i}", rr.Scalar(val.item()))
def visualization_data(idx, observation, state, action, online_logger):
item_data: Dict[str, Any] = {
"index": torch.tensor(idx),
"observation.state": state,
"action": action,
}
for k, v in observation.items():
if k not in ("index", "observation.state", "action"):
item_data[k] = v
# print(item_data)
online_logger.log_step(item_data)
def flatten_images(obs: dict) -> dict:
flat = {}
if "images" in obs:
for k, v in obs["images"].items():
flat[f"observation.images.{k}"] = torch.from_numpy(v)
return flat


@@ -0,0 +1,180 @@
import time
from rich.console import Console
from rich.progress import (
BarColumn,
Progress,
SpinnerColumn,
TextColumn,
TimeElapsedColumn,
)
from rich.text import Text
class RichLogger:
def __init__(self, level: str = "INFO"):
# Initialize the console for rich output
self.console = Console()
# Define log levels with corresponding priority
self.levels = {
"DEBUG": 0, # Lowest level, all logs are displayed
"INFO": 1, # Standard level, displays Info and higher
"SUCCESS": 2, # Displays success and higher priority logs
"WARNING": 3, # Displays warnings and errors
"ERROR": 4, # Highest level, only errors are shown
}
# Set default log level, use INFO if the level is invalid
self.level = self.levels.get(level.upper(), 1)
    def _log(self, level: str, message: str, style: str, emoji: str | None = None):
        # Skip messages below the configured log level
        if self.levels[level] < self.level:
            return
        # Format the timestamp
        timestamp = time.strftime("%Y-%m-%d %H:%M:%S")
        # If an emoji is provided, prepend it to the message
        if emoji:
            message = f"{emoji} {message}"
        # Create a styled message and print it to the console
        text = Text(f"[{timestamp}] [{level}] {message}", style=style)
        self.console.print(text)
# Basic log methods
def info(self, message: str, emoji: str | None = None):
# If the level is INFO or higher, print info log
if self.levels["INFO"] >= self.level:
self._log("INFO", message, "bold cyan", emoji)
def warning(self, message: str, emoji: str = "⚠️"):
# If the level is WARNING or higher, print warning log
if self.levels["WARNING"] >= self.level:
self._log("WARNING", message, "bold yellow", emoji)
def error(self, message: str, emoji: str = ""):
# If the level is ERROR or higher, print error log
if self.levels["ERROR"] >= self.level:
self._log("ERROR", message, "bold red", emoji)
def success(self, message: str, emoji: str = "🚀"):
# If the level is SUCCESS or higher, print success log
if self.levels["SUCCESS"] >= self.level:
self._log("SUCCESS", message, "bold green", emoji)
def debug(self, message: str, emoji: str = "🔍"):
# If the level is DEBUG or higher, print debug log
if self.levels["DEBUG"] >= self.level:
self._log("DEBUG", message, "dim", emoji)
# ========== Extended Features ==========
# Display a message with an emoji
def emoji(self, message: str, emoji: str = "🚀"):
self.console.print(f"{emoji} {message}", style="bold magenta")
# Show a loading animation for a certain period
def loading(self, message: str, seconds: float = 2.0):
# Display a loading message with a spinner animation
with self.console.status(f"[bold blue]{message}...", spinner="dots"):
time.sleep(seconds)
# Show a progress bar for small tasks
def progress(self, task_description: str, total: int = 100, speed: float = 0.02):
# Create and display a progress bar with time elapsed
with Progress(
SpinnerColumn(),
BarColumn(bar_width=None),
TextColumn("[progress.percentage]{task.percentage:>3.0f}%"),
TimeElapsedColumn(),
console=self.console,
) as progress:
# Add a task to the progress bar
task = progress.add_task(f"[cyan]{task_description}", total=total)
while not progress.finished:
progress.update(task, advance=1)
time.sleep(speed)
# ========== Singleton Logger Instance ==========
_logger = RichLogger()
# ========== Function-style API ==========
def log_info(message: str, emoji: str | None = None):
_logger.info(message=message, emoji=emoji)
def log_success(message: str, emoji: str = "🚀"):
_logger.success(message=message, emoji=emoji)
def log_warning(message: str, emoji: str = "⚠️"):
_logger.warning(message=message, emoji=emoji)
def log_error(message: str, emoji: str = ""):
_logger.error(message=message, emoji=emoji)
def log_debug(message: str, emoji: str = "🔍"):
_logger.debug(message=message, emoji=emoji)
def log_emoji(message: str, emoji: str = "🚀"):
_logger.emoji(message, emoji)
def log_loading(message: str, seconds: float = 2.0):
_logger.loading(message, seconds)
def log_progress(task_description: str, total: int = 100, speed: float = 0.02):
_logger.progress(task_description, total, speed)
if __name__ == "__main__":
# Example usage:
# Initialize logger instance
logger = RichLogger(level="INFO") # Set initial log level to INFO
# Log at different levels
logger.info("System initialization complete.")
logger.success("Robot started successfully!")
logger.warning("Warning: Joint temperature high!")
logger.error("Error: Failed to connect to robot")
logger.debug("Debug: Initializing motor controllers")
# Display an emoji message
logger.emoji("This is a fun message with an emoji!", emoji="🔥")
# Display loading animation for 3 seconds
logger.loading("Loading motor control data...", seconds=3)
# Show progress bar for a task with 100 steps
logger.progress("Processing task", total=100, speed=0.05)
# You can also use different log levels with a higher level than INFO, like ERROR:
logger = RichLogger(level="ERROR")
# Only error and higher priority logs will be shown (INFO, SUCCESS, WARNING will be hidden)
logger.info("This won't be displayed because the level is set to ERROR")
logger.error("This error will be displayed!")


@@ -0,0 +1,171 @@
import time
from dataclasses import dataclass
from multiprocessing import Process, Queue
from queue import Empty
import mujoco
import mujoco.viewer
import numpy as np
from unitree_deploy.utils.rich_logger import log_info, log_success
@dataclass
class MujocoSimulationConfig:
xml_path: str
dof: int
robot_type: str
ctr_dof: int
stop_dof: int
def get_mujoco_sim_config(robot_type: str) -> MujocoSimulationConfig:
if robot_type == "g1":
return MujocoSimulationConfig(
xml_path="unitree_deploy/robot_devices/assets/g1/g1_body29.xml",
dof=30,
robot_type="g1",
ctr_dof=14,
stop_dof=35,
)
elif robot_type == "z1":
return MujocoSimulationConfig(
xml_path="unitree_deploy/robot_devices/assets/z1/z1.xml",
dof=6,
robot_type="z1",
ctr_dof=6,
stop_dof=6,
)
elif robot_type == "h1_2":
return MujocoSimulationConfig(
xml_path="unitree_deploy/robot_devices/assets/z1/z1.urdf",
dof=30,
robot_type="g1",
ctr_dof=14,
stop_dof=35,
)
else:
raise ValueError(f"Unsupported robot_type: {robot_type}")
class MujicoSimulation:
def __init__(self, config: MujocoSimulationConfig):
self.xml_path = config.xml_path
self.robot_type = config.robot_type
self.dof = config.dof
self.ctr_dof = config.ctr_dof
self.stop_dof = config.stop_dof
self.action_queue = Queue()
self.state_queue = Queue()
self.process = Process(target=self._run_simulation, args=(self.xml_path, self.action_queue, self.state_queue))
self.process.daemon = True
self.process.start()
def set_positions(self, joint_positions: np.ndarray):
if joint_positions.shape[0] != self.ctr_dof:
raise ValueError(f"joint_positions must contain {self.ctr_dof} values!")
if self.robot_type == "g1":
joint_positions = np.concatenate([np.zeros(self.dof - self.ctr_dof, dtype=np.float32), joint_positions])
elif self.robot_type == "z1":
pass
elif self.robot_type == "h1_2":
joint_positions[: self.dof - self.ctr_dof] = 0.0
else:
raise ValueError(f"Unsupported robot_type: {self.robot_type}")
self.action_queue.put(joint_positions.tolist())
def get_current_positions(self, timeout=0.01):
try:
return self.state_queue.get(timeout=timeout)
except Empty:
return [0.0] * self.stop_dof
def stop(self):
if hasattr(self, "process") and self.process is not None and self.process.is_alive():
try:
self.process.terminate()
self.process.join()
except Exception as e:
print(f"[WARN] Failed to stop process: {e}")
self.process = None
for qname in ["action_queue", "state_queue"]:
queue = getattr(self, qname, None)
if queue is not None:
try:
if hasattr(queue, "close") and callable(queue.close):
queue.close()
if hasattr(queue, "join_thread") and callable(queue.join_thread):
queue.join_thread()
except Exception as e:
print(f"[WARN] Failed to cleanup {qname}: {e}")
setattr(self, qname, None)
def __del__(self):
self.stop()
@staticmethod
def _run_simulation(xml_path: str, action_queue: Queue, state_queue: Queue):
model = mujoco.MjModel.from_xml_path(xml_path)
data = mujoco.MjData(model)
joint_names = [mujoco.mj_id2name(model, mujoco.mjtObj.mjOBJ_JOINT, i) for i in range(model.njnt)]
joints_indices = [
model.jnt_qposadr[mujoco.mj_name2id(model, mujoco.mjtObj.mjOBJ_JOINT, name)] for name in joint_names
]
log_info(f"len joints indices: {len(joints_indices)}")
viewer = mujoco.viewer.launch_passive(model, data)
current_positions = np.zeros(len(joints_indices), dtype=np.float32)
try:
while viewer.is_running():
try:
new_pos = action_queue.get_nowait()
if len(new_pos) == len(joints_indices):
current_positions = new_pos
except Empty:
pass
for idx, pos in zip(joints_indices, current_positions, strict=True):
data.qpos[idx] = pos
data.qvel[:] = 0
mujoco.mj_forward(model, data)
state_queue.put(data.qpos.copy())
viewer.sync()
time.sleep(0.001)
except KeyboardInterrupt:
log_success("The simulation process was interrupted.")
finally:
viewer.close()
def main():
config = get_mujoco_sim_config(robot_type="g1")
sim = MujicoSimulation(config)
time.sleep(1) # Allow time for the simulation to start
try:
while True:
positions = np.random.uniform(-1.0, 1.0, sim.ctr_dof)
sim.set_positions(positions)
# print(sim.get_current_positions())
time.sleep(1 / 50)
except KeyboardInterrupt:
print("Simulation stopped.")
sim.stop()
if __name__ == "__main__":
main()


@@ -0,0 +1,36 @@
import math
import numpy as np
import pinocchio as pin
def generate_rotation(step: int, rotation_speed: float, max_step: int = 240):
"""Generate rotation (quaternions) and translation deltas for left and right arm motions."""
angle = rotation_speed * step if step <= max_step // 2 else rotation_speed * (max_step - step)
# Create rotation quaternion for left arm (around Y-axis)
l_quat = pin.Quaternion(np.cos(angle / 2), 0, np.sin(angle / 2), 0)
# Create rotation quaternion for right arm (around Z-axis)
r_quat = pin.Quaternion(np.cos(angle / 2), 0, 0, np.sin(angle / 2))
# Define translation increments for left and right arm
delta_l = np.array([0.001, 0.001, 0.001]) * 1.2
delta_r = np.array([0.001, -0.001, 0.001]) * 1.2
# Reverse direction in second half of cycle
if step > max_step // 2:
delta_l *= -1
delta_r *= -1
return l_quat, r_quat, delta_l, delta_r
def sinusoidal_single_gripper_motion(period: float, amplitude: float, current_time: float) -> np.ndarray:
    value = amplitude * (math.sin(2 * math.pi * current_time / period) + 1) / 2
    return np.array([value * 5])  # one gripper command, scaled by 5


def sinusoidal_gripper_motion(period: float, amplitude: float, current_time: float) -> np.ndarray:
    value = amplitude * (math.sin(2 * math.pi * current_time / period) + 1) / 2
    return np.array([value] * 5)  # the same command replicated across 5 joints
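The `(sin + 1) / 2` term maps the sinusoid from [-1, 1] into [0, 1], so each command sweeps between 0 and `amplitude`. A quick check of the extremes (standalone, same waveform as above):

```python
import math

period, amplitude = 2.0, 1.0


def cmd(t: float) -> float:
    # Same waveform as the gripper helpers above
    return amplitude * (math.sin(2 * math.pi * t / period) + 1) / 2


peak = cmd(period / 4)        # sin == +1 -> amplitude
trough = cmd(3 * period / 4)  # sin == -1 -> 0
print(peak, trough)
```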


@@ -0,0 +1,40 @@
import numpy as np
class WeightedMovingFilter:
def __init__(self, weights, data_size=14):
self._window_size = len(weights)
self._weights = np.array(weights)
assert np.isclose(np.sum(self._weights), 1.0), (
"[WeightedMovingFilter] the sum of weights list must be 1.0!"
)
self._data_size = data_size
self._filtered_data = np.zeros(self._data_size)
self._data_queue = []
def _apply_filter(self):
if len(self._data_queue) < self._window_size:
return self._data_queue[-1]
data_array = np.array(self._data_queue)
temp_filtered_data = np.zeros(self._data_size)
for i in range(self._data_size):
temp_filtered_data[i] = np.convolve(data_array[:, i], self._weights, mode="valid")[-1]
return temp_filtered_data
def add_data(self, new_data):
assert len(new_data) == self._data_size
if len(self._data_queue) > 0 and np.array_equal(new_data, self._data_queue[-1]):
return # skip duplicate data
if len(self._data_queue) >= self._window_size:
self._data_queue.pop(0)
self._data_queue.append(new_data)
self._filtered_data = self._apply_filter()
@property
def filtered_data(self):
return self._filtered_data
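One detail of `_apply_filter` worth noting: `np.convolve` flips the kernel, so `weights[0]` ends up weighting the most recent sample. A minimal numeric sketch (one joint, window of three):

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])  # must sum to 1.0
history = np.array([1.0, 2.0, 3.0])  # oldest -> newest samples of one joint

# Last 'valid' element of the convolution, as in _apply_filter
filtered = np.convolve(history, weights, mode="valid")[-1]
print(filtered)  # 0.2*1 + 0.3*2 + 0.5*3 = 2.3
```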