Integrate AgileX Piper arm into LeRobot
This commit is contained in:

README.md (426 lines changed)
# Install

Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):

```bash
conda create -y -n lerobot python=3.10
conda activate lerobot
```
Install 🤗 LeRobot:

```bash
pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
pip uninstall numpy
pip install numpy==1.26.0
pip install pynput
```

> **NOTE:** Depending on your platform, if you encounter any build errors during this step you may need to install `cmake` and `build-essential` to build some of our dependencies. On Linux: `sudo apt-get install cmake build-essential`

/!\ For Linux only: `ffmpeg` and `opencv` require a conda install for now. Run this exact sequence of commands:

```bash
conda install -c conda-forge ffmpeg
pip uninstall opencv-python
conda install "opencv>=4.10.0"
```

Install the Piper dependencies:

```bash
pip install python-can
pip install piper_sdk
sudo apt update && sudo apt install can-utils ethtool
pip install pygame
```

For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:

- [aloha](https://github.com/huggingface/gym-aloha)
- [xarm](https://github.com/huggingface/gym-xarm)
- [pusht](https://github.com/huggingface/gym-pusht)

For instance, to install 🤗 LeRobot with aloha and pusht, use:

```bash
pip install -e ".[aloha, pusht]"
```

To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with

```bash
wandb login
```

(note: you will also need to enable WandB in the configuration. See below.)

# Piper integration into LeRobot

See `lerobot_piper_tutorial/1. 🤗 LeRobot:新增机械臂的一般流程.pdf` (the general workflow for adding a new robot arm to LeRobot).
# Teleoperate

```bash
cd piper_scripts/
bash can_activate.sh can0 1000000
cd ..
python lerobot/scripts/control_robot.py \
  --robot.type=piper \
  --robot.inference_time=false \
  --control.type=teleoperate
```

## Walkthrough

```
.
├── examples             # contains demonstration examples, start here to learn about LeRobot
|   └── advanced         # contains even more examples for those who have mastered the basics
├── lerobot
|   ├── configs          # contains config classes with all options that you can override in the command line
|   ├── common           # contains classes and utilities
|   |   ├── datasets     # various datasets of human demonstrations: aloha, pusht, xarm
|   |   ├── envs         # various sim environments: aloha, pusht, xarm
|   |   ├── policies     # various policies: act, diffusion, tdmpc
|   |   ├── robot_devices # various real devices: dynamixel motors, opencv cameras, koch robots
|   |   └── utils        # various utilities
|   └── scripts          # contains functions to execute via command line
|       ├── eval.py                 # load policy and evaluate it on an environment
|       ├── train.py                # train a policy via imitation learning and/or reinforcement learning
|       ├── control_robot.py        # teleoperate a real robot, record data, run a policy
|       ├── push_dataset_to_hub.py  # convert your dataset into LeRobot dataset format and upload it to the Hugging Face hub
|       └── visualize_dataset.py    # load a dataset and render its demonstrations
├── outputs              # contains results of scripts execution: logs, videos, model checkpoints
└── tests                # contains pytest utilities for continuous integration
```
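The `can_activate.sh` step in the Teleoperate section above brings up the SocketCAN interface at 1 Mbit/s. The script's actual contents are not shown here; the sketch below is only an assumption about what a typical SocketCAN bring-up does, expressed as a pure helper that builds the `ip` command strings:

```python
def can_bringup_commands(interface: str = "can0", bitrate: int = 1_000_000) -> list[str]:
    """Build the shell commands of a typical SocketCAN bring-up.

    Hypothetical reconstruction of what a script like can_activate.sh
    plausibly runs; not the actual script.
    """
    return [
        f"sudo ip link set {interface} down",                       # reset if already up
        f"sudo ip link set {interface} type can bitrate {bitrate}", # configure the bus speed
        f"sudo ip link set {interface} up",                         # bring the bus online
    ]

for cmd in can_bringup_commands("can0", 1_000_000):
    print(cmd)
```

Running `candump can0` (from the `can-utils` package installed earlier) is a quick way to confirm frames are flowing once the interface is up.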
# Record

Set the dataset root path:

```bash
HF_USER=$PWD/data
echo $HF_USER
```

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=piper \
  --robot.inference_time=false \
  --control.type=record \
  --control.fps=30 \
  --control.single_task="move" \
  --control.repo_id=${HF_USER}/test \
  --control.num_episodes=2 \
  --control.warmup_time_s=2 \
  --control.episode_time_s=10 \
  --control.reset_time_s=10 \
  --control.play_sounds=true \
  --control.push_to_hub=false
```

Press the right arrow (→) at any time during episode recording to stop early and go to resetting. Do the same during resetting to stop early and go to the next episode recording.

Press the left arrow (←) at any time during episode recording or resetting to stop early, cancel the current episode, and re-record it.

Press escape (ESC) at any time during episode recording to end the session early and go straight to video encoding and dataset uploading.

### Visualize datasets

Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class, which automatically downloads data from the Hugging Face hub.

You can also locally visualize episodes from a dataset on the hub by executing our script from the command line; see the visualize commands below.
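The record-loop keyboard shortcuts described above can be modeled as a small dispatch table. This is an illustrative sketch only, not lerobot's actual implementation (which listens for keys via `pynput` callbacks):

```python
# Map each key to the record-loop event it triggers, per the shortcuts above.
# Illustrative only: the key names and event names here are hypothetical.
KEY_ACTIONS = {
    "right": "exit_early",       # stop the current episode/reset phase, move on
    "left": "rerecord_episode",  # cancel the current episode and re-record it
    "esc": "stop_recording",     # end the session, go to encoding/uploading
}

def handle_key(key: str) -> str:
    """Return the record-loop event for a pressed key; other keys are ignored."""
    return KEY_ACTIONS.get(key, "ignore")

print(handle_key("right"))  # exit_early
```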
# Visualize

```bash
python lerobot/scripts/visualize_dataset.py \
  --repo-id ${HF_USER}/test \
  --episode-index 0
```

The same script works with a dataset hosted on the hub, e.g.:

```bash
python lerobot/scripts/visualize_dataset.py \
  --repo-id lerobot/pusht \
  --episode-index 0
```
Or visualize from a dataset in a local folder with the `root` option and `--local-files-only` (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`):

```bash
python lerobot/scripts/visualize_dataset.py \
  --repo-id lerobot/pusht \
  --root ./my_local_data_dir \
  --local-files-only 1 \
  --episode-index 0
```

# Replay

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=piper \
  --robot.inference_time=false \
  --control.type=replay \
  --control.fps=30 \
  --control.repo_id=${HF_USER}/test \
  --control.episode=0
```
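Record and replay both pace the control loop at `--control.fps=30`, i.e. one action every 1/30 s. A minimal fixed-rate pacing sketch, illustrative rather than lerobot's actual loop:

```python
import time

def run_at_fixed_fps(step_fn, num_steps: int, fps: float) -> float:
    """Call step_fn(i) once per 1/fps seconds; return the step period in seconds."""
    period = 1.0 / fps
    start = time.perf_counter()
    for i in range(num_steps):
        step_fn(i)
        # Sleep only for whatever remains of this step's time slot,
        # so slow steps don't accumulate drift.
        remaining = start + (i + 1) * period - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
    return period

period = run_at_fixed_fps(lambda i: None, 3, 30.0)
print(f"{period:.4f} s per step")  # 0.0333 s per step at 30 fps
```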
The visualization script opens `rerun.io` and displays the camera streams, robot states and actions, like this:

https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-fd46b787-b532-47e2-bb6f-fd536a55a7ed.mov?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240505%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240505T172924Z&X-Amz-Expires=300&X-Amz-Signature=d680b26c532eeaf80740f08af3320d22ad0b8a4e4da1bcc4f33142c15b509eda&X-Amz-SignedHeaders=host&actor_id=24889239&key_id=0&repo_id=748713144

# Caution

1. In `lerobot/common/datasets/video_utils`, the vcodec is set to **libopenh264**; check which encoders your ffmpeg build supports with **ffmpeg -codecs**.
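To act on the caution above, list your build's H.264 encoders from the `ffmpeg -codecs` output and pick one that exists. A small parser sketch; the sample line below only illustrates the general output format, the actual contents vary per ffmpeg build:

```python
import re

def h264_encoders(codecs_output: str) -> list[str]:
    """Extract encoder names from the h264 line of `ffmpeg -codecs` output."""
    for line in codecs_output.splitlines():
        if " h264 " in line:
            m = re.search(r"\(encoders:([^)]*)\)", line)
            return m.group(1).split() if m else []
    return []

# Sample line in the shape ffmpeg prints (illustrative, not real output):
sample = " DEV.LS h264  H.264 / AVC (decoders: h264 ) (encoders: libx264 libopenh264 )"
print(h264_encoders(sample))  # ['libx264', 'libopenh264']
```

In practice you would feed it `subprocess.run(["ffmpeg", "-codecs"], capture_output=True, text=True).stdout` and pick an encoder that appears in the returned list.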
Our script can also visualize datasets stored on a distant server. See `python lerobot/scripts/visualize_dataset.py --help` for more instructions.

# Train

For the detailed training workflow, see `lerobot_piper_tutorial/2. 🤗 AutoDL训练.pdf` (training on AutoDL).
### The `LeRobotDataset` format

A dataset in `LeRobotDataset` format is very simple to use. It can be loaded from a repository on the Hugging Face hub or from a local folder simply with e.g. `dataset = LeRobotDataset("lerobot/aloha_static_coffee")` and can be indexed into like any Hugging Face and PyTorch dataset. For instance, `dataset[0]` will retrieve a single temporal frame from the dataset containing observation(s) and an action as PyTorch tensors ready to be fed to a model.

A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a list of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}`, one can retrieve, for a given index, 4 frames: 3 "previous" frames 1 second, 0.5 seconds, and 0.2 seconds before the indexed frame, and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](examples/1_load_lerobot_dataset.py) for more details on `delta_timestamps`.
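The `delta_timestamps` mechanism above can be sketched on plain arrays: given frames sampled at a known fps, each relative time maps to an index offset. This is a simplified model of what the dataset class does; the real implementation also handles timestamp tolerances and episode boundaries:

```python
def frames_at_delta_timestamps(frames, index, delta_ts, fps):
    """Return the frames at index + round(dt * fps) for each relative time dt,
    clamped to the start of the sequence (simplified: no episode boundaries)."""
    out = []
    for dt in delta_ts:
        j = index + round(dt * fps)
        out.append(frames[max(j, 0)])
    return out

frames = list(range(100))  # stand-in for 100 video frames at 10 fps
# 3 "previous" frames 1 s, 0.5 s and 0.2 s before frame 50, plus frame 50 itself:
print(frames_at_delta_timestamps(frames, 50, [-1, -0.5, -0.2, 0], fps=10))  # [40, 45, 48, 50]
```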
Under the hood, the `LeRobotDataset` format makes use of several ways to serialize data, which can be useful to understand if you plan to work more closely with this format. We tried to make a flexible yet simple dataset format that covers most types of features and specificities present in reinforcement learning and robotics, in simulation and in the real world, with a focus on cameras and robot states, but easily extended to other types of sensory inputs as long as they can be represented by a tensor.

Here are the important details and internal structure organization of a typical `LeRobotDataset` instantiated with `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`. The exact features will change from dataset to dataset but not the main aspects:

```
dataset attributes:
  ├ hf_dataset: a Hugging Face dataset (backed by Arrow/parquet). Typical features example:
  │  ├ observation.images.cam_high (VideoFrame):
  │  │   VideoFrame = {'path': path to a mp4 video, 'timestamp' (float32): timestamp in the video}
  │  ├ observation.state (list of float32): positions of the arm joints (for instance)
  │  ... (more observations)
  │  ├ action (list of float32): goal positions of the arm joints (for instance)
  │  ├ episode_index (int64): index of the episode for this sample
  │  ├ frame_index (int64): index of the frame for this sample in the episode ; starts at 0 for each episode
  │  ├ timestamp (float32): timestamp in the episode
  │  ├ next.done (bool): indicates the end of an episode ; True for the last frame in each episode
  │  └ index (int64): general index in the whole dataset
  ├ episode_data_index: contains 2 tensors with the start and end indices of each episode
  │  ├ from (1D int64 tensor): first frame index for each episode — shape (num episodes,) starts with 0
  │  └ to: (1D int64 tensor): last frame index for each episode — shape (num episodes,)
  ├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
  │  ├ observation.images.cam_high: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
  │  ...
  ├ info: a dictionary of metadata on the dataset
  │  ├ codebase_version (str): this is to keep track of the codebase version the dataset was created with
  │  ├ fps (float): frames per second the dataset is recorded/synchronized to
  │  ├ video (bool): indicates if frames are encoded in mp4 video files to save space or stored as png files
  │  └ encoding (dict): if video, this documents the main options that were used with ffmpeg to encode the videos
  ├ videos_dir (Path): where the mp4 videos or png images are stored/accessed
  └ camera_keys (list of string): the keys to access camera features in the item returned by the dataset (e.g. `["observation.images.cam_high", ...]`)
```

A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:
- hf_dataset stored using Hugging Face datasets library serialization to parquet
- videos are stored in mp4 format to save space
- metadata are stored in plain json/jsonl files

Datasets can be uploaded/downloaded from the HuggingFace hub seamlessly. To work on a local dataset, you can specify its location with the `root` argument if it's not in the default `~/.cache/huggingface/lerobot` location.
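The `episode_data_index` tensors described above make per-episode slicing straightforward. A sketch with plain Python lists standing in for the 1D int64 tensors; whether lerobot's `to` index is inclusive or exclusive is an assumption here (the sketch treats it as an exclusive bound):

```python
def episode_frames(dataset, episode_data_index, ep: int):
    """Slice out the frames of episode `ep` using the from/to indices.

    Mirrors the structure above: 'from' is the first frame index of the
    episode; 'to' is used as an exclusive end bound (an assumption).
    """
    start = episode_data_index["from"][ep]
    end = episode_data_index["to"][ep]
    return dataset[start:end]

data = [f"frame_{i}" for i in range(10)]
idx = {"from": [0, 4, 7], "to": [4, 7, 10]}  # 3 episodes of 4, 3 and 3 frames
print(episode_frames(data, idx, 1))  # ['frame_4', 'frame_5', 'frame_6']
```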
### Evaluate a pretrained policy

Check out [example 2](./examples/2_evaluate_pretrained_policy.py) that illustrates how to download a pretrained policy from the Hugging Face hub, and run an evaluation on its corresponding environment.

We also provide a more capable script to parallelize the evaluation over multiple environments during the same rollout. Here is an example with a pretrained model hosted on [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht):

```bash
python lerobot/scripts/eval.py \
  --policy.path=lerobot/diffusion_pusht \
  --env.type=pusht \
  --eval.batch_size=10 \
  --eval.n_episodes=10 \
  --use_amp=false \
  --device=cuda
```

To train ACT on the recorded dataset:

```bash
python lerobot/scripts/train.py \
  --dataset.repo_id=${HF_USER}/jack \
  --policy.type=act \
  --output_dir=outputs/train/act_jack \
  --job_name=act_jack \
  --device=cuda \
  --wandb.enable=true
```
Note: After training your own policy, you can re-evaluate the checkpoints with:

```bash
python lerobot/scripts/eval.py --policy.path={OUTPUT_DIR}/checkpoints/last/pretrained_model
```

See `python lerobot/scripts/eval.py --help` for more instructions.

# Inference

Inference still uses the record loop in `control_robot.py`; passing **--robot.inference_time=true** takes the gamepad out of the control loop.

```bash
python lerobot/scripts/control_robot.py \
  --robot.type=piper \
  --robot.inference_time=true \
  --control.type=record \
  --control.fps=30 \
  --control.single_task="move" \
  --control.repo_id=$USER/eval_act_jack \
  --control.num_episodes=1 \
  --control.warmup_time_s=2 \
  --control.episode_time_s=30 \
  --control.reset_time_s=10 \
  --control.push_to_hub=false \
  --control.policy.path=outputs/train/act_koch_pick_place_lego/checkpoints/latest/pretrained_model
```
### Train your own policy

Check out [example 3](./examples/3_train_policy.py) that illustrates how to train a model using our core library in python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from the command line.

To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when running the training command above, enable WandB in the configuration by adding `--wandb.enable=true`.

A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](./examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explanation of some commonly used metrics in logs.



Note: For efficiency, during training every checkpoint is evaluated on a low number of episodes. You may use `--eval.n_episodes=500` to evaluate on more episodes than the default. Or, after training, you may want to re-evaluate your best checkpoints on more episodes or change the evaluation settings. See `python lerobot/scripts/eval.py --help` for more instructions.

#### Reproduce state-of-the-art (SOTA)

We provide some pretrained policies on our [hub page](https://huggingface.co/lerobot) that can achieve state-of-the-art performance.

You can reproduce their training by loading the config from their run. Simply running:

```bash
python lerobot/scripts/train.py --config_path=lerobot/diffusion_pusht
```

reproduces SOTA results for Diffusion Policy on the PushT task.
## Contribute

If you would like to contribute to 🤗 LeRobot, please check out our [contribution guide](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md).

<!-- ### Add a new dataset

To add a dataset to the hub, you need to login using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):

```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

Then point to your raw dataset folder (e.g. `data/aloha_static_pingpong_test_raw`), and push your dataset to the hub with:

```bash
python lerobot/scripts/push_dataset_to_hub.py \
  --raw-dir data/aloha_static_pingpong_test_raw \
  --out-dir data \
  --repo-id lerobot/aloha_static_pingpong_test \
  --raw-format aloha_hdf5
```

See `python lerobot/scripts/push_dataset_to_hub.py --help` for more instructions.

If your dataset format is not supported, implement your own in `lerobot/common/datasets/push_dataset_to_hub/${raw_format}_format.py` by copying examples like [pusht_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/pusht_zarr_format.py), [umi_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/umi_zarr_format.py), [aloha_hdf5](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/aloha_hdf5_format.py), or [xarm_pkl](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/xarm_pkl_format.py). -->
### Add a pretrained policy

Once you have trained a policy you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).

You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:

- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
- `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.

To upload these to the hub, run the following:

```bash
huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
```

See [eval.py](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py) for an example of how other people may use your policy.
### Improve your code with profiling

An example of a code snippet to profile the evaluation of a policy:

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

def trace_handler(prof):
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        wait=2,
        warmup=2,
        active=3,
    ),
    on_trace_ready=trace_handler
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):
            prof.step()
            # insert code to profile, potentially whole body of eval_policy function
```
## Citation

If you want, you can cite this work with:

```bibtex
@misc{cadene2024lerobot,
    author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Wolf, Thomas},
    title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
    howpublished = "\url{https://github.com/huggingface/lerobot}",
    year = {2024}
}
```

Additionally, if you are using any of the particular policy architectures, pretrained models, or datasets, it is recommended to cite the original authors of the work as they appear below:

- [Diffusion Policy](https://diffusion-policy.cs.columbia.edu)

```bibtex
@article{chi2024diffusionpolicy,
    author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
    title = {Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
    journal = {The International Journal of Robotics Research},
    year = {2024}
}
```

- [ACT or ALOHA](https://tonyzhaozh.github.io/aloha)

```bibtex
@article{zhao2023learning,
    title = {Learning fine-grained bimanual manipulation with low-cost hardware},
    author = {Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
    journal = {arXiv preprint arXiv:2304.13705},
    year = {2023}
}
```

- [TDMPC](https://www.nicklashansen.com/td-mpc/)

```bibtex
@inproceedings{Hansen2022tdmpc,
    title = {Temporal Difference Learning for Model Predictive Control},
    author = {Nicklas Hansen and Xiaolong Wang and Hao Su},
    booktitle = {ICML},
    year = {2022}
}
```

- [VQ-BeT](https://sjlee.cc/vq-bet/)

```bibtex
@article{lee2024behavior,
    title = {Behavior generation with latent actions},
    author = {Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
    journal = {arXiv preprint arXiv:2403.03181},
    year = {2024}
}
```
README_OLD.md (new file, 377 lines)
|
|||||||
|
<p align="center">
|
||||||
|
<picture>
|
||||||
|
<source media="(prefers-color-scheme: dark)" srcset="media/lerobot-logo-thumbnail.png">
|
||||||
|
<source media="(prefers-color-scheme: light)" srcset="media/lerobot-logo-thumbnail.png">
|
||||||
|
<img alt="LeRobot, Hugging Face Robotics Library" src="media/lerobot-logo-thumbnail.png" style="max-width: 100%;">
|
||||||
|
</picture>
|
||||||
|
<br/>
|
||||||
|
<br/>
|
||||||
|
</p>
|
||||||
|
|
||||||
|
<div align="center">
|
||||||
|
|
||||||
|
[](https://github.com/huggingface/lerobot/actions/workflows/nightly-tests.yml?query=branch%3Amain)
|
||||||
|
[](https://codecov.io/gh/huggingface/lerobot)
|
||||||
|
[](https://www.python.org/downloads/)
|
||||||
|
[](https://github.com/huggingface/lerobot/blob/main/LICENSE)
|
||||||
|
[](https://pypi.org/project/lerobot/)
|
||||||
|
[](https://pypi.org/project/lerobot/)
|
||||||
|
[](https://github.com/huggingface/lerobot/tree/main/examples)
|
||||||
|
[](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md)
|
||||||
|
[](https://discord.gg/s3KuuzsPFb)
|
||||||
|
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<h2 align="center">
|
||||||
|
<p><a href="https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md">New robot in town: SO-100</a></p>
|
||||||
|
</h2>
|
||||||
|
|
||||||
|
<div align="center">
|
||||||
|
<img src="media/so100/leader_follower.webp?raw=true" alt="SO-100 leader and follower arms" title="SO-100 leader and follower arms" width="50%">
|
||||||
|
<p>We just added a new tutorial on how to build a more affordable robot, at the price of $110 per arm!</p>
|
||||||
|
<p>Teach it new skills by showing it a few moves with just a laptop.</p>
|
||||||
|
<p>Then watch your homemade robot act autonomously 🤯</p>
|
||||||
|
<p>Follow the link to the <a href="https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md">full tutorial for SO-100</a>.</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<br/>
|
||||||
|
|
||||||
|
<h3 align="center">
|
||||||
|
<p>LeRobot: State-of-the-art AI for real-world robotics</p>
|
||||||
|
</h3>
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
|
||||||
|
🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
|
||||||
|
|
||||||
|
🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real-world with a focus on imitation learning and reinforcement learning.
|
||||||
|
|
||||||
|
🤗 LeRobot already provides a set of pretrained models, datasets with human collected demonstrations, and simulation environments to get started without assembling a robot. In the coming weeks, the plan is to add more and more support for real-world robotics on the most affordable and capable robots out there.
|
||||||
|
|
||||||
|
🤗 LeRobot hosts pretrained models and datasets on this Hugging Face community page: [huggingface.co/lerobot](https://huggingface.co/lerobot)
|
||||||
|
|
||||||
|
#### Examples of pretrained models on simulation environments

<table>
  <tr>
    <td><img src="media/gym/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
    <td><img src="media/gym/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
    <td><img src="media/gym/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
  </tr>
  <tr>
    <td align="center">ACT policy on ALOHA env</td>
    <td align="center">TDMPC policy on SimXArm env</td>
    <td align="center">Diffusion policy on PushT env</td>
  </tr>
</table>

### Acknowledgment

- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing the ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
- Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing the Diffusion policy, PushT environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
- Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing the TDMPC policy, SimXArm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
- Thanks to Antonio Loquercio and Ashish Kumar for their early support.
- Thanks to [Seungjae (Jay) Lee](https://sjlee.cc/), [Mahi Shafiullah](https://mahis.life/) and colleagues for open sourcing the [VQ-BeT](https://sjlee.cc/vq-bet/) policy and helping us adapt the codebase to our repository. The policy is adapted from the [VQ-BeT repo](https://github.com/jayLEE0301/vq_bet_official).

## Installation

Download our source code:
```bash
git clone https://github.com/huggingface/lerobot.git
cd lerobot
```

Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):
```bash
conda create -y -n lerobot python=3.10
conda activate lerobot
```

Install 🤗 LeRobot:
```bash
pip install -e .
```

> **NOTE:** Depending on your platform, if you encounter build errors during this step, you may need to install `cmake` and `build-essential` to build some of our dependencies. On Linux: `sudo apt-get install cmake build-essential`

For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
- [aloha](https://github.com/huggingface/gym-aloha)
- [xarm](https://github.com/huggingface/gym-xarm)
- [pusht](https://github.com/huggingface/gym-pusht)

For instance, to install 🤗 LeRobot with aloha and pusht, use:
```bash
pip install -e ".[aloha, pusht]"
```

To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
```bash
wandb login
```

(note: you will also need to enable WandB in the configuration. See below.)
## Walkthrough

```
.
├── examples             # contains demonstration examples, start here to learn about LeRobot
|   └── advanced         # contains even more examples for those who have mastered the basics
├── lerobot
|   ├── configs          # contains config classes with all options that you can override in the command line
|   ├── common           # contains classes and utilities
|   |   ├── datasets     # various datasets of human demonstrations: aloha, pusht, xarm
|   |   ├── envs         # various sim environments: aloha, pusht, xarm
|   |   ├── policies     # various policies: act, diffusion, tdmpc
|   |   ├── robot_devices # various real devices: dynamixel motors, opencv cameras, koch robots
|   |   └── utils        # various utilities
|   └── scripts          # contains functions to execute via command line
|       ├── eval.py                 # load policy and evaluate it on an environment
|       ├── train.py                # train a policy via imitation learning and/or reinforcement learning
|       ├── control_robot.py        # teleoperate a real robot, record data, run a policy
|       ├── push_dataset_to_hub.py  # convert your dataset into LeRobot dataset format and upload it to the Hugging Face hub
|       └── visualize_dataset.py    # load a dataset and render its demonstrations
├── outputs              # contains results of scripts execution: logs, videos, model checkpoints
└── tests                # contains pytest utilities for continuous integration
```

### Visualize datasets

Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class, which automatically downloads data from the Hugging Face hub.

You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:
```bash
python lerobot/scripts/visualize_dataset.py \
    --repo-id lerobot/pusht \
    --episode-index 0
```

or from a dataset in a local folder with the `--root` option and the `--local-files-only` flag (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`):
```bash
python lerobot/scripts/visualize_dataset.py \
    --repo-id lerobot/pusht \
    --root ./my_local_data_dir \
    --local-files-only 1 \
    --episode-index 0
```

It will open `rerun.io` and display the camera streams, robot states and actions, like this:

https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-fd46b787-b532-47e2-bb6f-fd536a55a7ed.mov?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240505%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240505T172924Z&X-Amz-Expires=300&X-Amz-Signature=d680b26c532eeaf80740f08af3320d22ad0b8a4e4da1bcc4f33142c15b509eda&X-Amz-SignedHeaders=host&actor_id=24889239&key_id=0&repo_id=748713144

Our script can also visualize datasets stored on a distant server. See `python lerobot/scripts/visualize_dataset.py --help` for more instructions.
### The `LeRobotDataset` format

A dataset in `LeRobotDataset` format is very simple to use. It can be loaded from a repository on the Hugging Face hub or from a local folder simply with e.g. `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`, and can be indexed into like any Hugging Face and PyTorch dataset. For instance, `dataset[0]` will retrieve a single temporal frame from the dataset containing observation(s) and an action as PyTorch tensors, ready to be fed to a model.

A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a list of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}` one can retrieve, for a given index, 4 frames: 3 "previous" frames 1 second, 0.5 seconds, and 0.2 seconds before the indexed frame, and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](examples/1_load_lerobot_dataset.py) for more details on `delta_timestamps`.
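The `delta_timestamps` mechanism boils down to converting relative times into frame offsets at the dataset's `fps`. Here is a minimal self-contained sketch of that conversion (plain Python, not the actual `LeRobotDataset` implementation):

```python
def delta_timestamps_to_offsets(delta_timestamps, fps):
    """Convert relative times (in seconds) to integer frame offsets at a given fps."""
    return {key: [round(dt * fps) for dt in dts] for key, dts in delta_timestamps.items()}

# With the example from the text, at 10 frames per second:
offsets = delta_timestamps_to_offsets({"observation.image": [-1, -0.5, -0.2, 0]}, fps=10)
print(offsets)  # {'observation.image': [-10, -5, -2, 0]}
```

So at 10 fps, asking for frames 1 s, 0.5 s, and 0.2 s in the past amounts to gathering the frames 10, 5, and 2 indices before the current one.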

Under the hood, the `LeRobotDataset` format makes use of several ways to serialize data, which can be useful to understand if you plan to work more closely with this format. We tried to make a flexible yet simple dataset format that covers most types of features and specificities present in reinforcement learning and robotics, in simulation and in the real world, with a focus on cameras and robot states, but easily extended to other types of sensory inputs as long as they can be represented by a tensor.

Here are the important details and internal structure organization of a typical `LeRobotDataset` instantiated with `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`. The exact features will change from dataset to dataset but not the main aspects:

```
dataset attributes:
  ├ hf_dataset: a Hugging Face dataset (backed by Arrow/parquet). Typical features example:
  │  ├ observation.images.cam_high (VideoFrame):
  │  │   VideoFrame = {'path': path to a mp4 video, 'timestamp' (float32): timestamp in the video}
  │  ├ observation.state (list of float32): position of the arm joints (for instance)
  │  ... (more observations)
  │  ├ action (list of float32): goal position of the arm joints (for instance)
  │  ├ episode_index (int64): index of the episode for this sample
  │  ├ frame_index (int64): index of the frame for this sample in the episode; starts at 0 for each episode
  │  ├ timestamp (float32): timestamp in the episode
  │  ├ next.done (bool): indicates the end of an episode; True for the last frame in each episode
  │  └ index (int64): general index in the whole dataset
  ├ episode_data_index: contains 2 tensors with the start and end indices of each episode
  │  ├ from (1D int64 tensor): first frame index for each episode — shape (num episodes,), starts with 0
  │  └ to (1D int64 tensor): last frame index for each episode — shape (num episodes,)
  ├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
  │  ├ observation.images.cam_high: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
  │  ...
  ├ info: a dictionary of metadata on the dataset
  │  ├ codebase_version (str): keeps track of the codebase version the dataset was created with
  │  ├ fps (float): frames per second the dataset is recorded/synchronized to
  │  ├ video (bool): indicates if frames are encoded in mp4 video files to save space or stored as png files
  │  └ encoding (dict): if video, documents the main options that were used with ffmpeg to encode the videos
  ├ videos_dir (Path): where the mp4 videos or png images are stored/accessed
  └ camera_keys (list of string): the keys to access camera features in the item returned by the dataset (e.g. `["observation.images.cam_high", ...]`)
```
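To illustrate how `episode_data_index` relates to the flat `hf_dataset`, here is a hypothetical sketch that recomputes per-episode boundaries from a toy `episode_index` column (plain Python lists stand in for the int64 tensors, and the sketch uses an exclusive-end convention for `to`; check the codebase for the exact convention used there):

```python
def compute_episode_data_index(episode_index):
    """Recompute per-episode start (inclusive) and end (exclusive) frame indices
    from a flat episode_index column."""
    if not episode_index:
        return {"from": [], "to": []}
    froms, tos = [], []
    prev = None
    for i, ep in enumerate(episode_index):
        if ep != prev:
            if prev is not None:
                tos.append(i)  # close the previous episode
            froms.append(i)  # open a new episode
            prev = ep
    tos.append(len(episode_index))  # close the last episode
    return {"from": froms, "to": tos}

# Two episodes: frames 0-2 belong to episode 0, frames 3-4 to episode 1.
boundaries = compute_episode_data_index([0, 0, 0, 1, 1])
print(boundaries)  # {'from': [0, 3], 'to': [3, 5]}
```

With these boundaries, all frames of episode `e` can be retrieved by slicing the flat dataset from `boundaries["from"][e]` to `boundaries["to"][e]`.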

A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:
- the hf_dataset is stored using the Hugging Face datasets library serialization to parquet
- videos are stored in mp4 format to save space
- metadata are stored in plain json/jsonl files
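As a small illustration of the jsonl convention used for metadata (with made-up keys, not the exact schema), records are written one JSON object per line and read back with the standard library:

```python
import json
from io import StringIO

episodes = [
    {"episode_index": 0, "length": 300},
    {"episode_index": 1, "length": 287},
]

# Write one JSON object per line (the jsonl convention), here into an in-memory buffer.
buf = StringIO()
for ep in episodes:
    buf.write(json.dumps(ep) + "\n")

# Read it back line by line.
buf.seek(0)
loaded = [json.loads(line) for line in buf]
print(loaded == episodes)  # True
```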

Datasets can be uploaded to and downloaded from the Hugging Face hub seamlessly. To work on a local dataset, you can specify its location with the `root` argument if it's not in the default `~/.cache/huggingface/lerobot` location.

### Evaluate a pretrained policy

Check out [example 2](./examples/2_evaluate_pretrained_policy.py) that illustrates how to download a pretrained policy from the Hugging Face hub, and run an evaluation on its corresponding environment.

We also provide a more capable script to parallelize the evaluation over multiple environments during the same rollout. Here is an example with a pretrained model hosted on [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht):
```bash
python lerobot/scripts/eval.py \
    --policy.path=lerobot/diffusion_pusht \
    --env.type=pusht \
    --eval.batch_size=10 \
    --eval.n_episodes=10 \
    --use_amp=false \
    --device=cuda
```

Note: After training your own policy, you can re-evaluate the checkpoints with:

```bash
python lerobot/scripts/eval.py --policy.path={OUTPUT_DIR}/checkpoints/last/pretrained_model
```

See `python lerobot/scripts/eval.py --help` for more instructions.
### Train your own policy
|
||||||
|
|
||||||
|
Check out [example 3](./examples/3_train_policy.py) that illustrate how to train a model using our core library in python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from command line.
|
||||||
|
|
||||||
|
To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when running the training command above, enable WandB in the configuration by adding `--wandb.enable=true`.
|
||||||
|
|
||||||
|
A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](./examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explanation of some commonly used metrics in logs.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Note: For efficiency, during training every checkpoint is evaluated on a low number of episodes. You may use `--eval.n_episodes=500` to evaluate on more episodes than the default. Or, after training, you may want to re-evaluate your best checkpoints on more episodes or change the evaluation settings. See `python lerobot/scripts/eval.py --help` for more instructions.
|
||||||
|
|
||||||
|
#### Reproduce state-of-the-art (SOTA)
|
||||||
|
|
||||||
|
We provide some pretrained policies on our [hub page](https://huggingface.co/lerobot) that can achieve state-of-the-art performances.
|
||||||
|
You can reproduce their training by loading the config from their run. Simply running:
|
||||||
|
```bash
|
||||||
|
python lerobot/scripts/train.py --config_path=lerobot/diffusion_pusht
|
||||||
|
```
|
||||||
|
reproduces SOTA results for Diffusion Policy on the PushT task.
|
||||||
|
|
||||||
|
## Contribute

If you would like to contribute to 🤗 LeRobot, please check out our [contribution guide](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md).

<!-- ### Add a new dataset

To add a dataset to the hub, you need to login using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

Then point to your raw dataset folder (e.g. `data/aloha_static_pingpong_test_raw`), and push your dataset to the hub with:
```bash
python lerobot/scripts/push_dataset_to_hub.py \
    --raw-dir data/aloha_static_pingpong_test_raw \
    --out-dir data \
    --repo-id lerobot/aloha_static_pingpong_test \
    --raw-format aloha_hdf5
```

See `python lerobot/scripts/push_dataset_to_hub.py --help` for more instructions.

If your dataset format is not supported, implement your own in `lerobot/common/datasets/push_dataset_to_hub/${raw_format}_format.py` by copying examples like [pusht_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/pusht_zarr_format.py), [umi_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/umi_zarr_format.py), [aloha_hdf5](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/aloha_hdf5_format.py), or [xarm_pkl](https://github.com/huggingface/lerobot/blob/main/lerobot/common/datasets/push_dataset_to_hub/xarm_pkl_format.py). -->

### Add a pretrained policy

Once you have trained a policy you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).

You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:
- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
- `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.
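The consistency requirement between `config.json` and the training config can be checked mechanically. A hedged sketch with toy stand-ins for the two files (the `policy` key inside `train_config.json` is an assumption about its layout; verify against your own checkpoint):

```python
import json

# Toy stand-ins for the two files in pretrained_model/ (hypothetical contents).
config = json.loads('{"type": "act", "chunk_size": 100}')
train_config = json.loads('{"seed": 1000, "policy": {"type": "act", "chunk_size": 100}}')

# The policy section of the training config should match config.json exactly.
matches = train_config["policy"] == config
print(matches)  # True
```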

To upload these to the hub, run the following:
```bash
huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
```

See [eval.py](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py) for an example of how other people may use your policy.

### Improve your code with profiling

An example of a code snippet to profile the evaluation of a policy:
```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

def trace_handler(prof):
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        wait=2,
        warmup=2,
        active=3,
    ),
    on_trace_ready=trace_handler
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):
            prof.step()
            # insert code to profile, potentially whole body of eval_policy function
```

## Citation

If you want, you can cite this work with:
```bibtex
@misc{cadene2024lerobot,
    author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Wolf, Thomas},
    title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
    howpublished = "\url{https://github.com/huggingface/lerobot}",
    year = {2024}
}
```

Additionally, if you use any of the particular policy architectures, pretrained models, or datasets, we recommend citing the original authors of the work as they appear below:

- [Diffusion Policy](https://diffusion-policy.cs.columbia.edu)
```bibtex
@article{chi2024diffusionpolicy,
    author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
    title = {Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
    journal = {The International Journal of Robotics Research},
    year = {2024},
}
```

- [ACT or ALOHA](https://tonyzhaozh.github.io/aloha)
```bibtex
@article{zhao2023learning,
    title = {Learning fine-grained bimanual manipulation with low-cost hardware},
    author = {Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
    journal = {arXiv preprint arXiv:2304.13705},
    year = {2023}
}
```

- [TDMPC](https://www.nicklashansen.com/td-mpc/)
```bibtex
@inproceedings{Hansen2022tdmpc,
    title = {Temporal Difference Learning for Model Predictive Control},
    author = {Nicklas Hansen and Xiaolong Wang and Hao Su},
    booktitle = {ICML},
    year = {2022}
}
```

- [VQ-BeT](https://sjlee.cc/vq-bet/)
```bibtex
@article{lee2024behavior,
    title = {Behavior generation with latent actions},
    author = {Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
    journal = {arXiv preprint arXiv:2403.03181},
    year = {2024}
}
```
@@ -131,7 +131,7 @@ def encode_video_frames(
     imgs_dir: Path | str,
     video_path: Path | str,
     fps: int,
-    vcodec: str = "libsvtav1",
+    vcodec: str = "libopenh264",
    pix_fmt: str = "yuv420p",
    g: int | None = 2,
    crf: int | None = 30,
@@ -43,7 +43,7 @@ def log_control_info(robot: Robot, dt_s, episode_index=None, frame_index=None, f
     log_dt("dt", dt_s)

     # TODO(aliberts): move robot-specific logs logic in robot.print_logs()
-    if not robot.robot_type.startswith("stretch"):
+    if not robot.robot_type.startswith(("stretch", "piper")):
        for name in robot.leader_arms:
            key = f"read_leader_{name}_pos_dt_s"
            if key in robot.logs:
@@ -301,7 +301,7 @@ def stop_recording(robot, listener, display_cameras):


 def sanity_check_dataset_name(repo_id, policy_cfg):
-    _, dataset_name = repo_id.split("/")
+    dataset_name = repo_id.split("/")[-1]
    # either repo_id doesnt start with "eval_" and there is no policy
    # or repo_id starts with "eval_" and there is a policy

@@ -25,3 +25,10 @@ class FeetechMotorsBusConfig(MotorsBusConfig):
     port: str
     motors: dict[str, tuple[int, str]]
     mock: bool = False
+
+
+@MotorsBusConfig.register_subclass("piper")
+@dataclass
+class PiperMotorsBusConfig(MotorsBusConfig):
+    can_name: str
+    motors: dict[str, tuple[int, str]]

146 lerobot/common/robot_devices/motors/piper.py Normal file
@@ -0,0 +1,146 @@
import time
from typing import Dict

from piper_sdk import C_PiperInterface_V2

from lerobot.common.robot_devices.motors.configs import PiperMotorsBusConfig


class PiperMotorsBus:
    """
    A thin wrapper around the Piper SDK.
    """
    def __init__(self, config: PiperMotorsBusConfig):
        self.piper = C_PiperInterface_V2(config.can_name)
        self.piper.ConnectPort()
        self.motors = config.motors
        self.init_joint_position = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # [6 joints + 1 gripper] * 0.0
        self.safe_disable_position = [0.0, 0.0, 0.0, 0.0, 0.52, 0.0, 0.0]
        self.pose_factor = 1000  # unit: 0.001 mm
        self.joint_factor = 57324.840764  # 1000 * 180 / pi, rad -> degrees (in 0.001 degree units)

    @property
    def motor_names(self) -> list[str]:
        return list(self.motors.keys())

    @property
    def motor_models(self) -> list[str]:
        return [model for _, model in self.motors.values()]

    @property
    def motor_indices(self) -> list[int]:
        return [idx for idx, _ in self.motors.values()]
    def connect(self, enable: bool) -> bool:
        """
        Enable (or disable) the arm and check the enable status; retry for up
        to 5 seconds, and give up if the timeout is exceeded.
        """
        enable_flag = False
        loop_flag = False
        # timeout in seconds
        timeout = 5
        # record the time before entering the loop
        start_time = time.time()
        while not loop_flag:
            elapsed_time = time.time() - start_time
            print("--------------------")
            # read the driver enable status of each of the 6 joint motors
            low_spd_info = self.piper.GetArmLowSpdInfoMsgs()
            enable_list = [
                low_spd_info.motor_1.foc_status.driver_enable_status,
                low_spd_info.motor_2.foc_status.driver_enable_status,
                low_spd_info.motor_3.foc_status.driver_enable_status,
                low_spd_info.motor_4.foc_status.driver_enable_status,
                low_spd_info.motor_5.foc_status.driver_enable_status,
                low_spd_info.motor_6.foc_status.driver_enable_status,
            ]
            if enable:
                enable_flag = all(enable_list)
                self.piper.EnableArm(7)
                self.piper.GripperCtrl(0, 1000, 0x01, 0)
            else:
                # move to safe disconnect position
                enable_flag = any(enable_list)
                self.piper.DisableArm(7)
                self.piper.GripperCtrl(0, 1000, 0x02, 0)
            print(f"Enable status: {enable_flag}")
            print("--------------------")
            if enable_flag == enable:
                loop_flag = True
                enable_flag = True
            else:
                loop_flag = False
                enable_flag = False
            # check whether the timeout has been exceeded
            if elapsed_time > timeout:
                print("Timed out....")
                enable_flag = False
                loop_flag = True
                break
            time.sleep(0.5)
        resp = enable_flag
        print(f"Returning response: {resp}")
        return resp
    def set_calibration(self):
        return

    def revert_calibration(self):
        return

    def apply_calibration(self):
        """
        Move to the initial position.
        """
        self.write(target_joint=self.init_joint_position)
    def write(self, target_joint: list):
        """
        Joint control
        - target_joint: in radians
            joint_1 (float): joint 1 angle, (-92000 ~ 92000) / 57324.840764
            joint_2 (float): joint 2 angle, (-1300 ~ 90000) / 57324.840764
            joint_3 (float): joint 3 angle, (2400 ~ -80000) / 57324.840764
            joint_4 (float): joint 4 angle, (-90000 ~ 90000) / 57324.840764
            joint_5 (float): joint 5 angle, (19000 ~ -77000) / 57324.840764
            joint_6 (float): joint 6 angle, (-90000 ~ 90000) / 57324.840764
            gripper_range: gripper opening, 0 ~ 0.08
        """
        joint_0 = round(target_joint[0] * self.joint_factor)
        joint_1 = round(target_joint[1] * self.joint_factor)
        joint_2 = round(target_joint[2] * self.joint_factor)
        joint_3 = round(target_joint[3] * self.joint_factor)
        joint_4 = round(target_joint[4] * self.joint_factor)
        joint_5 = round(target_joint[5] * self.joint_factor)
        gripper_range = round(target_joint[6] * 1000 * 1000)

        self.piper.MotionCtrl_2(0x01, 0x01, 100, 0x00)  # joint control
        self.piper.JointCtrl(joint_0, joint_1, joint_2, joint_3, joint_4, joint_5)
        self.piper.GripperCtrl(abs(gripper_range), 1000, 0x01, 0)  # unit: 0.001 degree
    def read(self) -> Dict:
        """
        - arm joint state, in 0.001 degree units
        - arm gripper state
        """
        joint_msg = self.piper.GetArmJointMsgs()
        joint_state = joint_msg.joint_state

        gripper_msg = self.piper.GetArmGripperMsgs()
        gripper_state = gripper_msg.gripper_state

        return {
            "joint_1": joint_state.joint_1,
            "joint_2": joint_state.joint_2,
            "joint_3": joint_state.joint_3,
            "joint_4": joint_state.joint_4,
            "joint_5": joint_state.joint_5,
            "joint_6": joint_state.joint_6,
            "gripper": gripper_state.grippers_angle,
        }

    def safe_disconnect(self):
        """
        Move to the safe disconnect position.
        """
        self.write(target_joint=self.safe_disable_position)
@@ -3,7 +3,8 @@ from typing import Protocol
 from lerobot.common.robot_devices.motors.configs import (
     DynamixelMotorsBusConfig,
     FeetechMotorsBusConfig,
-    MotorsBusConfig,
+    PiperMotorsBusConfig,
+    MotorsBusConfig
 )

@@ -30,6 +31,11 @@ def make_motors_buses_from_configs(motors_bus_configs: dict[str, MotorsBusConfig

             motors_buses[key] = FeetechMotorsBus(cfg)

+        elif cfg.type == "piper":
+            from lerobot.common.robot_devices.motors.piper import PiperMotorsBus
+
+            motors_buses[key] = PiperMotorsBus(cfg)
+
         else:
             raise ValueError(f"The motor type '{cfg.type}' is not valid.")

@@ -48,6 +54,15 @@ def make_motors_bus(motor_type: str, **kwargs) -> MotorsBus:

         config = FeetechMotorsBusConfig(**kwargs)
         return FeetechMotorsBus(config)

+    elif motor_type == "piper":
+        from lerobot.common.robot_devices.motors.piper import PiperMotorsBus
+
+        config = PiperMotorsBusConfig(**kwargs)
+        return PiperMotorsBus(config)
+
     else:
         raise ValueError(f"The motor type '{motor_type}' is not valid.")


 def get_motor_names(arm: dict[str, MotorsBus]) -> list:
     return [f"{arm}_{motor}" for arm, bus in arm.items() for motor in bus.motors]
@@ -13,6 +13,7 @@ from lerobot.common.robot_devices.motors.configs import (
     DynamixelMotorsBusConfig,
     FeetechMotorsBusConfig,
     MotorsBusConfig,
+    PiperMotorsBusConfig
 )

@@ -597,3 +598,44 @@ class LeKiwiRobotConfig(RobotConfig):
    )

    mock: bool = False


@RobotConfig.register_subclass("piper")
@dataclass
class PiperRobotConfig(RobotConfig):
    # Give the flag a default so the dataclass stays constructible with no arguments.
    inference_time: bool = False

    follower_arm: dict[str, MotorsBusConfig] = field(
        default_factory=lambda: {
            "main": PiperMotorsBusConfig(
                can_name="can0",
                motors={
                    # name: (index, model)
                    "joint_1": [1, "agilex_piper"],
                    "joint_2": [2, "agilex_piper"],
                    "joint_3": [3, "agilex_piper"],
                    "joint_4": [4, "agilex_piper"],
                    "joint_5": [5, "agilex_piper"],
                    "joint_6": [6, "agilex_piper"],
                    "gripper": [7, "agilex_piper"],
                },
            ),
        }
    )

    cameras: dict[str, CameraConfig] = field(
        default_factory=lambda: {
            "one": OpenCVCameraConfig(
                camera_index=0,
                fps=30,
                width=640,
                height=480,
            ),
            "two": OpenCVCameraConfig(
                camera_index=2,
                fps=30,
                width=640,
                height=480,
            ),
        }
    )

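The `@RobotConfig.register_subclass("piper")` decorator follows a common registry pattern: the decorator records the config class under a type name so a factory can build it from a string. A minimal self-contained sketch (names simplified; not the actual lerobot implementation):

```python
from dataclasses import dataclass


@dataclass
class RobotConfig:
    # Class-level registry mapping a robot type name to its config class.
    _registry = {}

    @classmethod
    def register_subclass(cls, name):
        def decorator(subclass):
            cls._registry[name] = subclass
            return subclass
        return decorator

    @classmethod
    def from_type(cls, name, **kwargs):
        # Look up the registered config class and instantiate it.
        return cls._registry[name](**kwargs)


@RobotConfig.register_subclass("piper")
@dataclass
class PiperRobotConfig(RobotConfig):
    inference_time: bool = False
    can_name: str = "can0"


cfg = RobotConfig.from_type("piper", inference_time=True)
print(type(cfg).__name__, cfg.inference_time)
```

This is what lets `make_robot_config("piper", ...)` style dispatch work without a hand-written if/elif chain per robot.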
lerobot/common/robot_devices/robots/piper.py (new file, 235 lines)
@@ -0,0 +1,235 @@
"""
Teleoperate the Agilex Piper arm with a PS5 controller.
"""

import time

import torch
from dataclasses import replace

from lerobot.common.robot_devices.teleop.gamepad import SixAxisArmController
from lerobot.common.robot_devices.motors.utils import get_motor_names, make_motors_buses_from_configs
from lerobot.common.robot_devices.cameras.utils import make_cameras_from_configs
from lerobot.common.robot_devices.utils import RobotDeviceAlreadyConnectedError, RobotDeviceNotConnectedError
from lerobot.common.robot_devices.robots.configs import PiperRobotConfig


class PiperRobot:
    def __init__(self, config: PiperRobotConfig | None = None, **kwargs):
        if config is None:
            config = PiperRobotConfig()
        # Overwrite config arguments using kwargs
        self.config = replace(config, **kwargs)
        self.robot_type = self.config.type
        self.inference_time = self.config.inference_time  # True when running a policy instead of teleoperating

        # Build the cameras
        self.cameras = make_cameras_from_configs(self.config.cameras)

        # Build the Piper motors bus
        self.piper_motors = make_motors_buses_from_configs(self.config.follower_arm)
        self.arm = self.piper_motors["main"]

        # Build the gamepad teleoperator (only needed outside of inference)
        self.teleop = None if self.inference_time else SixAxisArmController()

        self.logs = {}
        self.is_connected = False

    @property
    def camera_features(self) -> dict:
        cam_ft = {}
        for cam_key, cam in self.cameras.items():
            key = f"observation.images.{cam_key}"
            cam_ft[key] = {
                "shape": (cam.height, cam.width, cam.channels),
                "names": ["height", "width", "channels"],
                "info": None,
            }
        return cam_ft

    @property
    def motor_features(self) -> dict:
        action_names = get_motor_names(self.piper_motors)
        state_names = get_motor_names(self.piper_motors)
        return {
            "action": {
                "dtype": "float32",
                "shape": (len(action_names),),
                "names": action_names,
            },
            "observation.state": {
                "dtype": "float32",
                "shape": (len(state_names),),
                "names": state_names,
            },
        }

    @property
    def has_camera(self):
        return len(self.cameras) > 0

    @property
    def num_cameras(self):
        return len(self.cameras)

    def connect(self) -> None:
        """Connect the Piper arm and the cameras."""
        if self.is_connected:
            raise RobotDeviceAlreadyConnectedError(
                "Piper is already connected. Do not run `robot.connect()` twice."
            )

        # Connect the arm
        self.arm.connect(enable=True)
        print("Piper connected")

        # Connect the cameras
        for name in self.cameras:
            self.cameras[name].connect()
            print(f"Camera {name} connected")

        print("All devices connected")
        self.is_connected = True

        self.run_calibration()

    def disconnect(self) -> None:
        """Move to the home position, then disable the arm and the cameras."""
        # Stop the gamepad thread
        if not self.inference_time:
            self.teleop.stop()

        # Move the arm home, then disable it
        self.arm.safe_disconnect()
        print("Disabling Piper in 5 seconds")
        time.sleep(5)
        self.arm.connect(enable=False)

        # Disconnect the cameras
        for cam in self.cameras.values():
            cam.disconnect()

        self.is_connected = False

    def run_calibration(self):
        """Move the Piper to its home position."""
        if not self.is_connected:
            raise ConnectionError("Piper is not connected. Run `robot.connect()` first.")

        self.arm.apply_calibration()
        if not self.inference_time:
            self.teleop.reset()

    def teleop_step(
        self, record_data=False
    ) -> None | tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]:
        if not self.is_connected:
            raise ConnectionError("Piper is not connected. Run `robot.connect()` first.")

        # Lazily create the gamepad controller if it was skipped at init
        if self.teleop is None:
            self.teleop = SixAxisArmController()

        # Read the current state and the target joint positions
        before_read_t = time.perf_counter()
        state = self.arm.read()  # current joint positions from the robot
        action = self.teleop.get_action()  # target joint positions from the gamepad
        self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t

        # Apply the action
        before_write_t = time.perf_counter()
        target_joints = list(action.values())
        self.arm.write(target_joints)
        self.logs["write_pos_dt_s"] = time.perf_counter() - before_write_t

        if not record_data:
            return

        state = torch.as_tensor(list(state.values()), dtype=torch.float32)
        action = torch.as_tensor(list(action.values()), dtype=torch.float32)

        # Capture images from the cameras
        images = {}
        for name in self.cameras:
            before_camread_t = time.perf_counter()
            images[name] = self.cameras[name].async_read()
            images[name] = torch.from_numpy(images[name])
            self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
            self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t

        # Populate output dictionaries
        obs_dict, action_dict = {}, {}
        obs_dict["observation.state"] = state
        action_dict["action"] = action
        for name in self.cameras:
            obs_dict[f"observation.images.{name}"] = images[name]

        return obs_dict, action_dict

    def send_action(self, action: torch.Tensor) -> torch.Tensor:
        """Write the actions predicted by a policy to the motors."""
        if not self.is_connected:
            raise RobotDeviceNotConnectedError(
                "Piper is not connected. You need to run `robot.connect()`."
            )

        # Convert the tensor to a list and send it to the motors
        target_joints = action.tolist()
        self.arm.write(target_joints)

        return action

    def capture_observation(self) -> dict:
        """Capture the current images and joint positions."""
        if not self.is_connected:
            raise RobotDeviceNotConnectedError(
                "Piper is not connected. You need to run `robot.connect()`."
            )

        # Read the current joint positions
        before_read_t = time.perf_counter()
        state = self.arm.read()  # 6 joints + 1 gripper
        self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t

        state = torch.as_tensor(list(state.values()), dtype=torch.float32)

        # Read images from the cameras
        images = {}
        for name in self.cameras:
            before_camread_t = time.perf_counter()
            images[name] = self.cameras[name].async_read()
            images[name] = torch.from_numpy(images[name])
            self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
            self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t

        # Populate the output dictionary and format to PyTorch
        obs_dict = {}
        obs_dict["observation.state"] = state
        for name in self.cameras:
            obs_dict[f"observation.images.{name}"] = images[name]
        return obs_dict

    def teleop_safety_stop(self):
        """Move back to the home position after recording an episode."""
        self.run_calibration()

    def __del__(self):
        if self.is_connected:
            self.disconnect()  # disconnect() also stops the gamepad thread
        elif not self.inference_time and self.teleop is not None:
            self.teleop.stop()
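The read/act cycle of `teleop_step` can be exercised without hardware. In the sketch below, `FakeArm` and `FakeTeleop` are hypothetical stand-ins for the motors bus and gamepad controller, and torch/cameras are omitted to keep it dependency-free:

```python
import time

# Hypothetical stand-ins for the arm bus and the gamepad used by PiperRobot.teleop_step.
class FakeArm:
    def __init__(self):
        self.last_command = None

    def read(self):
        # Current joint positions: 6 joints + 1 gripper, all at home.
        return {f"joint_{i}": 0.0 for i in range(1, 7)} | {"gripper": 0.0}

    def write(self, target_joints):
        self.last_command = list(target_joints)


class FakeTeleop:
    def get_action(self):
        # Target joint positions produced by the gamepad integration loop.
        return {"joint0": 0.1, "joint1": 0.0, "joint2": -0.2,
                "joint3": 0.0, "joint4": 0.0, "joint5": 0.0, "gripper": 0.02}


arm, teleop = FakeArm(), FakeTeleop()
logs = {}

# One teleop step, mirroring PiperRobot.teleop_step.
t0 = time.perf_counter()
state = arm.read()               # read current joint positions
action = teleop.get_action()     # read target joint positions from the gamepad
logs["read_pos_dt_s"] = time.perf_counter() - t0

arm.write(list(action.values()))  # send the targets to the motors
print(arm.last_command)
```

Because Python dicts preserve insertion order, `list(action.values())` yields the joints in the order the teleoperator defined them, which is what `self.arm.write` expects.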
@@ -10,6 +10,7 @@ from lerobot.common.robot_devices.robots.configs import (
    RobotConfig,
    So100RobotConfig,
    StretchRobotConfig,
    PiperRobotConfig,
)

@@ -48,6 +49,8 @@ def make_robot_config(robot_type: str, **kwargs) -> RobotConfig:
        return StretchRobotConfig(**kwargs)
    elif robot_type == "lekiwi":
        return LeKiwiRobotConfig(**kwargs)
    elif robot_type == "piper":
        return PiperRobotConfig(**kwargs)
    else:
        raise ValueError(f"Robot type '{robot_type}' is not available.")

@@ -61,6 +64,10 @@ def make_robot_from_config(config: RobotConfig):
        from lerobot.common.robot_devices.robots.mobile_manipulator import MobileManipulator

        return MobileManipulator(config)
    elif isinstance(config, PiperRobotConfig):
        from lerobot.common.robot_devices.robots.piper import PiperRobot

        return PiperRobot(config)
    else:
        from lerobot.common.robot_devices.robots.stretch import StretchRobot

lerobot/common/robot_devices/teleop/gamepad.py (new executable file, 134 lines)
@@ -0,0 +1,134 @@
import pygame
import threading
import time
from typing import Dict


class SixAxisArmController:
    def __init__(self):
        # Initialize pygame and the joystick module
        pygame.init()
        pygame.joystick.init()

        # Check that a gamepad is connected
        if pygame.joystick.get_count() == 0:
            raise Exception("No gamepad detected")

        # Initialize the gamepad
        self.joystick = pygame.joystick.Joystick(0)
        self.joystick.init()

        # Initialize the joint and gripper state
        self.joints = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # 6 joint positions
        self.gripper = 0.0  # gripper opening
        self.speeds = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # 6 joint velocities
        self.gripper_speed = 0.0  # gripper velocity

        # Joint limits in radians (precomputed ranges)
        self.joint_limits = [
            (-92000 / 57324.840764, 92000 / 57324.840764),  # joint1
            (-1300 / 57324.840764, 90000 / 57324.840764),   # joint2
            (-80000 / 57324.840764, 0 / 57324.840764),      # joint3
            (-90000 / 57324.840764, 90000 / 57324.840764),  # joint4
            (-77000 / 57324.840764, 19000 / 57324.840764),  # joint5
            (-90000 / 57324.840764, 90000 / 57324.840764),  # joint6
        ]

        # Start the update thread
        self.running = True
        self.thread = threading.Thread(target=self.update_joints)
        self.thread.start()

    def update_joints(self):
        while self.running:
            # Process the event queue
            try:
                pygame.event.pump()
            except Exception:
                # Don't call stop() here: it would join the current thread.
                self.running = False
                continue

            # Read the analog sticks, with a 0.5 dead zone
            left_x = -self.joystick.get_axis(0)  # left stick x-axis
            if abs(left_x) < 0.5:
                left_x = 0.0

            left_y = -self.joystick.get_axis(1)  # left stick y-axis (inverted: down is positive)
            if abs(left_y) < 0.5:
                left_y = 0.0

            right_x = -self.joystick.get_axis(3)  # right stick x-axis (inverted)
            if abs(right_x) < 0.5:
                right_x = 0.0

            # Read the D-pad
            hat = self.joystick.get_hat(0)
            up = hat[1] == 1
            down = hat[1] == -1
            left = hat[0] == -1
            right = hat[0] == 1

            # Read the buttons
            circle = self.joystick.get_button(1)  # circle button
            cross = self.joystick.get_button(0)  # cross button
            triangle = self.joystick.get_button(2)
            square = self.joystick.get_button(3)

            # Map the inputs to joint velocities
            self.speeds[0] = left_x * 0.01  # joint1 velocity
            self.speeds[1] = left_y * 0.01  # joint2 velocity
            self.speeds[2] = 0.01 if triangle else (-0.01 if square else 0.0)  # joint3 velocity
            self.speeds[3] = right_x * 0.01  # joint4 velocity
            self.speeds[4] = 0.01 if up else (-0.01 if down else 0.0)  # joint5 velocity
            self.speeds[5] = 0.01 if right else (-0.01 if left else 0.0)  # joint6 velocity
            self.gripper_speed = 0.01 if circle else (-0.01 if cross else 0.0)  # gripper velocity

            # Integrate the velocities into joint positions
            for i in range(6):
                self.joints[i] += self.speeds[i]
            self.gripper += self.gripper_speed

            # Clamp the joints to their limits
            for i in range(6):
                min_val, max_val = self.joint_limits[i]
                self.joints[i] = max(min_val, min(max_val, self.joints[i]))

            # Clamp the gripper range (0 to 0.08)
            self.gripper = max(0.0, min(0.08, self.gripper))

            # Limit the update rate to 50 Hz
            time.sleep(0.02)

    def get_action(self) -> Dict:
        # Return the current commanded state of the arm
        return {
            'joint0': self.joints[0],
            'joint1': self.joints[1],
            'joint2': self.joints[2],
            'joint3': self.joints[3],
            'joint4': self.joints[4],
            'joint5': self.joints[5],
            'gripper': self.gripper
        }

    def stop(self):
        # Stop the update thread
        self.running = False
        self.thread.join()
        pygame.quit()
        print("Gamepad exits")

    def reset(self):
        self.joints = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # 6 joint positions
        self.gripper = 0.0  # gripper opening
        self.speeds = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # 6 joint velocities
        self.gripper_speed = 0.0  # gripper velocity


# Usage example
if __name__ == "__main__":
    arm_controller = SixAxisArmController()
    try:
        while True:
            print(arm_controller.get_action())
            time.sleep(0.1)
    except KeyboardInterrupt:
        arm_controller.stop()
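The controller's update loop is an integrate-then-clamp scheme: each tick adds the commanded velocity to the joint position, then clamps the result to the joint's range. A minimal sketch of that step for a single joint, reusing joint2's limits from `SixAxisArmController`:

```python
# joint2's limits from SixAxisArmController (units: radians).
JOINT2_LIMITS = (-1300 / 57324.840764, 90000 / 57324.840764)


def step_joint(position, speed, limits):
    """Integrate one velocity step, then clamp to the joint's range."""
    lo, hi = limits
    return max(lo, min(hi, position + speed))


# Holding the stick down repeatedly saturates at the lower limit
# instead of winding past it.
pos = 0.0
for _ in range(10):
    pos = step_joint(pos, -0.01, JOINT2_LIMITS)
print(pos)
```

Clamping the integrated position (rather than the velocity) is what keeps a held stick from accumulating an out-of-range target that the arm would then chase.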
@@ -290,7 +290,7 @@ def record(
        fps=cfg.fps,
        single_task=cfg.single_task,
    )

    # Execute a few seconds without recording to give time to manually reset the environment
    # Current code logic doesn't allow to teleoperate during this time.
    # TODO(rcadene): add an option to enable teleoperation during reset

lerobot_piper_tutorial/1. 🤗 LeRobot:新增机械臂的一般流程.pdf (new binary file, not shown)
lerobot_piper_tutorial/2. 🤗 AutoDL训练.pdf (new binary file, not shown)
piper_scripts/can_activate.sh (new file, 138 lines)
@@ -0,0 +1,138 @@
#!/bin/bash

# Default CAN interface name; override with the first command-line parameter.
DEFAULT_CAN_NAME="${1:-can0}"

# Default bitrate for a single CAN module; override with the second parameter.
DEFAULT_BITRATE="${2:-1000000}"

# USB hardware address (optional third parameter)
USB_ADDRESS="${3}"
echo "-------------------START-----------------------"
# Check that ethtool is installed.
if ! dpkg -l | grep -q "ethtool"; then
    echo -e "\e[31mError: ethtool not detected in the system.\e[0m"
    echo "Please use the following command to install ethtool:"
    echo "sudo apt update && sudo apt install ethtool"
    exit 1
fi

# Check that can-utils is installed.
if ! dpkg -l | grep -q "can-utils"; then
    echo -e "\e[31mError: can-utils not detected in the system.\e[0m"
    echo "Please use the following command to install can-utils:"
    echo "sudo apt update && sudo apt install can-utils"
    exit 1
fi

echo "Both ethtool and can-utils are installed."

# Count the CAN modules currently present in the system.
CURRENT_CAN_COUNT=$(ip link show type can | grep -c "link/can")

# Verify that the number of CAN modules matches the expected value.
if [ "$CURRENT_CAN_COUNT" -ne "1" ]; then
    if [ -z "$USB_ADDRESS" ]; then
        # Iterate through all CAN interfaces.
        for iface in $(ip -br link show type can | awk '{print $1}'); do
            # Use ethtool to retrieve bus-info.
            BUS_INFO=$(sudo ethtool -i "$iface" | grep "bus-info" | awk '{print $2}')

            if [ -z "$BUS_INFO" ]; then
                echo "Error: Unable to retrieve bus-info for interface $iface."
                continue
            fi

            echo "Interface $iface is inserted into USB port $BUS_INFO"
        done
        echo -e " \e[31m Error: The number of CAN modules detected by the system ($CURRENT_CAN_COUNT) does not match the expected number (1). \e[0m"
        echo -e " \e[31m Please add the USB hardware address parameter, for example: \e[0m"
        echo -e " bash can_activate.sh can0 1000000 1-2:1.0"
        echo "-------------------ERROR-----------------------"
        exit 1
    fi
fi

# Load the gs_usb module (uncomment if it is not loaded automatically).
# sudo modprobe gs_usb
# if [ $? -ne 0 ]; then
#     echo "Error: Unable to load the gs_usb module."
#     exit 1
# fi

if [ -n "$USB_ADDRESS" ]; then
    echo "Detected USB hardware address parameter: $USB_ADDRESS"

    # Use ethtool to find the CAN interface corresponding to the USB hardware address.
    INTERFACE_NAME=""
    for iface in $(ip -br link show type can | awk '{print $1}'); do
        BUS_INFO=$(sudo ethtool -i "$iface" | grep "bus-info" | awk '{print $2}')
        if [ "$BUS_INFO" == "$USB_ADDRESS" ]; then
            INTERFACE_NAME="$iface"
            break
        fi
    done

    if [ -z "$INTERFACE_NAME" ]; then
        echo "Error: Unable to find a CAN interface corresponding to USB hardware address $USB_ADDRESS."
        exit 1
    else
        echo "Found the interface corresponding to USB hardware address $USB_ADDRESS: $INTERFACE_NAME."
    fi
else
    # Retrieve the single CAN interface.
    INTERFACE_NAME=$(ip -br link show type can | awk '{print $1}')

    # Check that an interface name was retrieved.
    if [ -z "$INTERFACE_NAME" ]; then
        echo "Error: Unable to detect a CAN interface."
        exit 1
    fi
    BUS_INFO=$(sudo ethtool -i "$INTERFACE_NAME" | grep "bus-info" | awk '{print $2}')
    echo "Expected to configure a single CAN module; detected interface $INTERFACE_NAME with corresponding USB address $BUS_INFO."
fi

# Check whether the interface is already up.
IS_LINK_UP=$(ip link show "$INTERFACE_NAME" | grep -q "UP" && echo "yes" || echo "no")

# Retrieve the current bitrate of the interface.
CURRENT_BITRATE=$(ip -details link show "$INTERFACE_NAME" | grep -oP 'bitrate \K\d+')

if [ "$IS_LINK_UP" == "yes" ] && [ "$CURRENT_BITRATE" -eq "$DEFAULT_BITRATE" ]; then
    echo "Interface $INTERFACE_NAME is already activated with a bitrate of $DEFAULT_BITRATE."

    # Check whether the interface name matches the default name.
    if [ "$INTERFACE_NAME" != "$DEFAULT_CAN_NAME" ]; then
        echo "Renaming interface $INTERFACE_NAME to $DEFAULT_CAN_NAME."
        sudo ip link set "$INTERFACE_NAME" down
        sudo ip link set "$INTERFACE_NAME" name "$DEFAULT_CAN_NAME"
        sudo ip link set "$DEFAULT_CAN_NAME" up
        echo "The interface has been renamed to $DEFAULT_CAN_NAME and reactivated."
    else
        echo "The interface name is already $DEFAULT_CAN_NAME."
    fi
else
    # The interface is down or has a different bitrate: configure it.
    if [ "$IS_LINK_UP" == "yes" ]; then
        echo "Interface $INTERFACE_NAME is activated, but its bitrate is $CURRENT_BITRATE, which does not match the requested $DEFAULT_BITRATE."
    else
        echo "Interface $INTERFACE_NAME is not activated or its bitrate is not set."
    fi

    # Set the interface bitrate and bring it up.
    sudo ip link set "$INTERFACE_NAME" down
    sudo ip link set "$INTERFACE_NAME" type can bitrate $DEFAULT_BITRATE
    sudo ip link set "$INTERFACE_NAME" up
    echo "Interface $INTERFACE_NAME has been reset to bitrate $DEFAULT_BITRATE and activated."

    # Rename the interface to the default name.
    if [ "$INTERFACE_NAME" != "$DEFAULT_CAN_NAME" ]; then
        echo "Renaming interface $INTERFACE_NAME to $DEFAULT_CAN_NAME."
        sudo ip link set "$INTERFACE_NAME" down
        sudo ip link set "$INTERFACE_NAME" name "$DEFAULT_CAN_NAME"
        sudo ip link set "$DEFAULT_CAN_NAME" up
        echo "The interface has been renamed to $DEFAULT_CAN_NAME and reactivated."
    fi
fi

echo "-------------------OVER------------------------"
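The script's core text-munging step, pulling the USB bus address out of `ethtool -i` output with `grep | awk`, can be exercised on canned input (the sample text below is illustrative, not from a real device):

```shell
# Sample `ethtool -i canX` output for a gs_usb adapter (illustrative).
SAMPLE_ETHTOOL_OUTPUT='driver: gs_usb
version: 6.5.0
bus-info: 1-2:1.0'

# Same pipeline the script uses to extract the USB hardware address.
BUS_INFO=$(printf '%s\n' "$SAMPLE_ETHTOOL_OUTPUT" | grep "bus-info" | awk '{print $2}')
echo "$BUS_INFO"
```

That extracted address is what gets compared against the optional third script argument (e.g. `bash can_activate.sh can0 1000000 1-2:1.0`) to pick the right interface when several CAN adapters are plugged in.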
piper_scripts/piper_disable.py (new file, 69 lines)
@@ -0,0 +1,69 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Note: this demo cannot be run directly; install the Piper SDK with pip first.
# Enable/disable the robot arm.
import time

from piper_sdk import *


def enable_fun(piper: C_PiperInterface_V2, enable: bool):
    """
    Enable (or disable) the arm and poll its status for up to 5 s;
    return False if the operation times out.
    """
    enable_flag = False
    loop_flag = False
    # Timeout in seconds
    timeout = 5
    # Record the time before entering the loop
    start_time = time.time()
    elapsed_time_flag = False
    while not loop_flag:
        elapsed_time = time.time() - start_time
        print("--------------------")
        enable_list = []
        enable_list.append(piper.GetArmLowSpdInfoMsgs().motor_1.foc_status.driver_enable_status)
        enable_list.append(piper.GetArmLowSpdInfoMsgs().motor_2.foc_status.driver_enable_status)
        enable_list.append(piper.GetArmLowSpdInfoMsgs().motor_3.foc_status.driver_enable_status)
        enable_list.append(piper.GetArmLowSpdInfoMsgs().motor_4.foc_status.driver_enable_status)
        enable_list.append(piper.GetArmLowSpdInfoMsgs().motor_5.foc_status.driver_enable_status)
        enable_list.append(piper.GetArmLowSpdInfoMsgs().motor_6.foc_status.driver_enable_status)
        if enable:
            enable_flag = all(enable_list)
            piper.EnableArm(7)
            piper.GripperCtrl(0, 1000, 0x01, 0)
        else:
            enable_flag = any(enable_list)
            piper.DisableArm(7)
            piper.GripperCtrl(0, 1000, 0x02, 0)
        print(f"Enable status: {enable_flag}")
        print("--------------------")
        if enable_flag == enable:
            loop_flag = True
            enable_flag = True
        else:
            loop_flag = False
            enable_flag = False
        # Check whether the timeout has been exceeded
        if elapsed_time > timeout:
            print("Timed out....")
            elapsed_time_flag = True
            enable_flag = False
            loop_flag = True
            break
        time.sleep(0.5)
    resp = enable_flag
    print(f"Returning response: {resp}")
    return resp


# Test code
if __name__ == "__main__":
    piper = C_PiperInterface_V2()
    piper.ConnectPort()
    flag = enable_fun(piper=piper, enable=False)
    if flag:
        print("Disable succeeded!")
        exit(0)
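`enable_fun` is a poll-with-deadline loop: repeatedly read the motor driver status, succeed when it matches the target, and bail out once a timeout elapses. The pattern can be sketched without the Piper SDK (the `all_motors_enabled` stub below is a simulated status check, not an SDK call):

```python
import time


def poll_until(predicate, timeout=5.0, interval=0.01):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    Mirrors the structure of `enable_fun`: check the status, give up
    with False once the deadline passes.
    """
    start = time.time()
    while True:
        if predicate():
            return True
        if time.time() - start > timeout:
            return False
        time.sleep(interval)


# Simulated motor bank: all motors report enabled from the third poll onward.
calls = {"n": 0}

def all_motors_enabled():
    calls["n"] += 1
    return calls["n"] >= 3


print(poll_until(all_motors_enabled, timeout=1.0))
```

Separating the deadline logic from the status check keeps the hardware-specific part (reading `driver_enable_status` over CAN) swappable and easy to test.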