Want to take it to the next level? Make your SO-101 mobile by building LeKiwi!
-
Check out the LeKiwi tutorial and bring your robot to life on wheels.
+
Check out the LeKiwi tutorial and bring your robot to life on wheels.
-
+
@@ -77,9 +87,9 @@
-
-
-
+
+
+
ACT policy on ALOHA env
@@ -88,123 +98,141 @@
-### Acknowledgment
-
-- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
-- Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing Diffusion policy, Pusht environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
-- Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing TDMPC policy, Simxarm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
-- Thanks to Antonio Loquercio and Ashish Kumar for their early support.
-- Thanks to [Seungjae (Jay) Lee](https://sjlee.cc/), [Mahi Shafiullah](https://mahis.life/) and colleagues for open sourcing [VQ-BeT](https://sjlee.cc/vq-bet/) policy and helping us adapt the codebase to our repository. The policy is adapted from [VQ-BeT repo](https://github.com/jayLEE0301/vq_bet_official).
-
-
## Installation
-Download our source code:
-```bash
-git clone https://github.com/huggingface/lerobot.git
-cd lerobot
-```
+LeRobot works with Python 3.10+ and PyTorch 2.2+.
+
+### Environment Setup
+
+Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniforge`](https://conda-forge.org/download/):
-Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):
```bash
conda create -y -n lerobot python=3.10
conda activate lerobot
```
-When using `miniconda`, install `ffmpeg` in your environment:
+When using `conda`, install `ffmpeg` in your environment:
+
```bash
conda install ffmpeg -c conda-forge
```
> **NOTE:** This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
-> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
-> ```bash
-> conda install ffmpeg=7.1.1 -c conda-forge
-> ```
-> - _[On Linux only]_ Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
+>
+> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
+>
+> ```bash
+> conda install ffmpeg=7.1.1 -c conda-forge
+> ```
+>
+> - _[On Linux only]_ Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
+
+### Install LeRobot 🤗
+
+#### From Source
+
+First, clone the repository and navigate into the directory:
+
+```bash
+git clone https://github.com/huggingface/lerobot.git
+cd lerobot
+```
+
+Then, install the library in editable mode. This is useful if you plan to contribute to the code.
-Install 🤗 LeRobot:
```bash
pip install -e .
```
> **NOTE:** If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and `ffmpeg libs`). On Linux, run:
-`sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev pkg-config`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
+> `sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
+
- [aloha](https://github.com/huggingface/gym-aloha)
- [xarm](https://github.com/huggingface/gym-xarm)
- [pusht](https://github.com/huggingface/gym-pusht)
For instance, to install 🤗 LeRobot with aloha and pusht, use:
+
```bash
pip install -e ".[aloha, pusht]"
```
+### Installation from PyPI
+
+**Core Library:**
+Install the base package with:
+
+```bash
+pip install lerobot
+```
+
+_This installs only the default dependencies._
+
+**Extra Features:**
+To install additional functionality, use one of the following:
+
+```bash
+pip install 'lerobot[all]' # All available features
+pip install 'lerobot[aloha,pusht]' # Specific features (Aloha & Pusht)
+pip install 'lerobot[feetech]' # Feetech motor support
+```
+
+_Replace `[...]` with your desired features._
+
+**Available Tags:**
+For a full list of optional dependencies, see:
+https://pypi.org/project/lerobot/
+
+> [!NOTE]
+> For lerobot 0.4.0, if you want to install the `libero` or `pi` extras, you will have to run: `pip install "lerobot[pi,libero]@git+https://github.com/huggingface/lerobot.git"`.
+>
+> This will be fixed in the next patch release.
+
+### Weights & Biases
+
To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
+
```bash
wandb login
```
(note: you will also need to enable WandB in the configuration. See below.)
-## Walkthrough
-
-```
-.
-├── examples # contains demonstration examples, start here to learn about LeRobot
-| └── advanced # contains even more examples for those who have mastered the basics
-├── lerobot
-| ├── configs # contains config classes with all options that you can override in the command line
-| ├── common # contains classes and utilities
-| | ├── datasets # various datasets of human demonstrations: aloha, pusht, xarm
-| | ├── envs # various sim environments: aloha, pusht, xarm
-| | ├── policies # various policies: act, diffusion, tdmpc
-| | ├── robot_devices # various real devices: dynamixel motors, opencv cameras, koch robots
-| | └── utils # various utilities
-| └── scripts # contains functions to execute via command line
-| ├── eval.py # load policy and evaluate it on an environment
-| ├── train.py # train a policy via imitation learning and/or reinforcement learning
-| ├── control_robot.py # teleoperate a real robot, record data, run a policy
-| ├── push_dataset_to_hub.py # convert your dataset into LeRobot dataset format and upload it to the Hugging Face hub
-| └── visualize_dataset.py # load a dataset and render its demonstrations
-├── outputs # contains results of scripts execution: logs, videos, model checkpoints
-└── tests # contains pytest utilities for continuous integration
-```
-
### Visualize datasets
-Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class which automatically downloads data from the Hugging Face hub.
+Check out [example 1](https://github.com/huggingface/lerobot/blob/main/examples/dataset/load_lerobot_dataset.py) that illustrates how to use our dataset class which automatically downloads data from the Hugging Face hub.
You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:
+
```bash
-python lerobot/scripts/visualize_dataset.py \
+lerobot-dataset-viz \
--repo-id lerobot/pusht \
--episode-index 0
```
-or from a dataset in a local folder with the `root` option and the `--local-files-only` (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`)
+or from a dataset in a local folder with the `--root` option and `--mode local` (in the following case, the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`):
+
```bash
-python lerobot/scripts/visualize_dataset.py \
+lerobot-dataset-viz \
--repo-id lerobot/pusht \
--root ./my_local_data_dir \
- --local-files-only 1 \
+ --mode local \
--episode-index 0
```
-
It will open `rerun.io` and display the camera streams, robot states and actions, like this:
https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-fd46b787-b532-47e2-bb6f-fd536a55a7ed.mov?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240505%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240505T172924Z&X-Amz-Expires=300&X-Amz-Signature=d680b26c532eeaf80740f08af3320d22ad0b8a4e4da1bcc4f33142c15b509eda&X-Amz-SignedHeaders=host&actor_id=24889239&key_id=0&repo_id=748713144
-
-Our script can also visualize datasets stored on a distant server. See `python lerobot/scripts/visualize_dataset.py --help` for more instructions.
+Our script can also visualize datasets stored on a distant server. See `lerobot-dataset-viz --help` for more instructions.
### The `LeRobotDataset` format
A dataset in `LeRobotDataset` format is very simple to use. It can be loaded from a repository on the Hugging Face hub or from a local folder with e.g. `dataset = LeRobotDataset("lerobot/aloha_static_coffee")` and can be indexed like any Hugging Face or PyTorch dataset. For instance, `dataset[0]` will retrieve a single temporal frame from the dataset containing observation(s) and an action as PyTorch tensors ready to be fed to a model.
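For instance, here is a minimal sketch of loading and indexing a dataset (using the `lerobot.datasets.lerobot_dataset` import path found elsewhere in this repository):

```python
# Minimal sketch: load a dataset from the Hugging Face hub and index a single frame.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/aloha_static_coffee")
print(len(dataset))  # total number of frames in the dataset

frame = dataset[0]  # dict of PyTorch tensors: observation(s) and action
print(frame.keys())
```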
-A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a list of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}` one can retrieve, for a given index, 4 frames: 3 "previous" frames 1 second, 0.5 seconds, and 0.2 seconds before the indexed frame, and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](examples/1_load_lerobot_dataset.py) for more details on `delta_timestamps`.
+A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a list of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}` one can retrieve, for a given index, 4 frames: 3 "previous" frames (1 second, 0.5 seconds, and 0.2 seconds before the indexed frame) and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](https://github.com/huggingface/lerobot/blob/main/examples/dataset/load_lerobot_dataset.py) for more details on `delta_timestamps`.
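As a rough sketch of how this could look in code (the `observation.image` key is specific to datasets such as `lerobot/pusht`; adjust it to your dataset's features):

```python
# Sketch: retrieve a stack of temporally related frames via delta_timestamps.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}
dataset = LeRobotDataset("lerobot/pusht", delta_timestamps=delta_timestamps)

item = dataset[100]
# The image entry now stacks 4 frames: 1s, 0.5s and 0.2s before the indexed
# frame, plus the indexed frame itself.
print(item["observation.image"].shape)  # e.g. (4, c, h, w)
```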
Under the hood, the `LeRobotDataset` format makes use of several ways to serialize data, which can be useful to understand if you plan to work more closely with this format. We tried to make a flexible yet simple dataset format that covers most types of features and specificities present in reinforcement learning and robotics, in simulation and in the real world, with a focus on cameras and robot states, but easily extended to other types of sensory inputs as long as they can be represented by a tensor.
@@ -223,191 +251,94 @@ dataset attributes:
│ ├ timestamp (float32): timestamp in the episode
│ ├ next.done (bool): indicates the end of an episode ; True for the last frame in each episode
│ └ index (int64): general index in the whole dataset
- ├ episode_data_index: contains 2 tensors with the start and end indices of each episode
- │ ├ from (1D int64 tensor): first frame index for each episode — shape (num episodes,) starts with 0
- │ └ to: (1D int64 tensor): last frame index for each episode — shape (num episodes,)
- ├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
- │ ├ observation.images.cam_high: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
- │ ...
- ├ info: a dictionary of metadata on the dataset
- │ ├ codebase_version (str): this is to keep track of the codebase version the dataset was created with
- │ ├ fps (float): frame per second the dataset is recorded/synchronized to
- │ ├ video (bool): indicates if frames are encoded in mp4 video files to save space or stored as png files
- │ └ encoding (dict): if video, this documents the main options that were used with ffmpeg to encode the videos
- ├ videos_dir (Path): where the mp4 videos or png images are stored/accessed
- └ camera_keys (list of string): the keys to access camera features in the item returned by the dataset (e.g. `["observation.images.cam_high", ...]`)
+ ├ meta: a LeRobotDatasetMetadata object containing:
+ │ ├ info: a dictionary of metadata on the dataset
+ │ │ ├ codebase_version (str): this is to keep track of the codebase version the dataset was created with
+ │ │ ├ fps (int): frame per second the dataset is recorded/synchronized to
+ │ │ ├ features (dict): all features contained in the dataset with their shapes and types
+ │ │ ├ total_episodes (int): total number of episodes in the dataset
+ │ │ ├ total_frames (int): total number of frames in the dataset
+ │ │ ├ robot_type (str): robot type used for recording
+ │ │ ├ data_path (str): formattable string for the parquet files
+ │ │ └ video_path (str): formattable string for the video files (if using videos)
+ │ ├ episodes: a DataFrame containing episode metadata with columns:
+ │ │ ├ episode_index (int): index of the episode
+ │ │ ├ tasks (list): list of tasks for this episode
+ │ │ ├ length (int): number of frames in this episode
+ │ │ ├ dataset_from_index (int): start index of this episode in the dataset
+ │ │ └ dataset_to_index (int): end index of this episode in the dataset
+ │ ├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
+ │ │ ├ observation.images.front_cam: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
+ │ │ └ ...
+ │ └ tasks: a DataFrame containing task information with task names as index and task_index as values
+ ├ root (Path): local directory where the dataset is stored
+ ├ image_transforms (Callable): optional image transformations to apply to visual modalities
+ └ delta_timestamps (dict): optional delta timestamps for temporal queries
```
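For example, here is a short sketch of inspecting this metadata through `dataset.meta` (attribute names follow the listing above; feature keys such as `observation.image` are dataset-specific):

```python
# Sketch: inspect dataset metadata via the LeRobotDatasetMetadata object.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht")
meta = dataset.meta

print(meta.info["fps"])             # frames per second
print(meta.info["total_episodes"])  # number of episodes
print(meta.episodes["length"])      # per-episode frame counts (DataFrame column)
print(meta.stats["observation.image"]["mean"].shape)  # per-feature statistics
```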
A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:
+
- hf_dataset stored using Hugging Face datasets library serialization to parquet
- videos are stored in mp4 format to save space
- metadata are stored in plain json/jsonl files
Datasets can be uploaded/downloaded to/from the Hugging Face hub seamlessly. To work on a local dataset, you can specify its location with the `root` argument if it's not in the default `~/.cache/huggingface/lerobot` location.
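As an illustrative sketch (assuming you already have a local copy of the dataset at the given path):

```python
# Sketch: load a dataset from a local directory instead of the hub cache.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset(
    "lerobot/pusht",
    root="./my_local_data_dir/lerobot/pusht",  # assumed local copy of the dataset
)
```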
-### Evaluate a pretrained policy
-
-Check out [example 2](./examples/2_evaluate_pretrained_policy.py) that illustrates how to download a pretrained policy from Hugging Face hub, and run an evaluation on its corresponding environment.
-
-We also provide a more capable script to parallelize the evaluation over multiple environments during the same rollout. Here is an example with a pretrained model hosted on [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht):
-```bash
-python lerobot/scripts/eval.py \
- --policy.path=lerobot/diffusion_pusht \
- --env.type=pusht \
- --eval.batch_size=10 \
- --eval.n_episodes=10 \
- --policy.use_amp=false \
- --policy.device=cuda
-```
-
-Note: After training your own policy, you can re-evaluate the checkpoints with:
-
-```bash
-python lerobot/scripts/eval.py --policy.path={OUTPUT_DIR}/checkpoints/last/pretrained_model
-```
-
-See `python lerobot/scripts/eval.py --help` for more instructions.
-
-### Train your own policy
-
-Check out [example 3](./examples/3_train_policy.py) that illustrates how to train a model using our core library in python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from command line.
-
-To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when running the training command above, enable WandB in the configuration by adding `--wandb.enable=true`.
-
-A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](./examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explanation of some commonly used metrics in logs.
-
-
-
-Note: For efficiency, during training every checkpoint is evaluated on a low number of episodes. You may use `--eval.n_episodes=500` to evaluate on more episodes than the default. Or, after training, you may want to re-evaluate your best checkpoints on more episodes or change the evaluation settings. See `python lerobot/scripts/eval.py --help` for more instructions.
-
#### Reproduce state-of-the-art (SOTA)
We provide some pretrained policies on our [hub page](https://huggingface.co/lerobot) that can achieve state-of-the-art performance.
You can reproduce their training by loading the config from their run. Simply running:
+
```bash
-python lerobot/scripts/train.py --config_path=lerobot/diffusion_pusht
+lerobot-train --config_path=lerobot/diffusion_pusht
```
+
reproduces SOTA results for Diffusion Policy on the PushT task.
## Contribute
If you would like to contribute to 🤗 LeRobot, please check out our [contribution guide](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md).
-
-
-
### Add a pretrained policy
Once you have trained a policy you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).
You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:
+
- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
- `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.
To upload these to the hub, run the following:
+
```bash
huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
```
-See [eval.py](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py) for an example of how other people may use your policy.
+See [lerobot_eval.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/lerobot_eval.py) for an example of how other people may use your policy.
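As a hedged sketch of what that usage could look like (assuming an ACT policy and the `lerobot.policies` import layout; the exact class depends on the policy you trained):

```python
# Sketch: load an uploaded policy from the hub for inference.
from lerobot.policies.act.modeling_act import ACTPolicy  # assumed import path

policy = ACTPolicy.from_pretrained("${hf_user}/${repo_name}")  # your hub id
policy.eval()
# policy.select_action(observation) can then be called in an evaluation loop.
```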
+### Acknowledgment
-### Improve your code with profiling
-
-An example of a code snippet to profile the evaluation of a policy:
-```python
-from torch.profiler import profile, record_function, ProfilerActivity
-
-def trace_handler(prof):
- prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")
-
-with profile(
- activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
- schedule=torch.profiler.schedule(
- wait=2,
- warmup=2,
- active=3,
- ),
- on_trace_ready=trace_handler
-) as prof:
- with record_function("eval_policy"):
- for i in range(num_episodes):
- prof.step()
- # insert code to profile, potentially whole body of eval_policy function
-```
+- The LeRobot team 🤗 for building SmolVLA ([Paper](https://arxiv.org/abs/2506.01844), [Blog](https://huggingface.co/blog/smolvla)).
+- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
+- Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing Diffusion policy, Pusht environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
+- Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing TDMPC policy, Simxarm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
+- Thanks to Antonio Loquercio and Ashish Kumar for their early support.
+- Thanks to [Seungjae (Jay) Lee](https://sjlee.cc/), [Mahi Shafiullah](https://mahis.life/) and colleagues for open sourcing [VQ-BeT](https://sjlee.cc/vq-bet/) policy and helping us adapt the codebase to our repository. The policy is adapted from [VQ-BeT repo](https://github.com/jayLEE0301/vq_bet_official).
## Citation
If you want, you can cite this work with:
+
```bibtex
@misc{cadene2024lerobot,
- author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascale, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
+ author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascal, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
howpublished = "\url{https://github.com/huggingface/lerobot}",
year = {2024}
}
```
-Additionally, if you are using any of the particular policy architecture, pretrained models, or datasets, it is recommended to cite the original authors of the work as they appear below:
-
-- [Diffusion Policy](https://diffusion-policy.cs.columbia.edu)
-```bibtex
-@article{chi2024diffusionpolicy,
- author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
- title ={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
- journal = {The International Journal of Robotics Research},
- year = {2024},
-}
-```
-- [ACT or ALOHA](https://tonyzhaozh.github.io/aloha)
-```bibtex
-@article{zhao2023learning,
- title={Learning fine-grained bimanual manipulation with low-cost hardware},
- author={Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
- journal={arXiv preprint arXiv:2304.13705},
- year={2023}
-}
-```
-
-- [TDMPC](https://www.nicklashansen.com/td-mpc/)
-
-```bibtex
-@inproceedings{Hansen2022tdmpc,
- title={Temporal Difference Learning for Model Predictive Control},
- author={Nicklas Hansen and Xiaolong Wang and Hao Su},
- booktitle={ICML},
- year={2022}
-}
-```
-
-- [VQ-BeT](https://sjlee.cc/vq-bet/)
-```bibtex
-@article{lee2024behavior,
- title={Behavior generation with latent actions},
- author={Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
- journal={arXiv preprint arXiv:2403.03181},
- year={2024}
-}
-```
## Star History
[](https://star-history.com/#huggingface/lerobot&Timeline)
diff --git a/benchmarks/video/README.md b/benchmarks/video/README.md
index daa3e1f4..490a4b49 100644
--- a/benchmarks/video/README.md
+++ b/benchmarks/video/README.md
@@ -1,28 +1,32 @@
# Video benchmark
-
## Questions
+
What is the optimal trade-off between:
+
- minimizing loading time with random access,
- minimizing memory space on disk,
- maximizing success rate of policies,
- compatibility across devices/platforms for decoding videos (e.g. video players, web browsers).
How to encode videos?
+
- Which video codec (`-vcodec`) to use? h264, h265, AV1?
- What pixel format to use (`-pix_fmt`)? `yuv444p` or `yuv420p`?
- How much compression (`-crf`)? No compression with `0`, intermediate compression with `25` or extreme with `50+`?
- Which frequency to choose for key frames (`-g`)? A key frame every `10` frames?
How to decode videos?
+
- Which `decoder`? `torchvision`, `torchaudio`, `ffmpegio`, `decord`, or `nvc`?
- What scenarios to use for requesting timestamps during the benchmark? (`timestamps_mode`)
-
## Variables
+
**Image content & size**
We don't expect the same optimal settings for a dataset of images from a simulation, or from the real world in an apartment, in a factory, outdoors, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution).
For these reasons, we run this benchmark on four representative datasets:
+
- `lerobot/pusht_image`: (96 x 96 pixels) simulation with simple geometric shapes, fixed camera.
- `aliberts/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
- `aliberts/paris_street`: (720 x 1280 pixels) real-world outdoor, moving camera.
@@ -34,8 +38,9 @@ Note: The datasets used for this benchmark need to be image datasets, not video
We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robust (e.g. robust to color changes, compression, etc.).
### Encoding parameters
+
| parameter | values |
-|-------------|--------------------------------------------------------------|
+| ----------- | ------------------------------------------------------------ |
| **vcodec** | `libx264`, `libx265`, `libsvtav1` |
| **pix_fmt** | `yuv444p`, `yuv420p` |
| **g** | `1`, `2`, `3`, `4`, `5`, `6`, `10`, `15`, `20`, `40`, `None` |
@@ -44,19 +49,23 @@ We might revisit this benchmark and find better settings if we train our policie
Note that the `crf` value might be interpreted differently by various video codecs. In other words, the same value used with one codec doesn't necessarily translate into the same compression level with another codec. In fact, the default value (`None`) isn't the same amongst the different video codecs. Importantly, the same is true for many other ffmpeg arguments like `g`, which specifies the frequency of the key frames.
For a comprehensive list and documentation of these parameters, see the ffmpeg documentation depending on the video codec used:
+
- h264: https://trac.ffmpeg.org/wiki/Encode/H.264
- h265: https://trac.ffmpeg.org/wiki/Encode/H.265
- AV1: https://trac.ffmpeg.org/wiki/Encode/AV1
### Decoding parameters
+
**Decoder**
We tested two video decoding backends from torchvision:
+
- `pyav`
- `video_reader` (requires building torchvision from source)
**Requested timestamps**
Given the way video decoding works, once a keyframe has been loaded, the decoding of subsequent frames is fast.
This of course is affected by the `-g` parameter during encoding, which specifies the frequency of the keyframes. Given our typical use cases in robotics policies, which might request a few timestamps at different random places, we want to replicate these use cases with the following scenarios:
+
- `1_frame`: 1 frame,
- `2_frames`: 2 consecutive frames (e.g. `[t, t + 1 / fps]`),
- `6_frames`: 6 consecutive frames (e.g. `[t + i / fps for i in range(6)]`)
@@ -64,12 +73,13 @@ This of course is affected by the `-g` parameter during encoding, which specifie
Note that this differs significantly from a typical use case like watching a movie, in which every frame is loaded sequentially from the beginning to the end and it's acceptable to have big values for `-g`.
Additionally, because some policies might request single timestamps that are a few frames apart, we also have the following scenario:
+
- `2_frames_4_space`: 2 frames with 4 consecutive frames of spacing in between (e.g `[t, t + 5 / fps]`),
However, due to how video decoding is implemented with `pyav`, we don't have access to an accurate seek, so in practice this scenario is essentially the same as `6_frames`, since all 6 frames between `t` and `t + 5 / fps` will be decoded.
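A small sketch (not the benchmark's actual code) of how the requested timestamps for each scenario can be generated around a random time `t`:

```python
# Sketch: build the list of requested timestamps for each timestamps_mode.
def scenario_timestamps(t: float, fps: float, mode: str) -> list[float]:
    if mode == "1_frame":
        return [t]
    if mode == "2_frames":
        return [t, t + 1 / fps]
    if mode == "6_frames":
        return [t + i / fps for i in range(6)]
    if mode == "2_frames_4_space":
        return [t, t + 5 / fps]  # 4 frames of spacing in between
    raise ValueError(f"Unknown timestamps mode: {mode}")
```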
-
## Metrics
+
**Data compression ratio (lower is better)**
`video_images_size_ratio` is the ratio of the memory space on disk taken by the encoded video over the memory space taken by the original images. For instance, `video_images_size_ratio=25%` means that the video takes 4 times less memory space on disk compared to the original images.
@@ -87,18 +97,18 @@ However, due to how video decoding is implemented with `pyav`, we don't have acc
One aspect that can't be measured here with those metrics is the compatibility of the encoding across platforms, in particular on web browsers, for visualization purposes.
h264, h265 and AV1 are all commonly used codecs and should not pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:
+
- `yuv420p` is more widely supported across various platforms, including web browsers.
- `yuv444p` offers higher color fidelity but might not be supported as broadly.
-
-
## How the benchmark works
+
The benchmark evaluates both encoding and decoding of video frames on the first episode of each dataset.
**Encoding:** for each `vcodec` and `pix_fmt` pair, we start from default values for `g` and `crf` and change a single one of them (either `g` or `crf`) to each of the specified values (we don't test every combination, as this would be computationally too heavy).
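As an illustrative sketch of that sweep (not the script's actual code; the `crf` sweep values and the defaults shown here are assumptions):

```python
# Sketch: enumerate encoding configs by varying one parameter at a time from defaults.
from itertools import product

VCODECS = ["libx264", "libx265", "libsvtav1"]
PIX_FMTS = ["yuv444p", "yuv420p"]
G_VALUES = [1, 2, 3, 4, 5, 6, 10, 15, 20, 40, None]
CRF_VALUES = [0, 10, 20, 30, 40, 50, None]  # assumed sweep values
DEFAULTS = {"g": 2, "crf": 30}  # assumed defaults

def encoding_configs() -> list[dict]:
    configs = []
    for vcodec, pix_fmt in product(VCODECS, PIX_FMTS):
        for g in G_VALUES:
            configs.append({"vcodec": vcodec, "pix_fmt": pix_fmt, "g": g, "crf": DEFAULTS["crf"]})
        for crf in CRF_VALUES:
            configs.append({"vcodec": vcodec, "pix_fmt": pix_fmt, "g": DEFAULTS["g"], "crf": crf})
    return configs
```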
@@ -110,15 +120,18 @@ Intermediate results saved for each `vcodec` and `pix_fmt` combination in csv ta
These are then all concatenated to a single table ready for analysis.
## Caveats
+
We tried to measure the most impactful parameters for both encoding and decoding. However, for computational reasons we can't test out every combination.
Additional encoding parameters exist that are not included in this benchmark. In particular:
+
- `-preset`, which selects an encoding preset. This represents a collection of options that trade off encoding speed against compression ratio. When left unspecified, it defaults to `medium` for libx264 and libx265 and to `8` for libsvtav1.
- `-tune`, which optimizes the encoding for certain aspects (e.g. film quality, fast decoding, etc.).
See the documentation mentioned above for more detailed info on these settings and for a more comprehensive list of other parameters.
Similarly on the decoding side, other decoders exist but are not implemented in our current benchmark. To name a few:
+
- `torchaudio`
- `ffmpegio`
- `decord`
@@ -127,16 +140,17 @@ Similarly on the decoding side, other decoders exist but are not implemented in
Note as well that since we are mostly interested in the performance at decoding time (also because encoding is done only once before uploading a dataset), we did not measure encoding times nor have any metrics regarding encoding.
However, besides the necessity to build ffmpeg from source, encoding did not pose any issue and it didn't take a significant amount of time during this benchmark.
-
## Install
+
Building ffmpeg from source is required to include libx265 and libaom/libsvtav1 (av1) video codecs ([compilation guide](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu)).
**Note:** While you still need to build torchvision with a conda-installed `ffmpeg<4.3` to use the `video_reader` decoder (as described in [#220](https://github.com/huggingface/lerobot/pull/220)), you also need another version which is custom-built with all the video codecs for encoding. For the script to then use that version, you can prepend the command above with `PATH="$HOME/bin:$PATH"`, which is where ffmpeg should be built.
-
## Adding a video decoder
+
Right now, we're only benchmarking the two video decoders available with torchvision: `pyav` and `video_reader`.
You can easily add a new decoder to the benchmark by adding it to this function in the script:
+
```diff
def decode_video_frames(
video_path: str,
@@ -156,9 +170,10 @@ def decode_video_frames(
raise NotImplementedError(backend)
```
-
## Example
+
For a quick run, you can try these parameters:
+
```bash
python benchmark/video/run_video_benchmark.py \
--output-dir outputs/video_benchmark \
@@ -176,11 +191,12 @@ python benchmark/video/run_video_benchmark.py \
--save-frames 0
```
-
## Results
### Reproduce
+
We ran the benchmark with the following parameters:
+
```bash
# h264 and h265 encodings
python benchmark/video/run_video_benchmark.py \
@@ -221,9 +237,10 @@ python benchmark/video/run_video_benchmark.py \
The full results are available [here](https://docs.google.com/spreadsheets/d/1OYJB43Qu8fC26k_OyoMFgGBBKfQRCi4BIuYitQnq3sw/edit?usp=sharing)
-
### Parameters selected for LeRobotDataset
+
Considering these results, we chose what we think is the best set of encoding parameters:
+
- vcodec: `libsvtav1`
- pix-fmt: `yuv420p`
- g: `2`
@@ -236,7 +253,7 @@ Since we're using av1 encoding, we're choosing the `pyav` decoder as `video_read
These tables show the results for `g=2` and `crf=30`, using `timestamps-modes=6_frames` and `backend=pyav`
| video_images_size_ratio | vcodec | pix_fmt | | | |
-|------------------------------------|------------|---------|-----------|-----------|-----------|
+| ---------------------------------- | ---------- | ------- | --------- | --------- | --------- |
| | libx264 | | libx265 | | libsvtav1 |
| repo_id | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
| lerobot/pusht_image | **16.97%** | 17.58% | 18.57% | 18.86% | 22.06% |
@@ -245,7 +262,7 @@ These tables show the results for `g=2` and `crf=30`, using `timestamps-modes=6_
| aliberts/kitchen | 1.40% | 1.39% | **1.00%** | **1.00%** | 2.52% |
| video_images_load_time_ratio | vcodec | pix_fmt | | | |
-|------------------------------------|---------|---------|----------|---------|-----------|
+| ---------------------------------- | ------- | ------- | -------- | ------- | --------- |
| | libx264 | | libx265 | | libsvtav1 |
| repo_id | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
| lerobot/pusht_image | 6.45 | 5.19 | **1.90** | 2.12 | 2.47 |
@@ -254,7 +271,7 @@ These tables show the results for `g=2` and `crf=30`, using `timestamps-modes=6_
| aliberts/kitchen | 1.46 | 1.46 | 0.28 | 0.51 | **0.26** |
| | | vcodec | pix_fmt | | | |
-|------------------------------------|----------|----------|--------------|----------|-----------|--------------|
+| ---------------------------------- | -------- | -------- | ------------ | -------- | --------- | ------------ |
| | | libx264 | | libx265 | | libsvtav1 |
| repo_id | metric | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
| lerobot/pusht_image | avg_mse | 2.90E-04 | **2.03E-04** | 3.13E-04 | 2.29E-04 | 2.19E-04 |
diff --git a/lerobot/common/utils/benchmark.py b/benchmarks/video/benchmark.py
similarity index 99%
rename from lerobot/common/utils/benchmark.py
rename to benchmarks/video/benchmark.py
index 4b08e6f6..d9e5e62b 100644
--- a/lerobot/common/utils/benchmark.py
+++ b/benchmarks/video/benchmark.py
@@ -46,11 +46,13 @@ class TimeBenchmark(ContextDecorator):
benchmark = TimeBenchmark()
+
def context_manager_example():
with benchmark:
time.sleep(0.01)
print(f"Block took {benchmark.result_ms:.2f} milliseconds")
+
threads = []
for _ in range(3):
t1 = threading.Thread(target=context_manager_example)
diff --git a/benchmarks/video/capture_camera_feed.py b/benchmarks/video/capture_camera_feed.py
old mode 100644
new mode 100755
index ce248f20..8f853053
--- a/benchmarks/video/capture_camera_feed.py
+++ b/benchmarks/video/capture_camera_feed.py
@@ -55,7 +55,7 @@ def display_and_save_video_stream(output_dir: Path, fps: int, width: int, height
if not ret:
print("Error: Could not read frame.")
break
- rr.log("video/stream", rr.Image(frame.numpy()), static=True)
+ rr.log("video/stream", rr.Image(frame), static=True)
cv2.imwrite(str(capture_dir / f"frame_{frame_index:06d}.png"), frame)
frame_index += 1
diff --git a/benchmarks/video/run_video_benchmark.py b/benchmarks/video/run_video_benchmark.py
index 9d587ee9..9f34b227 100644
--- a/benchmarks/video/run_video_benchmark.py
+++ b/benchmarks/video/run_video_benchmark.py
@@ -35,12 +35,13 @@ import torch
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity
from tqdm import tqdm
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.common.datasets.video_utils import (
+from benchmarks.video.benchmark import TimeBenchmark
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.video_utils import (
decode_video_frames_torchvision,
encode_video_frames,
)
-from lerobot.common.utils.benchmark import TimeBenchmark
+from lerobot.utils.constants import OBS_IMAGE
BASE_ENCODING = OrderedDict(
[
@@ -108,7 +109,8 @@ def save_decoded_frames(
def save_first_episode(imgs_dir: Path, dataset: LeRobotDataset) -> None:
- ep_num_images = dataset.episode_data_index["to"][0].item()
+ episode_index = 0
+ ep_num_images = dataset.meta.episodes["length"][episode_index]
if imgs_dir.exists() and len(list(imgs_dir.glob("frame_*.png"))) == ep_num_images:
return
@@ -116,7 +118,7 @@ def save_first_episode(imgs_dir: Path, dataset: LeRobotDataset) -> None:
hf_dataset = dataset.hf_dataset.with_format(None)
# We only save images from the first camera
- img_keys = [key for key in hf_dataset.features if key.startswith("observation.image")]
+ img_keys = [key for key in hf_dataset.features if key.startswith(OBS_IMAGE)]
imgs_dataset = hf_dataset.select_columns(img_keys[0])
for i, item in enumerate(
@@ -265,7 +267,8 @@ def benchmark_encoding_decoding(
overwrite=True,
)
- ep_num_images = dataset.episode_data_index["to"][0].item()
+ episode_index = 0
+ ep_num_images = dataset.meta.episodes["length"][episode_index]
width, height = tuple(dataset[0][dataset.meta.camera_keys[0]].shape[-2:])
num_pixels = width * height
video_size_bytes = video_path.stat().st_size
diff --git a/docker/Dockerfile.internal b/docker/Dockerfile.internal
new file mode 100644
index 00000000..2616cd06
--- /dev/null
+++ b/docker/Dockerfile.internal
@@ -0,0 +1,93 @@
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This Dockerfile is designed for HuggingFace internal CI environments
+# that require GPU access. It starts from an NVIDIA CUDA base image.
+
+# docker build -f docker/Dockerfile.internal -t lerobot-internal .
+
+# Configure the base image for CI with GPU access
+# TODO(Steven): Bump these versions
+ARG CUDA_VERSION=12.4.1
+ARG OS_VERSION=22.04
+FROM nvidia/cuda:${CUDA_VERSION}-base-ubuntu${OS_VERSION}
+
+# Define Python version argument
+ARG PYTHON_VERSION=3.10
+
+# Configure environment variables
+ENV DEBIAN_FRONTEND=noninteractive \
+ MUJOCO_GL=egl \
+ PATH=/lerobot/.venv/bin:$PATH \
+ CUDA_VISIBLE_DEVICES=0 \
+ TEST_TYPE=single_gpu \
+ DEVICE=cuda
+
+# Install Python, system dependencies, and uv (as root)
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ software-properties-common build-essential git curl \
+ libglib2.0-0 libgl1-mesa-glx libegl1-mesa ffmpeg \
+ libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev \
+ cmake pkg-config ninja-build \
+ && add-apt-repository -y ppa:deadsnakes/ppa \
+ && apt-get update \
+ && apt-get install -y --no-install-recommends \
+ python${PYTHON_VERSION} \
+ python${PYTHON_VERSION}-venv \
+ python${PYTHON_VERSION}-dev \
+ && curl -LsSf https://astral.sh/uv/install.sh | sh \
+ && mv /root/.local/bin/uv /usr/local/bin/uv \
+ && useradd --create-home --shell /bin/bash user_lerobot \
+ && usermod -aG sudo user_lerobot \
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
+
+# Create application directory and set permissions
+WORKDIR /lerobot
+RUN chown -R user_lerobot:user_lerobot /lerobot
+
+# Switch to the non-root user
+USER user_lerobot
+
+# Environment variables for the testing
+ENV HOME=/home/user_lerobot \
+ HF_HOME=/home/user_lerobot/.cache/huggingface \
+ HF_LEROBOT_HOME=/home/user_lerobot/.cache/huggingface/lerobot \
+ TORCH_HOME=/home/user_lerobot/.cache/torch \
+ TRITON_CACHE_DIR=/home/user_lerobot/.cache/triton
+
+# Create the virtual environment
+# We use a virtual environment inside the container—even though the container itself \
+# provides isolation—to ensure compatibility with the cluster and to prevent \
+# issues with MuJoCo and OpenGL drivers.
+RUN uv venv --python python${PYTHON_VERSION}
+
+# Install Python dependencies for caching
+COPY --chown=user_lerobot:user_lerobot pyproject.toml README.md MANIFEST.in ./
+COPY --chown=user_lerobot:user_lerobot src/ src/
+
+ARG UNBOUND_DEPS=false
+
+RUN if [ "$UNBOUND_DEPS" = "true" ]; then \
+ sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml; \
+ echo "Dependencies unbound:" && cat pyproject.toml; \
+ fi
+
+RUN uv pip install --no-cache ".[all]"
+
+# Copy the rest of the application source code
+# Make sure to have the git-LFS files for testing
+COPY --chown=user_lerobot:user_lerobot . .
+
+# Set the default command
+CMD ["/bin/bash"]
diff --git a/docker/Dockerfile.user b/docker/Dockerfile.user
new file mode 100644
index 00000000..c1b28445
--- /dev/null
+++ b/docker/Dockerfile.user
@@ -0,0 +1,79 @@
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This Dockerfile is designed for a lerobot user who wants to
+# experiment with the project. It starts from a Python slim base image.
+
+# docker build -f docker/Dockerfile.user -t lerobot-user .
+# docker run -it --rm lerobot-user
+
+# Configure the base image
+ARG PYTHON_VERSION=3.10
+FROM python:${PYTHON_VERSION}-slim
+
+# Configure environment variables
+ENV DEBIAN_FRONTEND=noninteractive \
+ MUJOCO_GL=egl \
+ PATH=/lerobot/.venv/bin:$PATH
+
+# Install system dependencies and uv (as root)
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ build-essential git curl libglib2.0-0 libegl1-mesa-dev ffmpeg \
+ libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev \
+ cmake pkg-config ninja-build \
+ && curl -LsSf https://astral.sh/uv/install.sh | sh \
+ && mv /root/.local/bin/uv /usr/local/bin/uv \
+ && useradd --create-home --shell /bin/bash user_lerobot \
+ && usermod -aG sudo user_lerobot \
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
+
+# Create application directory and set permissions
+WORKDIR /lerobot
+RUN chown -R user_lerobot:user_lerobot /lerobot
+
+# Switch to the non-root user
+USER user_lerobot
+
+# Environment variables for the testing
+ENV HOME=/home/user_lerobot \
+ HF_HOME=/home/user_lerobot/.cache/huggingface \
+ HF_LEROBOT_HOME=/home/user_lerobot/.cache/huggingface/lerobot \
+ TORCH_HOME=/home/user_lerobot/.cache/torch \
+ TRITON_CACHE_DIR=/home/user_lerobot/.cache/triton
+
+# Create the virtual environment
+# We use a virtual environment inside the container—even though the container itself \
+# provides isolation—to closely resemble local development and allow users to \
+# run other Python projects in the same container without dependency conflicts.
+RUN uv venv
+
+# Install Python dependencies for caching
+COPY --chown=user_lerobot:user_lerobot pyproject.toml README.md MANIFEST.in ./
+COPY --chown=user_lerobot:user_lerobot src/ src/
+
+ARG UNBOUND_DEPS=false
+
+RUN if [ "$UNBOUND_DEPS" = "true" ]; then \
+ sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml; \
+ echo "Dependencies unbound:" && cat pyproject.toml; \
+ fi
+
+RUN uv pip install --no-cache ".[all]"
+
+# Copy the rest of the application code
+# Make sure to have the git-LFS files for testing
+COPY --chown=user_lerobot:user_lerobot . .
+
+# Set the default command
+CMD ["/bin/bash"]
diff --git a/docker/lerobot-cpu/Dockerfile b/docker/lerobot-cpu/Dockerfile
deleted file mode 100644
index 13a45d24..00000000
--- a/docker/lerobot-cpu/Dockerfile
+++ /dev/null
@@ -1,29 +0,0 @@
-# Configure image
-ARG PYTHON_VERSION=3.10
-FROM python:${PYTHON_VERSION}-slim
-
-# Configure environment variables
-ARG PYTHON_VERSION
-ENV DEBIAN_FRONTEND=noninteractive
-ENV MUJOCO_GL="egl"
-ENV PATH="/opt/venv/bin:$PATH"
-
-# Install dependencies and set up Python in a single layer
-RUN apt-get update && apt-get install -y --no-install-recommends \
- build-essential cmake git \
- libglib2.0-0 libgl1-mesa-glx libegl1-mesa ffmpeg \
- speech-dispatcher libgeos-dev \
- && ln -s /usr/bin/python${PYTHON_VERSION} /usr/bin/python \
- && python -m venv /opt/venv \
- && apt-get clean && rm -rf /var/lib/apt/lists/* \
- && echo "source /opt/venv/bin/activate" >> /root/.bashrc
-
-# Clone repository and install LeRobot in a single layer
-COPY . /lerobot
-WORKDIR /lerobot
-RUN /opt/venv/bin/pip install --upgrade --no-cache-dir pip \
- && /opt/venv/bin/pip install --no-cache-dir ".[test, aloha, xarm, pusht, dynamixel]" \
- --extra-index-url https://download.pytorch.org/whl/cpu
-
-# Execute in bash shell rather than python
-CMD ["/bin/bash"]
diff --git a/docker/lerobot-gpu-dev/Dockerfile b/docker/lerobot-gpu-dev/Dockerfile
deleted file mode 100644
index 4d25b255..00000000
--- a/docker/lerobot-gpu-dev/Dockerfile
+++ /dev/null
@@ -1,68 +0,0 @@
-FROM nvidia/cuda:12.2.2-devel-ubuntu22.04
-
-# Configure image
-ARG PYTHON_VERSION=3.10
-ARG DEBIAN_FRONTEND=noninteractive
-
-# Install apt dependencies
-RUN apt-get update && apt-get install -y --no-install-recommends \
- build-essential cmake \
- git git-lfs openssh-client \
- nano vim less util-linux tree \
- htop atop nvtop \
- sed gawk grep curl wget zip unzip \
- tcpdump sysstat screen tmux \
- libglib2.0-0 libgl1-mesa-glx libegl1-mesa \
- speech-dispatcher portaudio19-dev libgeos-dev \
- python${PYTHON_VERSION} python${PYTHON_VERSION}-venv python${PYTHON_VERSION}-dev \
- && apt-get clean && rm -rf /var/lib/apt/lists/*
-
-# Install ffmpeg build dependencies. See:
-# https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
-# TODO(aliberts): create image to build dependencies from source instead
-RUN apt-get update && apt-get install -y --no-install-recommends \
- autoconf automake yasm \
- libass-dev \
- libfreetype6-dev \
- libgnutls28-dev \
- libunistring-dev \
- libmp3lame-dev \
- libtool \
- libvorbis-dev \
- meson \
- ninja-build \
- pkg-config \
- texinfo \
- yasm \
- zlib1g-dev \
- nasm \
- libx264-dev \
- libx265-dev libnuma-dev \
- libvpx-dev \
- libfdk-aac-dev \
- libopus-dev \
- libsvtav1-dev libsvtav1enc-dev libsvtav1dec-dev \
- libdav1d-dev
-
-# Install gh cli tool
-RUN (type -p wget >/dev/null || (apt update && apt-get install wget -y)) \
- && mkdir -p -m 755 /etc/apt/keyrings \
- && wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
- && chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
- && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
- && apt update \
- && apt install gh -y \
- && apt clean && rm -rf /var/lib/apt/lists/*
-
-# Setup `python`
-RUN ln -s /usr/bin/python3 /usr/bin/python
-
-# Install poetry
-RUN curl -sSL https://install.python-poetry.org | python -
-ENV PATH="/root/.local/bin:$PATH"
-RUN echo 'if [ "$HOME" != "/root" ]; then ln -sf /root/.local/bin/poetry $HOME/.local/bin/poetry; fi' >> /root/.bashrc
-RUN poetry config virtualenvs.create false
-RUN poetry config virtualenvs.in-project true
-
-# Set EGL as the rendering backend for MuJoCo
-ENV MUJOCO_GL="egl"
diff --git a/docker/lerobot-gpu/Dockerfile b/docker/lerobot-gpu/Dockerfile
deleted file mode 100644
index 642a8ded..00000000
--- a/docker/lerobot-gpu/Dockerfile
+++ /dev/null
@@ -1,24 +0,0 @@
-FROM nvidia/cuda:12.4.1-base-ubuntu22.04
-
-# Configure environment variables
-ARG PYTHON_VERSION=3.10
-ENV DEBIAN_FRONTEND=noninteractive
-ENV MUJOCO_GL="egl"
-ENV PATH="/opt/venv/bin:$PATH"
-
-# Install dependencies and set up Python in a single layer
-RUN apt-get update && apt-get install -y --no-install-recommends \
- build-essential cmake git \
- libglib2.0-0 libgl1-mesa-glx libegl1-mesa ffmpeg \
- speech-dispatcher libgeos-dev \
- python${PYTHON_VERSION}-dev python${PYTHON_VERSION}-venv \
- && ln -s /usr/bin/python${PYTHON_VERSION} /usr/bin/python \
- && python -m venv /opt/venv \
- && apt-get clean && rm -rf /var/lib/apt/lists/* \
- && echo "source /opt/venv/bin/activate" >> /root/.bashrc
-
-# Clone repository and install LeRobot in a single layer
-COPY . /lerobot
-WORKDIR /lerobot
-RUN /opt/venv/bin/pip install --upgrade --no-cache-dir pip \
- && /opt/venv/bin/pip install --no-cache-dir ".[test, aloha, xarm, pusht, dynamixel]"
diff --git a/docs-requirements.txt b/docs-requirements.txt
new file mode 100644
index 00000000..e286ad2b
--- /dev/null
+++ b/docs-requirements.txt
@@ -0,0 +1,3 @@
+# docs-requirements.txt
+hf-doc-builder @ git+https://github.com/huggingface/doc-builder.git@main
+watchdog>=6.0.0
diff --git a/docs/README.md b/docs/README.md
index 275fee46..476eb8dc 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -20,12 +20,13 @@ To generate the documentation, you first have to build it. Several packages are
you can install them with the following command, at the root of the code repository:
```bash
-pip install -e ".[docs]"
+pip install -e . -r docs-requirements.txt
```
You will also need `nodejs`. Please refer to their [installation page](https://nodejs.org/en/download)
---
+
**NOTE**
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
@@ -63,6 +64,7 @@ doc-builder preview lerobot docs/source/
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.
---
+
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` & restart the `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
@@ -89,6 +91,7 @@ Sections that were moved:
[ Section A ]
```
+
and of course, if you moved it to another file, then:
```
@@ -119,7 +122,6 @@ and objects like True, None or any strings should usually be put in `code`.
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
-
````
```
# first line of code
diff --git a/docs/source/_toctree.yml b/docs/source/_toctree.yml
index a0f69d0a..0cf8aa9a 100644
--- a/docs/source/_toctree.yml
+++ b/docs/source/_toctree.yml
@@ -5,8 +5,88 @@
title: Installation
title: Get started
- sections:
- - local: assemble_so101
- title: Assemble SO-101
- - local: getting_started_real_world_robot
- title: Getting Started with Real-World Robots
+ - local: il_robots
+ title: Imitation Learning for Robots
+ - local: cameras
+ title: Cameras
+ - local: integrate_hardware
+ title: Bring Your Own Hardware
+ - local: hilserl
+ title: Train a Robot with RL
+ - local: hilserl_sim
+ title: Train RL in Simulation
+ - local: async
+ title: Use Async Inference
+ - local: multi_gpu_training
+ title: Multi GPU training
title: "Tutorials"
+- sections:
+ - local: lerobot-dataset-v3
+ title: Using LeRobotDataset
+ - local: porting_datasets_v3
+ title: Porting Large Datasets
+ - local: using_dataset_tools
+ title: Using the Dataset Tools
+ title: "Datasets"
+- sections:
+ - local: act
+ title: ACT
+ - local: smolvla
+ title: SmolVLA
+ - local: pi0
+ title: π₀ (Pi0)
+ - local: pi05
+ title: π₀.₅ (Pi05)
+ - local: groot
+ title: NVIDIA GR00T N1.5
+ title: "Policies"
+- sections:
+ - local: envhub
+ title: Environments from the Hub
+ - local: il_sim
+ title: Imitation Learning in Sim
+ - local: libero
+ title: Using Libero
+ - local: metaworld
+ title: Using MetaWorld
+ title: "Simulation"
+- sections:
+ - local: introduction_processors
+ title: Introduction to Robot Processors
+ - local: debug_processor_pipeline
+ title: Debug your processor pipeline
+ - local: implement_your_own_processor
+ title: Implement your own processor
+ - local: processors_robots_teleop
+ title: Processors for Robots and Teleoperators
+ title: "Robot Processors"
+- sections:
+ - local: so101
+ title: SO-101
+ - local: so100
+ title: SO-100
+ - local: koch
+ title: Koch v1.1
+ - local: lekiwi
+ title: LeKiwi
+ - local: hope_jr
+ title: Hope Jr
+ - local: reachy2
+ title: Reachy 2
+ title: "Robots"
+- sections:
+ - local: phone_teleop
+ title: Phone
+ title: "Teleoperators"
+- sections:
+ - local: notebooks
+ title: Notebooks
+ - local: feetech
+ title: Updating Feetech Firmware
+ title: "Resources"
+- sections:
+ - local: contributing
+ title: Contribute to LeRobot
+ - local: backwardcomp
+ title: Backward compatibility
+ title: "About"
diff --git a/docs/source/act.mdx b/docs/source/act.mdx
new file mode 100644
index 00000000..e3294ca6
--- /dev/null
+++ b/docs/source/act.mdx
@@ -0,0 +1,92 @@
+# ACT (Action Chunking with Transformers)
+
+ACT is a **lightweight and efficient policy for imitation learning**, especially well-suited for fine-grained manipulation tasks. It's the **first model we recommend when you're starting out** with LeRobot due to its fast training time, low computational requirements, and strong performance.
+
+
+
+
+
+_Watch this tutorial from the LeRobot team to learn how ACT works: [LeRobot ACT Tutorial](https://www.youtube.com/watch?v=ft73x0LfGpM)_
+
+## Model Overview
+
+Action Chunking with Transformers (ACT) was introduced in the paper [Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware](https://arxiv.org/abs/2304.13705) by Zhao et al. The policy was designed to enable precise, contact-rich manipulation tasks using affordable hardware and minimal demonstration data.
+
+### Why ACT is Great for Beginners
+
+ACT stands out as an excellent starting point for several reasons:
+
+- **Fast Training**: Trains in a few hours on a single GPU
+- **Lightweight**: Only ~80M parameters, making it efficient and easy to work with
+- **Data Efficient**: Often achieves high success rates with just 50 demonstrations
+
+### Architecture
+
+ACT uses a transformer-based architecture with three main components:
+
+1. **Vision Backbone**: ResNet-18 processes images from multiple camera viewpoints
+2. **Transformer Encoder**: Synthesizes information from camera features, joint positions, and a learned latent variable
+3. **Transformer Decoder**: Generates coherent action sequences using cross-attention
+
+The policy takes as input:
+
+- Multiple RGB images (e.g., from wrist cameras, front/top cameras)
+- Current robot joint positions
+- A latent style variable `z` (learned during training, set to zero during inference)
+
+And outputs a chunk of `k` future action sequences.
+
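+To make this input/output contract concrete, here is a minimal, hedged sketch of a single inference step (the policy path, observation keys, and tensor shapes are illustrative and must match the features your policy was trained on):
+
+```python
+import torch
+from lerobot.policies.act.modeling_act import ACTPolicy  # assumed import path
+
+policy = ACTPolicy.from_pretrained("${HF_USER}/act_policy")  # the repo id you pushed during training
+policy.eval()
+
+# Illustrative observation batch; key names and shapes depend on your dataset's features.
+observation = {
+    "observation.images.front": torch.zeros(1, 3, 480, 640),  # dummy RGB frame
+    "observation.state": torch.zeros(1, 6),                   # dummy joint positions
+}
+with torch.no_grad():
+    action = policy.select_action(observation)  # next action from the predicted chunk
+```
+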
+## Installation Requirements
+
+1. Install LeRobot by following our [Installation Guide](./installation).
+2. ACT is included in the base LeRobot installation, so no additional dependencies are needed!
+
+## Training ACT
+
+ACT works seamlessly with the standard LeRobot training pipeline. Here's a complete example for training ACT on your dataset:
+
+```bash
+lerobot-train \
+ --dataset.repo_id=${HF_USER}/your_dataset \
+ --policy.type=act \
+ --output_dir=outputs/train/act_your_dataset \
+ --job_name=act_your_dataset \
+ --policy.device=cuda \
+ --wandb.enable=true \
+ --policy.repo_id=${HF_USER}/act_policy
+```
+
+### Training Tips
+
+1. **Start with defaults**: ACT's default hyperparameters work well for most tasks
+2. **Training duration**: Expect a few hours for 100k training steps on a single GPU
+3. **Batch size**: Start with batch size 8 and adjust based on your GPU memory
+
+### Train using Google Colab
+
+If your local computer doesn't have a powerful GPU, you can utilize Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
+
+## Evaluating ACT
+
+Once training is complete, you can evaluate your ACT policy using the `lerobot-record` command with your trained policy. This will run inference and record evaluation episodes:
+
+```bash
+lerobot-record \
+ --robot.type=so100_follower \
+ --robot.port=/dev/ttyACM0 \
+ --robot.id=my_robot \
+ --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
+ --display_data=true \
+ --dataset.repo_id=${HF_USER}/eval_act_your_dataset \
+ --dataset.num_episodes=10 \
+ --dataset.single_task="Your task description" \
+ --policy.path=${HF_USER}/act_policy
+```
diff --git a/docs/source/assemble_so101.mdx b/docs/source/assemble_so101.mdx
deleted file mode 100644
index de280a39..00000000
--- a/docs/source/assemble_so101.mdx
+++ /dev/null
@@ -1,348 +0,0 @@
-# Assemble SO-101
-
-In the steps below we explain how to assemble our flagship robot, the SO-101.
-
-## Source the parts
-
-Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts,
-and advice if it's your first time printing or if you don't own a 3D printer.
-
-Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
-
-## Install LeRobot
-
-To install LeRobot follow our [Installation Guide](./installation)
-
-## Configure motors
-
-To configure the motors, designate one bus servo adapter and 6 motors for your leader arm, and the other bus servo adapter and 6 motors for the follower arm. It's convenient to label them: write on each motor whether it belongs to the follower (`F`) or the leader (`L`) and its ID from 1 to 6.
-
-You should now plug the 5V or 12V power supply into the motor bus: 5V for the STS3215 7.4V motors and 12V for the STS3215 12V motors. Note that the leader arm always uses the 7.4V motors, so make sure you plug in the right power supply if you have both 12V and 7.4V motors, otherwise you might burn your motors! Now connect the motor bus to your computer via USB. Note that USB doesn't provide any power; both the power supply and the USB cable have to be plugged in.
-
-### Find the USB ports associated to each arm
-
-To find the port for each bus servo adapter, run this script:
-```bash
-python lerobot/scripts/find_motors_bus_port.py
-```
-##### Example outputs of script
-
-
-
-
-Example output leader arm's port: `/dev/tty.usbmodem575E0031751`
-
-```bash
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect leader arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0031751
-Reconnect the usb cable.
-```
-
-Example output follower arm port: `/dev/tty.usbmodem575E0032081`
-
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the usb cable.
-```
-
-
-
-
-On Linux, you might need to give access to the USB ports by running:
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-Example output leader arm port: `/dev/ttyACM0`
-
-```bash
-Finding all available ports for the MotorBus.
-['/dev/ttyACM0', '/dev/ttyACM1']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect leader arm and press Enter...]
-
-The port of this MotorsBus is /dev/ttyACM0
-Reconnect the usb cable.
-```
-
-Example output follower arm port: `/dev/ttyACM1`
-
-```
-Finding all available ports for the MotorBus.
-['/dev/ttyACM0', '/dev/ttyACM1']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/ttyACM1
-Reconnect the usb cable.
-```
-
-
-
-#### Update config file
-
-Now that you have your ports, update the **port** default values of [`SO101RobotConfig`](https://github.com/huggingface/lerobot/blob/main/lerobot/common/robot_devices/robots/configs.py).
-You will find a class called `so101` where you can update the `port` values with your actual motor ports:
-```diff
-@RobotConfig.register_subclass("so101")
-@dataclass
-class So101RobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/so101"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
-- port="/dev/tty.usbmodem58760431091",
-+ port="{ADD YOUR LEADER PORT}",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
-- port="/dev/tty.usbmodem585A0076891",
-+ port="{ADD YOUR FOLLOWER PORT}",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-```
-
-Here is a video of the process:
-
-
-
-
-## Step-by-Step Assembly Instructions
-
-The follower arm uses 6x STS3215 motors with 1/345 gearing. The leader arm, however, uses motors with three different gear ratios so that it can sustain its own weight while still being easy to move by hand. Which motor is needed for which joint is shown in the table below.
-
-| Leader-Arm Axis | Motor | Gear Ratio |
-|-----------------|:-------:|:----------:|
-| Base / Shoulder Yaw | 1 | 1 / 191 |
-| Shoulder Pitch | 2 | 1 / 345 |
-| Elbow | 3 | 1 / 191 |
-| Wrist Roll | 4 | 1 / 147 |
-| Wrist Pitch | 5 | 1 / 147 |
-| Gripper | 6 | 1 / 147 |
-
-### Set motor IDs
-
-Plug your motor in one of the two ports of the motor bus and run this script to set its ID to 1. Replace the text after --port to the corresponding control board port.
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 1
-```
-
-Then unplug your motor and plug the second motor and set its ID to 2.
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 2
-```
-
-Redo this process for all your motors until ID 6. Do the same for the 6 motors of the leader arm, but make sure to change the power supply if you use motors with different voltage and make sure you give the right ID to the right motor according to the table above.
-
-Here is a video of the process:
-
-
-
-
-### Clean Parts
-Remove all support material from the 3D-printed parts, the easiest way to do this is using a small screwdriver to get underneath the support material.
-
-### Joint 1
-
-- Place the first motor into the base.
-- Fasten the motor with 4 M2x6mm screws (smallest screws). Two from the top and two from bottom.
-- Slide over the first motor holder and fasten it using two M2x6mm screws (one on each side).
-- Install both motor horns, securing the top horn with a M3x6mm screw.
-- Attach the shoulder part.
-- Tighten the shoulder part with 4 M3x6mm screws on top and 4 M3x6mm screws on the bottom
-- Add the shoulder motor holder.
-
-
-
-
-
-### Joint 2
-
-- Slide the second motor in from the top.
-- Fasten the second motor with 4 M2x6mm screws.
-- Attach both motor horns to motor 2, again use the M3x6mm horn screw.
-- Attach the upper arm with 4 M3x6mm screws on each side.
-
-
-
-
-
-### Joint 3
-
-- Insert motor 3 and fasten using 4 M2x6mm screws
-- Attach both motor horns to motor 3 and secure one again with a M3x6mm horn screw.
-- Connect the forearm to motor 3 using 4 M3x6mm screws on each side.
-
-
-
-
-
-### Joint 4
-
-- Slide over motor holder 4.
-- Slide in motor 4.
-- Fasten motor 4 with 4 M2x6mm screws and attach its motor horns, use a M3x6mm horn screw.
-
-
-
-
-
-### Joint 5
-
-- Insert motor 5 into the wrist holder and secure it with 2 M2x6mm front screws.
-- Install only one motor horn on the wrist motor and secure it with a M3x6mm horn screw.
-- Secure the wrist to motor 4 using 4 M3x6mm screws on both sides.
-
-
-
-
-
-### Gripper / Handle
-
-
-
-
-- Attach the gripper to motor 5, attach it to the motor horn on the wrist using 4 M3x6mm screws.
-- Insert the gripper motor and secure it with 2 M2x6mm screws on each side.
-- Attach the motor horns and again use a M3x6mm horn screw.
-- Install the gripper claw and secure it with 4 M3x6mm screws on both sides.
-
-
-
-
-
-
-
-
-- Mount the leader holder onto the wrist and secure it with 4 M3x6mm screws.
-- Attach the handle to motor 5 using 1 M2x6mm screw.
-- Insert the gripper motor, secure it with 2 M2x6mm screws on each side, attach a motor horn using a M3x6mm horn screw.
-- Attach the follower trigger with 4 M3x6mm screws.
-
-
-
-
-
-
-
-
-##### Wiring
-
-- Attach the motor controller on the back.
-- Then insert all wires, use the wire guides everywhere to make sure the wires don't unplug themselves and stay in place.
-
-
-
-
-
-## Calibrate
-
-Next, you'll need to calibrate your SO-101 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
-The calibration process is very important because it allows a neural network trained on one SO-101 robot to work on another.
-
-#### Manual calibration of follower arm
-
-You will need to move the follower arm to these positions sequentially, note that the rotated position is on the right side of the robot and you have to open the gripper fully.
-
-| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
-| ------------ |------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| | | | |
-
-Make sure both arms are connected and run this script to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_follower"]'
-```
-
-#### Manual calibration of leader arm
-You will also need to move the leader arm to these positions sequentially:
-
-| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
-| ------------ |------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| | | | |
-
-Run this script to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_leader"]'
-```
-
-Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./getting_started_real_world_robot)
diff --git a/docs/source/async.mdx b/docs/source/async.mdx
new file mode 100644
index 00000000..be10f8ba
--- /dev/null
+++ b/docs/source/async.mdx
@@ -0,0 +1,312 @@
+# Asynchronous Inference
+
+With our [SmolVLA](https://huggingface.co/papers/2506.01844) we introduced a new way to run inference on real-world robots, **decoupling action prediction from action execution**.
+In this tutorial, we'll show how to run asynchronous inference (_async inference_) with a finetuned version of SmolVLA.
+**Try async inference with any of the policies supported by LeRobot!**
+
+**What you'll learn:**
+
+1. Why asynchronous inference matters and how it compares to more traditional, sequential inference.
+2. How to spin up a `PolicyServer` and connect a `RobotClient` from the same machine, or even over the network.
+3. How to tune key parameters (`actions_per_chunk`, `chunk_size_threshold`) for your robot and policy.
+
+If you get stuck, hop into our [Discord community](https://discord.gg/s3KuuzsPFb)!
+
+In a nutshell: with _async inference_, your robot keeps acting while the policy server is already busy computing the next chunk of actions---eliminating "wait-for-inference" lags and unlocking smoother, more reactive behaviours.
+This is fundamentally different from synchronous inference (sync), where the robot stays idle while the policy computes the next chunk of actions.
+
+---
+
+## Getting started with async inference
+
+You can read more information on asynchronous inference in our [blogpost](https://huggingface.co/blog/async-robot-inference). This guide is designed to help you quickly set up and run asynchronous inference in your environment.
+
+First, install `lerobot` with the `async` extra to get the additional dependencies required to run async inference.
+
+```shell
+pip install -e ".[async]"
+```
+
+Then, spin up a policy server (in one terminal, or on a separate machine), specifying the host address and port for the client to connect to:
+
+```shell
+python -m lerobot.async_inference.policy_server \
+ --host=127.0.0.1 \
+ --port=8080
+```
+
+This will start a policy server listening on `127.0.0.1:8080` (`localhost`, port 8080). At this stage, the policy server is empty: all the information about which policy to run and with which parameters is specified during the first handshake with the client. Spin up a client with:
+
+```shell
+python -m lerobot.async_inference.robot_client \
+ --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
+ --robot.type=so100_follower \ # ROBOT: your robot type
+ --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
+ --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file
+ --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
+ --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
+ --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc)
+ --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base)
+ --policy_device=mps \ # POLICY: the device to run the policy on, on the server
+ --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
+ --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server
+ --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
+ --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
+```
+
+In summary, you need to specify instructions for:
+
+- `SERVER`: the address and port of the policy server
+- `ROBOT`: the type of robot to connect to, the port to connect to, and the local `id` of the robot
+- `POLICY`: the type of policy to run, and the name/path of the checkpoint to run on the server. You also need to specify which device the server should use, and how many actions to output at once (capped at the policy's maximum number of actions).
+- `CLIENT`: the threshold for the chunk size before sending a new observation to the server, and the function to aggregate actions on overlapping portions. Optionally, you can also visualize the queue size at runtime, to help you tune the `CLIENT` parameters.
+
+Importantly,
+
+- `actions_per_chunk` and `chunk_size_threshold` are key parameters to tune for your setup.
+- `aggregate_fn_name` is the function used to aggregate actions on overlapping portions of consecutive chunks. You can either add a new one to the registry of functions or add your own in `robot_client.py` (see [here](NOTE:addlinktoLOC)); an illustrative sketch follows this list.
+- `debug_visualize_queue_size` is a useful tool to tune the `CLIENT` parameters.
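+
+To make the aggregation step concrete, here is an illustrative weighted-average sketch over the overlapping timesteps of two chunks. This is not the exact implementation shipped in `robot_client.py`; the function name, signature, and the single `alpha` weight are assumptions for illustration only.
+
+```python
+import torch
+
+def weighted_average(old_chunk: torch.Tensor, new_chunk: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
+    """Blend the overlapping portion of two action chunks.
+
+    old_chunk: remaining actions of the chunk currently being executed, shape (n_overlap, action_dim)
+    new_chunk: freshly predicted actions covering the same timesteps, shape (n_overlap, action_dim)
+    alpha: weight given to the fresh prediction.
+    """
+    return (1 - alpha) * old_chunk + alpha * new_chunk
+
+# Toy usage: 10 overlapping timesteps of a 6-DoF action
+old = torch.zeros(10, 6)
+new = torch.ones(10, 6)
+blended = weighted_average(old, new, alpha=0.7)  # leans towards the fresh prediction
+```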
+
+## Done! You should see your robot moving around by now 😉
+
+## Async vs. synchronous inference
+
+Synchronous inference relies on interleaving action chunk prediction and action execution. This inherently results in _idle frames_: frames where the robot sits idle waiting for the policy's output, a new action chunk.
+In turn, inference suffers from visible real-time lags, where the robot simply stops acting because no actions are available.
+As robotics models grow in size, this problem only risks becoming more severe.
+
+
+
+
+
+ Synchronous inference makes the robot idle while the policy is
+ computing the next chunk of actions.
+
+
+To overcome this, we design async inference, a paradigm where action planning and execution are decoupled, resulting in (1) higher adaptability and, most importantly, (2) no idle frames.
+Crucially, with async inference, the next action chunk is computed _before_ the current one is exhausted, resulting in no idleness.
+Higher adaptability is ensured by aggregating the different action chunks on overlapping portions, obtaining an up-to-date plan and a tighter control loop.
+
+
+
+
+
+ Asynchronous inference results in no idleness because the next chunk is
+ computed before the current chunk is exhausted.
+
+
+---
+
+## Start the Policy Server
+
+Policy servers are wrappers around a `PreTrainedPolicy`, interfacing it with observations coming from a robot client.
+Policy servers are initialized as empty containers which are populated with the requested policy specified in the initial handshake between the robot client and the policy server.
+As such, spinning up a policy server is as easy as specifying the host address and port. If you're running the policy server on the same machine as the robot client, you can use `localhost` as the host address.
+
+
+
+```bash
+python -m lerobot.async_inference.policy_server \
+ --host=127.0.0.1 \
+ --port=8080
+```
+
+
+
+
+```python
+from lerobot.async_inference.configs import PolicyServerConfig
+from lerobot.async_inference.policy_server import serve
+
+config = PolicyServerConfig(
+ host="localhost",
+ port=8080,
+)
+serve(config)
+```
+
+
+
+
+
+This listens on `localhost:8080` for an incoming connection from the associated `RobotClient`, which will communicate which policy to run during the first client-server handshake.
+
+---
+
+## Launch the Robot Client
+
+`RobotClient` is a wrapper around a `Robot` instance that connects to the (possibly remote) `PolicyServer`.
+The `RobotClient` streams observations to the `PolicyServer` and receives action chunks obtained by running inference on the server (which we assume has better computational resources than the robot controller).
+
+
+
+```bash
+python -m lerobot.async_inference.robot_client \
+ --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
+ --robot.type=so100_follower \ # ROBOT: your robot type
+ --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
+ --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file
+ --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
+ --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
+ --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc)
+ --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base)
+ --policy_device=mps \ # POLICY: the device to run the policy on, on the server
+ --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
+ --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server
+ --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
+ --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
+```
+
+
+
+
+```python
+import threading
+from lerobot.robots.so100_follower import SO100FollowerConfig
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.async_inference.configs import RobotClientConfig
+from lerobot.async_inference.robot_client import RobotClient
+from lerobot.async_inference.helpers import visualize_action_queue_size
+
+# 1. Create the robot instance
+"""Check out the cameras available in your setup by running `python lerobot/find_cameras.py`"""
+# these cameras must match the ones expected by the policy
+# check the config.json on the Hub for the policy you are using
+camera_cfg = {
+ "top": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
+ "side": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30)
+}
+
+robot_cfg = SO100FollowerConfig(
+ port="/dev/tty.usbmodem585A0076841",
+ id="follower_so100",
+ cameras=camera_cfg
+)
+
+# 3. Create client configuration
+client_cfg = RobotClientConfig(
+ robot=robot_cfg,
+ server_address="localhost:8080",
+ policy_device="mps",
+ policy_type="smolvla",
+ pretrained_name_or_path="fracapuano/smolvla_async",
+ chunk_size_threshold=0.5,
+ actions_per_chunk=50, # make sure this is less than the max actions of the policy
+)
+
+# 4. Create and start client
+client = RobotClient(client_cfg)
+
+# 5. Specify the task
+task = "Don't do anything, stay still"
+
+if client.start():
+ # Start action receiver thread
+ action_receiver_thread = threading.Thread(target=client.receive_actions, daemon=True)
+ action_receiver_thread.start()
+
+ try:
+ # Run the control loop
+ client.control_loop(task)
+ except KeyboardInterrupt:
+ client.stop()
+ action_receiver_thread.join()
+ # (Optionally) plot the action queue size
+ visualize_action_queue_size(client.action_queue_size)
+```
+
+
+
+
+
+The following two parameters are key in every setup:
+
+| Hyperparameter         | Default | What it does                                                                                                    |
+| ---------------------- | ------- | --------------------------------------------------------------------------------------------------------------- |
+| `actions_per_chunk`    | 50      | How many actions the policy outputs at once. Typical values: 10-50.                                             |
+| `chunk_size_threshold` | 0.7     | When the queue holds ≤ this fraction of `actions_per_chunk` actions, the client sends a fresh observation. Value in [0, 1]. |
+
+Different values of `actions_per_chunk` and `chunk_size_threshold` result in different behaviours.
+
+
+On the one hand, increasing the value of `actions_per_chunk` will result in reducing the likelihood of ending up with no actions to execute, as more actions will be available when the new chunk is computed.
+However, larger values of `actions_per_chunk` might also result in less precise actions, due to the compounding errors consequent to predicting actions over longer timespans.
+
+On the other hand, increasing the value of `chunk_size_threshold` sends observations to the `PolicyServer` more often, producing a larger number of updated action chunks that overlap on significant portions. This results in high adaptability: in the limit, one action chunk is predicted for each observation, and each chunk is only marginally consumed before a new one arrives.
+This also puts more pressure on the inference pipeline because of the many requests. Conversely, values of `chunk_size_threshold` close to 0.0 collapse to the synchronous edge case, whereby new observations are only sent out once the current chunk is exhausted.
+
+We found the default values of `actions_per_chunk` and `chunk_size_threshold` to work well in the experiments we developed for the [SmolVLA paper](https://huggingface.co/papers/2506.01844), but recommend experimenting with different values to find the best fit for your setup.
+
+### Tuning async inference for your setup
+
+1. **Choose your computational resources carefully.** [PI0](https://huggingface.co/lerobot/pi0) occupies 14GB of memory at inference time, while [SmolVLA](https://huggingface.co/lerobot/smolvla_base) requires only ~2GB. Identify the best computational resource for your use case, keeping in mind that smaller policies require fewer resources. The combination of policy and device (CPU, MPS, or the number of CUDA cores on a given NVIDIA GPU) directly impacts the average inference latency you should expect.
+2. **Adjust your `fps` based on inference latency.** While the server generates a new action chunk, the client is not idle and is stepping through its current action queue. If the two processes happen at fundamentally different speeds, the client might end up with an empty queue. As such, you should reduce your fps if you consistently run out of actions in queue.
+3. **Adjust `chunk_size_threshold`**.
+ - Values closer to `0.0` result in almost sequential behavior; values closer to `1.0` send an observation at every step (more bandwidth, and more reliance on a good world model).
+ - We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` with `--debug_visualize_queue_size` set to `True`. This plots the evolution of the action queue size at runtime, which you can use to find the value of `chunk_size_threshold` that works best for your setup. A toy simulation of how these parameters interact is sketched after this list.
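+
+The interaction between the control `fps`, the server's inference latency, `actions_per_chunk`, and `chunk_size_threshold` can be hard to reason about in the abstract. Below is a deliberately simplified toy simulation (it ignores chunk aggregation and network jitter, and the latency value is an assumption) that counts how many control steps the robot would spend idle:
+
+```python
+# Toy model of the client-side action queue, not the real client loop:
+# the robot consumes one action per control step, and a new observation is sent
+# whenever the queue drops to `chunk_size_threshold * actions_per_chunk`.
+fps = 30                    # control frequency of the robot client
+inference_latency_s = 0.4   # assumed round-trip time to get a new chunk
+actions_per_chunk = 50
+chunk_size_threshold = 0.5
+
+queue = actions_per_chunk   # start with one full chunk
+pending = None              # control steps left until the requested chunk arrives
+idle_steps = 0
+
+for step in range(300):
+    if pending is None and queue <= chunk_size_threshold * actions_per_chunk:
+        pending = int(inference_latency_s * fps)  # request a new chunk
+    if pending is not None:
+        pending -= 1
+        if pending <= 0:
+            queue = actions_per_chunk             # simplification: the new chunk refills the queue
+            pending = None
+    if queue > 0:
+        queue -= 1                                # execute one action this control step
+    else:
+        idle_steps += 1                           # the robot would sit idle here
+
+print(f"Idle steps over 300 control steps: {idle_steps}")
+```
+
+With these (assumed) numbers the queue never empties; increasing the latency or lowering `actions_per_chunk` eventually produces idle steps, which is exactly the regime where you should lower your `fps` or raise `chunk_size_threshold`.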
+
+
+
+
+
+
+ The action queue size is plotted at runtime when the
+ `--debug-visualize-queue-size` flag is passed, for various levels of
+ `chunk_size_threshold` (`g` in the SmolVLA paper).
+
+
+
+---
+
+## Conclusion
+
+Asynchronous inference represents a significant advancement in real-time robotics control, addressing the fundamental challenge of inference latency that has long plagued robotics applications. Through this tutorial, you've learned how to implement a complete async inference pipeline that eliminates idle frames and enables smoother, more reactive robot behaviors.
+
+**Key Takeaways:**
+
+- **Paradigm Shift**: Async inference decouples action prediction from execution, allowing robots to continue acting while new action chunks are computed in parallel
+- **Performance Benefits**: Eliminates "wait-for-inference" lags that are inherent in synchronous approaches, becoming increasingly important as policy models grow larger
+- **Flexible Architecture**: The server-client design enables distributed computing, where inference can run on powerful remote hardware while maintaining real-time robot control
+- **Tunable Parameters**: Success depends on properly configuring `actions_per_chunk` and `chunk_size_threshold` for your specific hardware, policy, and task requirements
+- **Universal Compatibility**: Works with all LeRobot-supported policies, from lightweight ACT models to vision-language models like SmolVLA
+
+Start experimenting with the default parameters, monitor your action queue sizes, and iteratively refine your setup to achieve optimal performance for your specific use case.
+If you want to discuss this further, hop into our [Discord community](https://discord.gg/s3KuuzsPFb), or open an issue on our [GitHub repository](https://github.com/huggingface/lerobot/issues).
diff --git a/docs/source/backwardcomp.mdx b/docs/source/backwardcomp.mdx
new file mode 100644
index 00000000..3366c8ab
--- /dev/null
+++ b/docs/source/backwardcomp.mdx
@@ -0,0 +1,151 @@
+# Backward compatibility
+
+## Policy Normalization Migration (PR #1452)
+
+**Breaking Change**: LeRobot policies no longer have built-in normalization layers embedded in their weights. Normalization is now handled by external `PolicyProcessorPipeline` components.
+
+### What changed?
+
+| | Before PR #1452 | After PR #1452 |
+| -------------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
+| **Normalization Location** | Embedded in model weights (`normalize_inputs.*`) | External `PolicyProcessorPipeline` components |
+| **Model State Dict** | Contains normalization statistics | **Clean weights only** - no normalization parameters |
+| **Usage** | `policy(batch)` handles everything | `preprocessor(batch)` → `policy(...)` → `postprocessor(...)` |
+
+### Impact on existing models
+
+- Models trained **before** PR #1452 have normalization embedded in their weights
+- These models need migration to work with the new `PolicyProcessorPipeline` system
+- The migration extracts normalization statistics and creates separate processor pipelines
+
+### Migrating old models
+
+Use the migration script to convert models with embedded normalization:
+
+```shell
+python src/lerobot/processor/migrate_policy_normalization.py \
+ --pretrained-path lerobot/act_aloha_sim_transfer_cube_human \
+ --push-to-hub \
+ --branch migrated
+```
+
+The script:
+
+1. **Extracts** normalization statistics from model weights
+2. **Creates** external preprocessor and postprocessor pipelines
+3. **Removes** normalization layers from model weights
+4. **Saves** clean model + processor pipelines
+5. **Pushes** to Hub with automatic PR creation
+
+### Using migrated models
+
+```python
+# New usage pattern (after migration)
+from lerobot.policies.factory import make_policy, make_pre_post_processors
+
+# Load model and processors separately
+policy = make_policy(config, ds_meta=dataset.meta)
+preprocessor, postprocessor = make_pre_post_processors(
+ policy_cfg=config,
+ dataset_stats=dataset.meta.stats
+)
+
+# Process data through pipeline
+processed_batch = preprocessor(raw_batch)
+action = policy.select_action(processed_batch)
+final_action = postprocessor(action)
+```
+
+## Hardware API redesign
+
+PR [#777](https://github.com/huggingface/lerobot/pull/777) improves the LeRobot calibration but is **not backward-compatible**. Below is an overview of what changed and how you can continue to work with datasets created before this pull request.
+
+### What changed?
+
+| | Before PR #777 | After PR #777 |
+| --------------------------------- | ------------------------------------------------- | ------------------------------------------------------------ |
+| **Joint range** | Degrees `-180...180°` | **Normalised range** Joints: `-100...100` Gripper: `0...100` |
+| **Zero position (SO100 / SO101)** | Arm fully extended horizontally | **In the middle of the range for each joint** |
+| **Boundary handling** | Software safeguards to detect ±180° wrap-arounds | No wrap-around logic needed due to mid-range zero |
+
+---
+
+### Impact on existing datasets
+
+- Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly:
+ - Joint angles are offset and incorrectly normalized.
+- Any models directly finetuned or trained on the old data will need their inputs and outputs converted.
+
+### Using datasets made with the previous calibration system
+
+We provide a migration example script for replaying an episode recorded with the previous calibration here: `examples/backward_compatibility/replay.py`.
+Below we take you through the modifications that are done in the example script to make the previous calibration datasets work.
+
+```diff
++ key = f"{name.removeprefix('main_')}.pos"
+ action[key] = action_array[i].item()
++ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
++ action["elbow_flex.pos"] -= 90
+```
+
+Let's break this down.
+The new codebase uses the `.pos` suffix for position observations, and the `main_` prefix has been removed:
+
+
+```python
+key = f"{name.removeprefix('main_')}.pos"
+```
+
+
+For `"shoulder_lift"` (id = 2), the 0 position is shifted by -90 degrees and the direction is reversed compared to the old calibration/code.
+
+
+```python
+action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
+```
+
+
+For `"elbow_flex"` (id = 3), the 0 position is shifted by -90 degrees compared to the old calibration/code.
+
+
+```python
+action["elbow_flex.pos"] -= 90
+```
+
+
+To use degrees normalization, we then set the `--robot.use_degrees` option to `true`.
+
+```diff
+python examples/backward_compatibility/replay.py \
+ --robot.type=so101_follower \
+ --robot.port=/dev/tty.usbmodem5A460814411 \
+ --robot.id=blue \
++ --robot.use_degrees=true \
+ --dataset.repo_id=my_dataset_id \
+ --dataset.episode=0
+```
+
+### Using policies trained with the previous calibration system
+
+Policies output actions in the same format as the datasets (`torch.Tensors`). Therefore, the same transformations should be applied.
+
+To find these transformations, we recommend first trying to replay an episode of the dataset your policy was trained on, using the section above.
+Then, add these same transformations in your inference script (shown here in the `record.py` script):
+
+```diff
+action_values = predict_action(
+ observation_frame,
+ policy,
+ get_safe_torch_device(policy.config.device),
+ policy.config.use_amp,
+ task=single_task,
+ robot_type=robot.robot_type,
+ )
+ action = {key: action_values[i].item() for i, key in enumerate(robot.action_features)}
+
++ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
++ action["elbow_flex.pos"] -= 90
+ robot.send_action(action)
+```
+
+If you have questions or run into migration issues, feel free to ask them on [Discord](https://discord.gg/s3KuuzsPFb).
diff --git a/docs/source/cameras.mdx b/docs/source/cameras.mdx
new file mode 100644
index 00000000..5c35be0b
--- /dev/null
+++ b/docs/source/cameras.mdx
@@ -0,0 +1,206 @@
+# Cameras
+
+LeRobot offers multiple options for video capture, including phone cameras, built-in laptop cameras, external webcams, and Intel RealSense cameras. To efficiently record frames from most cameras, you can use either the `OpenCVCamera` or `RealSenseCamera` class. For additional compatibility details on the `OpenCVCamera` class, refer to the [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
+
+### Finding your camera
+
+To instantiate a camera, you need a camera identifier. This identifier might change if you reboot your computer or re-plug your camera, a behavior mostly dependent on your operating system.
+
+To find the camera indices of the cameras plugged into your system, run the following script:
+
+```bash
+lerobot-find-cameras opencv # or realsense for Intel Realsense cameras
+```
+
+The output will look something like this if you have two cameras connected:
+
+```
+--- Detected Cameras ---
+Camera #0:
+ Name: OpenCV Camera @ 0
+ Type: OpenCV
+ Id: 0
+ Backend api: AVFOUNDATION
+ Default stream profile:
+ Format: 16.0
+ Width: 1920
+ Height: 1080
+ Fps: 15.0
+--------------------
+(more cameras ...)
+```
+
+> [!WARNING]
+> When using Intel RealSense cameras on `macOS`, you could get this [error](https://github.com/IntelRealSense/librealsense/issues/12307): `Error finding RealSense cameras: failed to set power state`. This can be solved by running the same command with `sudo` permissions. Note that using RealSense cameras on `macOS` is unstable.
+
+## Use Cameras
+
+Below are two examples demonstrating how to work with the API:
+
+- **Asynchronous frame capture** using an OpenCV-based camera
+- **Color and depth capture** using an Intel RealSense camera
+
+
+
+
+
+```python
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
+from lerobot.cameras.configs import ColorMode, Cv2Rotation
+
+# Construct an `OpenCVCameraConfig` with your desired FPS, resolution, color mode, and rotation.
+config = OpenCVCameraConfig(
+ index_or_path=0,
+ fps=15,
+ width=1920,
+ height=1080,
+ color_mode=ColorMode.RGB,
+ rotation=Cv2Rotation.NO_ROTATION
+)
+
+# Instantiate and connect an `OpenCVCamera`, performing a warm-up read (default).
+camera = OpenCVCamera(config)
+camera.connect()
+
+# Read frames asynchronously in a loop via `async_read(timeout_ms)`
+try:
+ for i in range(10):
+ frame = camera.async_read(timeout_ms=200)
+ print(f"Async frame {i} shape:", frame.shape)
+finally:
+ camera.disconnect()
+```
+
+
+
+
+
+
+```python
+from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig
+from lerobot.cameras.realsense.camera_realsense import RealSenseCamera
+from lerobot.cameras.configs import ColorMode, Cv2Rotation
+
+# Create a `RealSenseCameraConfig` specifying your camera’s serial number and enabling depth.
+config = RealSenseCameraConfig(
+ serial_number_or_name="233522074606",
+ fps=15,
+ width=640,
+ height=480,
+ color_mode=ColorMode.RGB,
+ use_depth=True,
+ rotation=Cv2Rotation.NO_ROTATION
+)
+
+# Instantiate and connect a `RealSenseCamera` with warm-up read (default).
+camera = RealSenseCamera(config)
+camera.connect()
+
+# Capture a color frame via `read()` and a depth map via `read_depth()`.
+try:
+ color_frame = camera.read()
+ depth_map = camera.read_depth()
+ print("Color frame shape:", color_frame.shape)
+ print("Depth map shape:", depth_map.shape)
+finally:
+ camera.disconnect()
+```
+
+
+
+
+
+## Use your phone
+
+
+
+
+To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
+
+- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
+- Sign in both devices with the same Apple ID.
+- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
+
+For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
+
+Your iPhone should be detected automatically when running the camera setup script in the next section.
+
+
+
+
+If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera:
+
+1. _Install `v4l2loopback-dkms` and `v4l-utils`_. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
+
+
+```bash
+sudo apt install v4l2loopback-dkms v4l-utils
+```
+
+
+2. _Install [DroidCam](https://droidcam.app) on your phone_. This app is available for both iOS and Android.
+3. _Install [OBS Studio](https://obsproject.com)_. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
+
+
+```bash
+flatpak install flathub com.obsproject.Studio
+```
+
+
+4. _Install the DroidCam OBS plugin_. This plugin integrates DroidCam with OBS Studio. Install it with:
+
+
+```bash
+flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
+```
+
+
+5. _Start OBS Studio_. Launch with:
+
+
+```bash
+flatpak run com.obsproject.Studio
+```
+
+
+6. _Add your phone as a source_. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
+7. _Adjust resolution settings_. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
+8. _Start virtual camera_. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
+9. _Verify the virtual camera setup_. Use `v4l2-ctl` to list the devices:
+
+
+```bash
+v4l2-ctl --list-devices
+```
+
+
+You should see an entry like:
+
+```
+VirtualCam (platform:v4l2loopback-000):
+/dev/video1
+```
+
+10. _Check the camera resolution_. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
+
+
+```bash
+v4l2-ctl -d /dev/video1 --get-fmt-video
+```
+
+
+You should see an entry like:
+
+```
+>>> Format Video Capture:
+>>> Width/Height : 640/480
+>>> Pixel Format : 'YUYV' (YUYV 4:2:2)
+```
+
+Troubleshooting: If the resolution is not correct, you will have to delete the virtual camera port and try again, as it cannot be changed.
+
+If everything is set up correctly, you can proceed with the rest of the tutorial.
+
+
+
diff --git a/docs/source/contributing.md b/docs/source/contributing.md
new file mode 120000
index 00000000..f939e75f
--- /dev/null
+++ b/docs/source/contributing.md
@@ -0,0 +1 @@
+../../CONTRIBUTING.md
\ No newline at end of file
diff --git a/docs/source/debug_processor_pipeline.mdx b/docs/source/debug_processor_pipeline.mdx
new file mode 100644
index 00000000..4826c947
--- /dev/null
+++ b/docs/source/debug_processor_pipeline.mdx
@@ -0,0 +1,299 @@
+# Debug Your Processor Pipeline
+
+Processor pipelines can be complex, especially when chaining multiple transformation steps.
+Unlike simple function calls, pipelines lack natural observability: you can't easily see what happens
+between each step or where things go wrong.
+This guide provides debugging tools and techniques specifically designed to address these challenges
+and help you understand data flow through your pipelines.
+
+We'll explore three complementary debugging approaches: **hooks** for runtime monitoring, **step-through debugging** for detailed inspection, and **feature validation** for catching structural mismatches. Each serves a different purpose and together they provide complete visibility into your pipeline's behavior.
+
+## Understanding Hooks
+
+Hooks are functions that get called at specific points during pipeline execution.
+They provide a way to inspect, monitor, or modify data without changing your pipeline code.
+Think of them as "event listeners" for your pipeline.
+
+### What is a Hook?
+
+A hook is a callback function that gets automatically invoked at specific moments during pipeline execution.
+The concept comes from event-driven programming: imagine you could "hook into" the pipeline's execution flow to observe or react to what's happening.
+
+Think of hooks like inserting checkpoints into your pipeline. Every time the pipeline reaches one of these checkpoints, it pauses briefly to call your hook function, giving you a chance to inspect the current state, log information, and validate data.
+
+A hook is simply a function that accepts two parameters:
+
+- `step_idx: int` - The index of the current processing step (0, 1, 2, etc.)
+- `transition: EnvTransition` - The data transition at that point in the pipeline
+
+The beauty of hooks is their non-invasive nature: you can add monitoring, validation, or debugging logic without changing a single line of your pipeline code. The pipeline remains clean and focused on its core logic, while hooks handle the cross-cutting concerns like logging, monitoring, and debugging.
+
+### Before vs After Hooks
+
+The pipeline supports two types of hooks:
+
+- **Before hooks** (`register_before_step_hook`) - Called before each step executes
+- **After hooks** (`register_after_step_hook`) - Called after each step completes
+
+```python
+def before_hook(step_idx: int, transition: EnvTransition):
+ """Called before step processes the transition."""
+ print(f"About to execute step {step_idx}")
+ # Useful for: logging, validation, setup
+
+def after_hook(step_idx: int, transition: EnvTransition):
+ """Called after step has processed the transition."""
+ print(f"Completed step {step_idx}")
+ # Useful for: monitoring results, cleanup, debugging
+
+processor.register_before_step_hook(before_hook)
+processor.register_after_step_hook(after_hook)
+```
+
+### Implementing a NaN Detection Hook
+
+Here's a practical example of a hook that detects NaN values:
+
+```python
+def check_nans(step_idx: int, transition: EnvTransition):
+ """Check for NaN values in observations."""
+ obs = transition.get(TransitionKey.OBSERVATION)
+ if obs:
+ for key, value in obs.items():
+ if isinstance(value, torch.Tensor) and torch.isnan(value).any():
+ print(f"NaN detected in {key} at step {step_idx}")
+
+# Register the hook to run after each step
+processor.register_after_step_hook(check_nans)
+
+# Process your data - the hook will be called automatically
+output = processor(input_data)
+
+# Remove the hook when done debugging
+processor.unregister_after_step_hook(check_nans)
+```
+
+### How Hooks Work Internally
+
+Understanding the internal mechanism helps you use hooks more effectively. The pipeline maintains two separate lists: one for before-step hooks and another for after-step hooks. When you register a hook, it's simply appended to the appropriate list.
+
+During execution, the pipeline follows a strict sequence: for each processing step, it first calls all before-hooks in registration order, then executes the actual step transformation, and finally calls all after-hooks in registration order. This creates a predictable, sandwich-like structure around each step.
+
+The key insight is that hooks don't change the core pipeline logic—they're purely additive. The pipeline's `_forward` method orchestrates this dance between hooks and processing steps, ensuring that your debugging or monitoring code runs at exactly the right moments without interfering with the main data flow.
+
+Here's a simplified view of how the pipeline executes hooks:
+
+```python
+class DataProcessorPipeline:
+ def __init__(self):
+ self.steps = [...]
+ self.before_step_hooks = [] # List of before hooks
+ self.after_step_hooks = [] # List of after hooks
+
+ def _forward(self, transition):
+ """Internal method that processes the transition through all steps."""
+ for step_idx, processor_step in enumerate(self.steps):
+ # 1. Call all BEFORE hooks
+ for hook in self.before_step_hooks:
+ hook(step_idx, transition)
+
+ # 2. Execute the actual processing step
+ transition = processor_step(transition)
+
+ # 3. Call all AFTER hooks
+ for hook in self.after_step_hooks:
+ hook(step_idx, transition)
+
+ return transition
+
+ def register_before_step_hook(self, hook_fn):
+ self.before_step_hooks.append(hook_fn)
+
+ def register_after_step_hook(self, hook_fn):
+ self.after_step_hooks.append(hook_fn)
+```
+
+### Execution Flow
+
+The execution flow looks like this:
+
+```
+Input → Before Hook → Step 0 → After Hook → Before Hook → Step 1 → After Hook → ... → Output
+```
+
+For example, with 3 steps and both hook types:
+
+```python
+def timing_before(step_idx, transition):
+ print(f"⏱️ Starting step {step_idx}")
+
+def validation_after(step_idx, transition):
+ print(f"✅ Completed step {step_idx}")
+
+processor.register_before_step_hook(timing_before)
+processor.register_after_step_hook(validation_after)
+
+# This will output:
+# ⏱️ Starting step 0
+# ✅ Completed step 0
+# ⏱️ Starting step 1
+# ✅ Completed step 1
+# ⏱️ Starting step 2
+# ✅ Completed step 2
+```
+
+### Multiple Hooks
+
+You can register multiple hooks of the same type - they execute in the order registered:
+
+```python
+def log_shapes(step_idx: int, transition: EnvTransition):
+ obs = transition.get(TransitionKey.OBSERVATION)
+ if obs:
+ print(f"Step {step_idx} observation shapes:")
+ for key, value in obs.items():
+ if isinstance(value, torch.Tensor):
+ print(f" {key}: {value.shape}")
+
+processor.register_after_step_hook(check_nans) # Executes first
+processor.register_after_step_hook(log_shapes) # Executes second
+
+# Both hooks will be called after each step in registration order
+output = processor(input_data)
+```
+
+While hooks are excellent for monitoring specific issues (like NaN detection) or gathering metrics during normal pipeline execution, sometimes you need to dive deeper. When you want to understand exactly what happens at each step or debug complex transformation logic, step-through debugging provides the detailed inspection you need.
+
+## Step-Through Debugging
+
+Step-through debugging is like having a slow-motion replay for your pipeline. Instead of watching your data get transformed in one quick blur from input to output, you can pause and examine what happens after each individual step.
+
+This approach is particularly valuable when you're trying to understand a complex pipeline, debug unexpected behavior, or verify that each transformation is working as expected. Unlike hooks, which are great for automated monitoring, step-through debugging gives you manual, interactive control over the inspection process.
+
+The `step_through()` method is a generator that yields the transition state after each processing step, allowing you to inspect intermediate results. Think of it as creating a series of snapshots of your data as it flows through the pipeline—each snapshot shows you exactly what your data looks like after one more transformation has been applied.
+
+### How Step-Through Works
+
+The `step_through()` method fundamentally changes how the pipeline executes. Instead of running all steps in sequence and only returning the final result, it transforms the pipeline into an iterator that yields intermediate results.
+
+Here's what happens internally: the method starts by converting your input data into the pipeline's internal transition format, then yields this initial state. Next, it applies the first processing step and yields the result. Then it applies the second step to that result and yields again, and so on. Each `yield` gives you a complete snapshot of the transition at that point.
+
+This generator pattern is powerful because it's lazy—the pipeline only computes the next step when you ask for it. This means you can stop at any point, inspect the current state thoroughly, and decide whether to continue. You're not forced to run the entire pipeline just to debug one problematic step.
+
+Instead of running the entire pipeline and only seeing the final result, `step_through()` pauses after each step and gives you the intermediate transition:
+
+```python
+# This creates a generator that yields intermediate states
+for i, intermediate_result in enumerate(processor.step_through(input_data)):
+ print(f"=== After step {i} ===")
+
+ # Inspect the observation at this stage
+ obs = intermediate_result.get(TransitionKey.OBSERVATION)
+ if obs:
+ for key, value in obs.items():
+ if isinstance(value, torch.Tensor):
+ print(f"{key}: shape={value.shape}, dtype={value.dtype}")
+```
+
+### Interactive Debugging with Breakpoints
+
+You can add breakpoints in the step-through loop to interactively debug:
+
+```python
+# Step through the pipeline with debugging
+for i, intermediate in enumerate(processor.step_through(data)):
+ print(f"Step {i}: {processor.steps[i].__class__.__name__}")
+
+ # Set a breakpoint to inspect the current state
+ breakpoint() # Debugger will pause here
+
+ # You can now inspect 'intermediate' in the debugger:
+ # - Check tensor shapes and values
+ # - Verify expected transformations
+ # - Look for unexpected changes
+```
+
+During the debugger session, you can:
+
+- Examine `intermediate[TransitionKey.OBSERVATION]` to see observation data
+- Check `intermediate[TransitionKey.ACTION]` for action transformations
+- Inspect any part of the transition to understand what each step does
+
+Step-through debugging is perfect for understanding the _data_ transformations, but what about the _structure_ of that data? While hooks and step-through help you debug runtime behavior, you also need to ensure your pipeline produces data in the format expected by downstream components. This is where feature contract validation comes in.
+
+## Validating Feature Contracts
+
+Feature contracts define what data structure your pipeline expects as input and produces as output.
+Validating these contracts helps catch mismatches early.
+
+### Understanding Feature Contracts
+
+Each processor step has a `transform_features()` method that describes how it changes the data structure:
+
+```python
+# Get the expected output features from your pipeline
+initial_features = {
+ PipelineFeatureType.OBSERVATION: {
+ "observation.state": PolicyFeature(type=FeatureType.STATE, shape=(7,)),
+ "observation.image": PolicyFeature(type=FeatureType.IMAGE, shape=(3, 224, 224))
+ },
+ PipelineFeatureType.ACTION: {
+ "action": PolicyFeature(type=FeatureType.ACTION, shape=(4,))
+ }
+}
+
+# Check what your pipeline will output
+output_features = processor.transform_features(initial_features)
+
+print("Input features:")
+for feature_type, features in initial_features.items():
+ print(f" {feature_type}:")
+ for key, feature in features.items():
+ print(f" {key}: {feature.type.value}, shape={feature.shape}")
+
+print("\nOutput features:")
+for feature_type, features in output_features.items():
+ print(f" {feature_type}:")
+ for key, feature in features.items():
+ print(f" {key}: {feature.type.value}, shape={feature.shape}")
+```
+
+### Verifying Expected Features
+
+Check that your pipeline produces the features you expect:
+
+```python
+# Define what features you expect the pipeline to produce
+expected_keys = ["observation.state", "observation.image", "action"]
+
+print("Validating feature contract...")
+for expected_key in expected_keys:
+ found = False
+ for feature_type, features in output_features.items():
+ if expected_key in features:
+ feature = features[expected_key]
+ print(f"✅ {expected_key}: {feature.type.value}, shape={feature.shape}")
+ found = True
+ break
+
+ if not found:
+ print(f"❌ Missing expected feature: {expected_key}")
+```
+
+This validation helps ensure your pipeline will work correctly with downstream components that expect specific data structures.
+
+## Summary
+
+Now that you understand the three debugging approaches, you can tackle any pipeline issue systematically:
+
+1. **Hooks** - For runtime monitoring and validation without modifying pipeline code
+2. **Step-through** - For inspecting intermediate states and understanding transformations
+3. **Feature validation** - For ensuring data structure contracts are met
+
+**When to use each approach:**
+
+- Start with **step-through debugging** when you need to understand what your pipeline does or when something unexpected happens
+- Add **hooks** for continuous monitoring during development and production to catch issues automatically
+- Use **feature validation** before deployment to ensure your pipeline works with downstream components
+
+These three tools work together to give you the complete observability that complex pipelines naturally lack. With hooks watching for issues, step-through helping you understand behavior, and feature validation ensuring compatibility, you'll be able to debug any pipeline confidently and efficiently.
diff --git a/docs/source/envhub.mdx b/docs/source/envhub.mdx
new file mode 100644
index 00000000..ba646446
--- /dev/null
+++ b/docs/source/envhub.mdx
@@ -0,0 +1,424 @@
+# Loading Environments from the Hub
+
+The **EnvHub** feature allows you to load simulation environments directly from the Hugging Face Hub with a single line of code. This unlocks a powerful new model for collaboration: instead of environments being locked away inside monolithic libraries, anyone can publish custom environments and share them with the community.
+
+## Overview
+
+With EnvHub, you can:
+
+- Load environments from the Hub instantly
+- Share your custom simulation tasks with the community
+- Version control your environments using Git
+- Distribute complex physics simulations without packaging hassles
+
+## Quick Start
+
+Loading an environment from the Hub is as simple as:
+
+```python
+from lerobot.envs.factory import make_env
+
+# Load a hub environment (requires explicit consent to run remote code)
+env = make_env("lerobot/cartpole-env", trust_remote_code=True)
+```
+
+> **Security Notice:** Loading environments from the Hub executes Python code from third-party repositories. Only use `trust_remote_code=True` with repositories you trust. We strongly recommend pinning to a specific commit hash for reproducibility and security.
+
+## What is EnvHub?
+
+EnvHub is a framework that allows researchers and developers to:
+
+1. **Publish environments** to the Hugging Face Hub as Git repositories
+2. **Load environments** dynamically without installing them as packages
+3. **Version and track** environment changes using Git semantics
+4. **Discover** new simulation tasks shared by the community
+
+This design means you can go from discovering an interesting environment on the Hub to running experiments in seconds, without worrying about dependency conflicts or complex installation procedures.
+
+## Repository Structure
+
+To make your environment loadable from the Hub, your repository must contain at minimum:
+
+### Required Files
+
+**`env.py`** (or custom Python file)
+
+- Must expose a `make_env(n_envs: int, use_async_envs: bool)` function
+- This function should return one of:
+ - A `gym.vector.VectorEnv` (most common)
+ - A single `gym.Env` (will be automatically wrapped)
+ - A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks)
+
+### Optional Files
+
+**`requirements.txt`**
+
+- List any additional dependencies your environment needs
+- Users will need to install these manually before loading your environment
+
+**`README.md`**
+
+- Document your environment: what task it implements, observation/action spaces, rewards, etc.
+- Include usage examples and any special setup instructions
+
+**`.gitignore`**
+
+- Exclude unnecessary files from your repository
+
+### Example Repository Structure
+
+```
+my-environment-repo/
+├── env.py # Main environment definition (required)
+├── requirements.txt # Dependencies (optional)
+├── README.md # Documentation (recommended)
+├── assets/ # Images, videos, etc. (optional)
+│ └── demo.gif
+└── configs/ # Config files if needed (optional)
+ └── task_config.yaml
+```
+
+## Creating Your Environment Repository
+
+### Step 1: Define Your Environment
+
+Create an `env.py` file with a `make_env` function:
+
+```python
+# env.py
+import gymnasium as gym
+
+def make_env(n_envs: int = 1, use_async_envs: bool = False):
+ """
+ Create vectorized environments for your custom task.
+
+ Args:
+ n_envs: Number of parallel environments
+ use_async_envs: Whether to use AsyncVectorEnv or SyncVectorEnv
+
+ Returns:
+ gym.vector.VectorEnv or dict mapping suite names to vectorized envs
+ """
+ def _make_single_env():
+ # Create your custom environment
+ return gym.make("CartPole-v1")
+
+ # Choose vector environment type
+ env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv
+
+ # Create vectorized environment
+ vec_env = env_cls([_make_single_env for _ in range(n_envs)])
+
+ return vec_env
+```
+
+### Step 2: Test Locally
+
+Before uploading, test your environment locally:
+
+```python
+from lerobot.envs.utils import _load_module_from_path, _call_make_env, _normalize_hub_result
+
+# Load your module
+module = _load_module_from_path("./env.py")
+
+# Test the make_env function
+result = _call_make_env(module, n_envs=2, use_async_envs=False)
+normalized = _normalize_hub_result(result)
+
+# Verify it works
+suite_name = next(iter(normalized))
+env = normalized[suite_name][0]
+obs, info = env.reset()
+print(f"Observation shape: {obs.shape if hasattr(obs, 'shape') else type(obs)}")
+env.close()
+```
+
+### Step 3: Upload to the Hub
+
+Upload your repository to Hugging Face:
+
+```bash
+# Install huggingface_hub if needed
+pip install huggingface_hub
+
+# Login to Hugging Face
+huggingface-cli login
+
+# Create a new repository
+huggingface-cli repo create my-custom-env --type space --org my-org
+
+# Initialize git and push
+git init
+git add .
+git commit -m "Initial environment implementation"
+git remote add origin https://huggingface.co/my-org/my-custom-env
+git push -u origin main
+```
+
+Alternatively, use the `huggingface_hub` Python API:
+
+```python
+from huggingface_hub import HfApi
+
+api = HfApi()
+
+# Create repository
+api.create_repo("my-custom-env", repo_type="space")
+
+# Upload files
+api.upload_folder(
+ folder_path="./my-env-folder",
+ repo_id="username/my-custom-env",
+ repo_type="space",
+)
+```
+
+## Loading Environments from the Hub
+
+### Basic Usage
+
+```python
+from lerobot.envs.factory import make_env
+
+# Load from the hub
+envs_dict = make_env(
+ "username/my-custom-env",
+ n_envs=4,
+ trust_remote_code=True
+)
+
+# Access the environment
+suite_name = next(iter(envs_dict))
+env = envs_dict[suite_name][0]
+
+# Use it like any gym environment
+obs, info = env.reset()
+action = env.action_space.sample()
+obs, reward, terminated, truncated, info = env.step(action)
+```
+
+### Advanced: Pinning to Specific Versions
+
+For reproducibility and security, pin to a specific Git revision:
+
+```python
+# Pin to a specific branch
+env = make_env("username/my-env@main", trust_remote_code=True)
+
+# Pin to a specific commit (recommended for papers/experiments)
+env = make_env("username/my-env@abc123def456", trust_remote_code=True)
+
+# Pin to a tag
+env = make_env("username/my-env@v1.0.0", trust_remote_code=True)
+```
+
+### Custom File Paths
+
+If your environment definition is not in `env.py`:
+
+```python
+# Load from a custom file
+env = make_env("username/my-env:custom_env.py", trust_remote_code=True)
+
+# Combine with version pinning
+env = make_env("username/my-env@v1.0:envs/task_a.py", trust_remote_code=True)
+```
+
+### Async Environments
+
+For better performance with multiple environments:
+
+```python
+envs_dict = make_env(
+ "username/my-env",
+ n_envs=8,
+ use_async_envs=True, # Use AsyncVectorEnv for parallel execution
+ trust_remote_code=True
+)
+```
+
+## URL Format Reference
+
+The hub URL format supports several patterns:
+
+| Pattern | Description | Example |
+| -------------------- | ------------------------------ | -------------------------------------- |
+| `user/repo` | Load `env.py` from main branch | `make_env("lerobot/pusht-env")` |
+| `user/repo@revision` | Load from specific revision | `make_env("lerobot/pusht-env@main")` |
+| `user/repo:path` | Load custom file | `make_env("lerobot/envs:pusht.py")` |
+| `user/repo@rev:path` | Revision + custom file | `make_env("lerobot/envs@v1:pusht.py")` |
+
+## Multi-Task Environments
+
+For benchmarks with multiple tasks (like LIBERO), return a nested dictionary:
+
+```python
+def make_env(n_envs: int = 1, use_async_envs: bool = False):
+ env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv
+
+ # Return dict: {suite_name: {task_id: VectorEnv}}
+ return {
+ "suite_1": {
+ 0: env_cls([lambda: gym.make("Task1-v0") for _ in range(n_envs)]),
+ 1: env_cls([lambda: gym.make("Task2-v0") for _ in range(n_envs)]),
+ },
+ "suite_2": {
+ 0: env_cls([lambda: gym.make("Task3-v0") for _ in range(n_envs)]),
+ }
+ }
+```
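+
+On the consumer side, you can iterate over the suites and tasks such an environment exposes. Below is a minimal sketch, assuming a hypothetical `username/multi-task-env` repository that returns the nested dictionary shown above:
+
+```python
+from lerobot.envs.factory import make_env
+
+# "username/multi-task-env" is a hypothetical repository id used for illustration
+envs_dict = make_env("username/multi-task-env", n_envs=2, trust_remote_code=True)
+
+for suite_name, tasks in envs_dict.items():
+    for task_id, vec_env in tasks.items():
+        obs, info = vec_env.reset()
+        print(f"{suite_name} / task {task_id}: {vec_env.num_envs} parallel envs")
+        vec_env.close()
+```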
+
+## Security Considerations
+
+> **Important:** The `trust_remote_code=True` flag is required to execute environment code from the Hub. This is by design for security.
+
+When loading environments from the Hub:
+
+1. **Review the code first**: Visit the repository and inspect `env.py` before loading
+2. **Pin to commits**: Use specific commit hashes for reproducibility
+3. **Check dependencies**: Review `requirements.txt` for suspicious packages
+4. **Use trusted sources**: Prefer official organizations or well-known researchers
+5. **Sandbox if needed**: Run untrusted code in isolated environments (containers, VMs)
+
+Example of safe usage:
+
+```python
+# ❌ BAD: Loading without inspection
+env = make_env("random-user/untrusted-env", trust_remote_code=True)
+
+# ✅ GOOD: Review code, then pin to specific commit
+# 1. Visit https://huggingface.co/trusted-org/verified-env
+# 2. Review the env.py file
+# 3. Copy the commit hash
+env = make_env("trusted-org/verified-env@a1b2c3d4", trust_remote_code=True)
+```
+
+## Example: CartPole from the Hub
+
+Here's a complete example using the reference CartPole environment:
+
+```python
+from lerobot.envs.factory import make_env
+import numpy as np
+
+# Load the environment
+envs_dict = make_env("lerobot/cartpole-env", n_envs=4, trust_remote_code=True)
+
+# Get the vectorized environment
+suite_name = next(iter(envs_dict))
+env = envs_dict[suite_name][0]
+
+# Run a simple episode
+obs, info = env.reset()
+done = np.zeros(env.num_envs, dtype=bool)
+total_reward = np.zeros(env.num_envs)
+
+while not done.all():
+    # Random policy
+    action = env.action_space.sample()
+    obs, reward, terminated, truncated, info = env.step(action)
+    total_reward += reward * ~done  # only count reward for envs still in their first episode
+    done |= terminated | truncated  # remember which envs have finished at least once
+
+print(f"Average reward: {total_reward.mean():.2f}")
+env.close()
+```
+
+## Benefits of EnvHub
+
+### For Environment Authors
+
+- **Easy distribution**: No PyPI packaging required
+- **Version control**: Use Git for environment versioning
+- **Rapid iteration**: Push updates instantly
+- **Documentation**: Hub README renders beautifully
+- **Community**: Reach LeRobot users directly
+
+### For Researchers
+
+- **Quick experiments**: Load any environment in one line
+- **Reproducibility**: Pin to specific commits
+- **Discovery**: Browse environments on the Hub
+- **No conflicts**: No need to install conflicting packages
+
+### For the Community
+
+- **Growing ecosystem**: More diverse simulation tasks
+- **Standardization**: Common `make_env` API
+- **Collaboration**: Fork and improve existing environments
+- **Accessibility**: Lower barrier to sharing research
+
+## Troubleshooting
+
+### "Refusing to execute remote code"
+
+You must explicitly pass `trust_remote_code=True`:
+
+```python
+env = make_env("user/repo", trust_remote_code=True)
+```
+
+### "Module X not found"
+
+The hub environment has dependencies you need to install:
+
+```bash
+# Check the repo's requirements.txt and install dependencies
+pip install gymnasium numpy
+```
+
+### "make_env not found in module"
+
+Your `env.py` must expose a `make_env` function:
+
+```python
+def make_env(n_envs: int, use_async_envs: bool):
+ # Your implementation
+ pass
+```
+
+### Environment returns wrong type
+
+The `make_env` function must return:
+
+- A `gym.vector.VectorEnv`, or
+- A single `gym.Env`, or
+- A dict `{suite_name: {task_id: VectorEnv}}`
+
+## Best Practices
+
+1. **Document your environment**: Include observation/action space descriptions, reward structure, and termination conditions in your README
+2. **Add requirements.txt**: List all dependencies with versions
+3. **Test thoroughly**: Verify your environment works locally before pushing
+4. **Use semantic versioning**: Tag releases with version numbers
+5. **Add examples**: Include usage examples in your README
+6. **Keep it simple**: Minimize dependencies when possible
+7. **License your work**: Add a LICENSE file to clarify usage terms
+
+## Future Directions
+
+The EnvHub ecosystem enables exciting possibilities:
+
+- **GPU-accelerated physics**: Share Isaac Gym or Brax environments
+- **Photorealistic rendering**: Distribute environments with advanced graphics
+- **Multi-agent scenarios**: Complex interaction tasks
+- **Real-world simulators**: Digital twins of physical setups
+- **Procedural generation**: Infinite task variations
+- **Domain randomization**: Pre-configured DR pipelines
+
+As more researchers and developers contribute, the diversity and quality of available environments will grow, benefiting the entire robotics learning community.
+
+## See Also
+
+- [Hugging Face Hub Documentation](https://huggingface.co/docs/hub/en/index)
+- [Gymnasium Documentation](https://gymnasium.farama.org/index.html)
+- [Example Hub Environment](https://huggingface.co/lerobot/cartpole-env)
diff --git a/docs/source/feetech.mdx b/docs/source/feetech.mdx
new file mode 100644
index 00000000..bba60e4c
--- /dev/null
+++ b/docs/source/feetech.mdx
@@ -0,0 +1,71 @@
+# Feetech Motor Firmware Update
+
+This tutorial guides you through updating the firmware of Feetech motors using the official Feetech software.
+
+## Prerequisites
+
+- Windows computer (Feetech software is only available for Windows)
+- Feetech motor control board
+- USB cable to connect the control board to your computer
+- Feetech motors connected to the control board
+
+## Step 1: Download Feetech Software
+
+1. Visit the official Feetech software download page: [https://www.feetechrc.com/software.html](https://www.feetechrc.com/software.html)
+2. Download the latest version of the Feetech debugging software (FD)
+3. Install the software on your Windows computer
+
+## Step 2: Hardware Setup
+
+1. Connect your Feetech motors to the motor control board
+2. Connect the motor control board to your Windows computer via USB cable
+3. Ensure power is supplied to the motors
+
+## Step 3: Configure Connection
+
+1. Launch the Feetech debugging software
+2. Select the correct COM port from the port dropdown menu
+ - If unsure which port to use, check Windows Device Manager under "Ports (COM & LPT)"
+3. Set the appropriate baud rate (typically 1000000 for most Feetech motors)
+4. Click "Open" to establish communication with the control board
+
+## Step 4: Scan for Motors
+
+1. Once connected, click the "Search" button to detect all connected motors
+2. The software will automatically discover and list all motors on the bus
+3. Each motor will appear with its ID number
+
+## Step 5: Update Firmware
+
+For each motor you want to update:
+
+1. **Select the motor** from the list by clicking on it
+2. **Click on the Upgrade tab**
+3. **Click on the Online button**:
+   - If a firmware update is found, it will be displayed in the box
+4. **Click on the Upgrade button**:
+   - The update progress will be displayed
+
+## Step 6: Verify Update
+
+1. After the update completes, the software should automatically refresh the motor information
+2. Verify that the firmware version has been updated to the expected version
+
+## Important Notes
+
+⚠️ **Warning**: Do not disconnect power or USB during a firmware update; doing so can brick the motor.
+
+## Bonus: Motor Debugging on Linux/macOS
+
+For debugging purposes only, you can use the open-source Feetech Debug Tool:
+
+- **Repository**: [FT_SCServo_Debug_Qt](https://github.com/CarolinePascal/FT_SCServo_Debug_Qt/tree/fix/port-search-timer)
+
+### Installation Instructions
+
+Follow the instructions in the repository to install the tool. On Ubuntu you can install it directly; on macOS you need to build it from source.
+
+**Limitations:**
+
+- This tool is for debugging and parameter adjustment only
+- Firmware updates must still be done on Windows with official Feetech software
diff --git a/docs/source/getting_started_real_world_robot.mdx b/docs/source/getting_started_real_world_robot.mdx
deleted file mode 100644
index dc7bb741..00000000
--- a/docs/source/getting_started_real_world_robot.mdx
+++ /dev/null
@@ -1,370 +0,0 @@
-# Getting Started with Real-World Robots
-
-This tutorial will explain you how to train a neural network to autonomously control a real robot.
-
-**You'll learn:**
-1. How to record and visualize your dataset.
-2. How to train a policy using your data and prepare it for evaluation.
-3. How to evaluate your policy and visualize the results.
-
-By following these steps, you'll be able to replicate tasks like picking up a Lego block and placing it in a bin with a high success rate, as demonstrated in [this video](https://x.com/RemiCadene/status/1814680760592572934).
-
-This tutorial is specifically made for the affordable [SO-101](https://github.com/TheRobotStudio/SO-ARM100) robot, but it contains additional information to be easily adapted to various types of robots like [Aloha bimanual robot](https://aloha-2.github.io) by changing some configurations. The SO-101 consists of a leader arm and a follower arm, each with 6 motors. It can work with one or several cameras to record the scene, which serve as visual sensors for the robot.
-
-During the data collection phase, you will control the follower arm by moving the leader arm. This process is known as "teleoperation." This technique is used to collect robot trajectories. Afterward, you'll train a neural network to imitate these trajectories and deploy the network to enable your robot to operate autonomously.
-
-If you encounter any issues at any step of the tutorial, feel free to seek help on [Discord](https://discord.com/invite/s3KuuzsPFb) or don't hesitate to iterate with us on the tutorial by creating issues or pull requests.
-
-## Setup and Calibrate
-
-If you haven't yet setup and calibrate the SO-101 follow these steps:
-1. [Find ports and update config file](./assemble_so101#find-the-usb-ports-associated-to-each-arm)
-2. [Calibrate](./assemble_so101#calibrate)
-
-## Teleoperate
-
-Run this simple script to teleoperate your robot (it won't connect and display the cameras):
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --robot.cameras='{}' \
- --control.type=teleoperate
-```
-
-The teleoperate command will automatically:
-1. Identify any missing calibrations and initiate the calibration procedure.
-2. Connect the robot and start teleoperation.
-
-## Setup Cameras
-
-To connect a camera you have three options:
-1. OpenCVCamera which allows us to use any camera: usb, realsense, laptop webcam
-2. iPhone camera with MacOS
-3. Phone camera on Linux
-
-### Use OpenCVCamera
-
-The [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py) class allows you to efficiently record frames from most cameras using the [`opencv2`](https://docs.opencv.org) library. For more details on compatibility, see [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
-
-To instantiate an [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py), you need a camera index (e.g. `OpenCVCamera(camera_index=0)`). When you only have one camera like a webcam of a laptop, the camera index is usually `0` but it might differ, and the camera index might change if you reboot your computer or re-plug your camera. This behavior depends on your operating system.
-
-To find the camera indices, run the following utility script, which will save a few frames from each detected camera:
-```bash
-python lerobot/common/robot_devices/cameras/opencv.py \
- --images-dir outputs/images_from_opencv_cameras
-```
-
-The output will look something like this if you have two cameras connected:
-```
-Mac or Windows detected. Finding available camera indices through scanning all indices from 0 to 60
-[...]
-Camera found at index 0
-Camera found at index 1
-[...]
-Connecting cameras
-OpenCVCamera(0, fps=30.0, width=1920.0, height=1080.0, color_mode=rgb)
-OpenCVCamera(1, fps=24.0, width=1920.0, height=1080.0, color_mode=rgb)
-Saving images to outputs/images_from_opencv_cameras
-Frame: 0000 Latency (ms): 39.52
-[...]
-Frame: 0046 Latency (ms): 40.07
-Images have been saved to outputs/images_from_opencv_cameras
-```
-
-Check the saved images in `outputs/images_from_opencv_cameras` to identify which camera index corresponds to which physical camera (e.g. `0` for `camera_00` or `1` for `camera_01`):
-```
-camera_00_frame_000000.png
-[...]
-camera_00_frame_000047.png
-camera_01_frame_000000.png
-[...]
-camera_01_frame_000047.png
-```
-
-Note: Some cameras may take a few seconds to warm up, and the first frame might be black or green.
-
-Now that you have the camera indexes, you should specify the camera's in the config.
-
-### Use your phone
-
-
-
-To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
-- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
-- Sign in both devices with the same Apple ID.
-- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
-
-For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
-
-Your iPhone should be detected automatically when running the camera setup script in the next section.
-
-
-
-
-If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera
-
-1. *Install `v4l2loopback-dkms` and `v4l-utils`*. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
-```python
-sudo apt install v4l2loopback-dkms v4l-utils
-```
-2. *Install [DroidCam](https://droidcam.app) on your phone*. This app is available for both iOS and Android.
-3. *Install [OBS Studio](https://obsproject.com)*. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
-```python
-flatpak install flathub com.obsproject.Studio
-```
-4. *Install the DroidCam OBS plugin*. This plugin integrates DroidCam with OBS Studio. Install it with:
-```python
-flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
-```
-5. *Start OBS Studio*. Launch with:
-```python
-flatpak run com.obsproject.Studio
-```
-6. *Add your phone as a source*. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
-7. *Adjust resolution settings*. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
-8. *Start virtual camera*. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
-9. *Verify the virtual camera setup*. Use `v4l2-ctl` to list the devices:
-```python
-v4l2-ctl --list-devices
-```
-You should see an entry like:
-```
-VirtualCam (platform:v4l2loopback-000):
-/dev/video1
-```
-10. *Check the camera resolution*. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
-```python
-v4l2-ctl -d /dev/video1 --get-fmt-video
-```
-You should see an entry like:
-```
->>> Format Video Capture:
->>> Width/Height : 640/480
->>> Pixel Format : 'YUYV' (YUYV 4:2:2)
-```
-
-Troubleshooting: If the resolution is not correct you will have to delete the Virtual Camera port and try again as it cannot be changed.
-
-If everything is set up correctly, you can proceed with the rest of the tutorial.
-
-
-
-
-## Teleoperate with cameras
-
-We can now teleoperate again while at the same time visualizing the cameras and joint positions with `rerun`.
-
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --control.type=teleoperate
- --control.display_data=true
-```
-
-## Record a dataset
-
-Once you're familiar with teleoperation, you can record your first dataset with SO-101.
-
-We use the Hugging Face hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can login via the cli using a write-access token, this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
-
-Add your token to the cli by running this command:
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Then store your Hugging Face repository name in a variable:
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-
-Now you can record a dataset, to record 2 episodes and upload your dataset to the hub execute this command:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/so101_test \
- --control.tags='["so101","tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=2 \
- --control.push_to_hub=true
-```
-
-You will see a lot of lines appearing like this one:
-```
-INFO 2024-08-10 15:02:58 ol_robot.py:219 dt:33.34 (30.0hz) dtRlead: 5.06 (197.5hz) dtWfoll: 0.25 (3963.7hz) dtRfoll: 6.22 (160.7hz) dtRlaptop: 32.57 (30.7hz) dtRphone: 33.84 (29.5hz)
-```
-
-| Field | Meaning |
-|:---|:---|
-| `2024-08-10 15:02:58` | Timestamp when `print` was called. |
-| `ol_robot.py:219` | Source file and line number of the `print` call (`lerobot/scripts/control_robot.py` at line `219`). |
-| `dt: 33.34 (30.0 Hz)` | Delta time (ms) between teleop steps (target: 30.0 Hz, `--fps 30`). Yellow if step is too slow. |
-| `dtRlead: 5.06 (197.5 Hz)` | Delta time (ms) for reading present position from the **leader arm**. |
-| `dtWfoll: 0.25 (3963.7 Hz)` | Delta time (ms) for writing goal position to the **follower arm** (asynchronous). |
-| `dtRfoll: 6.22 (160.7 Hz)` | Delta time (ms) for reading present position from the **follower arm**. |
-| `dtRlaptop: 32.57 (30.7 Hz)` | Delta time (ms) for capturing an image from the **laptop camera** (async thread). |
-| `dtRphone: 33.84 (29.5 Hz)` | Delta time (ms) for capturing an image from the **phone camera** (async thread). |
-
-
-#### Dataset upload
-Locally your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}` (e.g. `data/cadene/so101_test`). At the end of data recording, your dataset will be uploaded on your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test) that you can obtain by running:
-```bash
-echo https://huggingface.co/datasets/${HF_USER}/so101_test
-```
-Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
-
-You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
-
-#### Record function
-
-The `record` function provides a suite of tools for capturing and managing data during robot operation:
-
-##### 1. Frame Capture and Video Encoding
-- Frames from cameras are saved to disk during recording.
-- At the end of each episode, frames are encoded into video files.
-
-##### 2. Data Storage
-- Data is stored using the `LeRobotDataset` format.
-- By default, the dataset is pushed to your Hugging Face page.
- - To disable uploading, use `--control.push_to_hub=false`.
-
-##### 3. Checkpointing and Resuming
-- Checkpoints are automatically created during recording.
-- If an issue occurs, you can resume by re-running the same command with `--control.resume=true`.
-- To start recording from scratch, **manually delete** the dataset directory.
-
-##### 4. Recording Parameters
-Set the flow of data recording using command-line arguments:
-- `--control.warmup_time_s=10`
- Number of seconds before starting data collection (default: **10 seconds**).
- Allows devices to warm up and synchronize.
-- `--control.episode_time_s=60`
- Duration of each data recording episode (default: **60 seconds**).
-- `--control.reset_time_s=60`
- Duration for resetting the environment after each episode (default: **60 seconds**).
-- `--control.num_episodes=50`
- Total number of episodes to record (default: **50**).
-
-##### 5. Keyboard Controls During Recording
-Control the data recording flow using keyboard shortcuts:
-- Press **Right Arrow (`→`)**: Early stop the current episode or reset time and move to the next.
-- Press **Left Arrow (`←`)**: Cancel the current episode and re-record it.
-- Press **Escape (`ESC`)**: Immediately stop the session, encode videos, and upload the dataset.
-
-#### Tips for gathering data
-
-Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible on the camera's. A good rule of thumb is you should be able to do the task yourself by only looking at the camera images.
-
-In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
-
-Avoid adding too much variation too quickly, as it may hinder your results.
-
-
-#### Troubleshooting:
-- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
-
-## Visualize a dataset
-
-If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
-```bash
-echo ${HF_USER}/so101_test
-```
-
-If you didn't upload with `--control.push_to_hub=false`, you can visualize it locally with (via a window in the browser `http://127.0.0.1:9090` with the visualization tool):
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id ${HF_USER}/so101_test \
- --local-files-only 1
-```
-
-This will launch a local web server that looks like this:
-
-
-
-
-## Replay an episode
-
-A useful feature is the `replay` function, which allows to replay on your robot any episode that you've recorded or episodes from any dataset out there. This function helps you test the repeatability of your robot's actions and assess transferability across robots of the same model.
-
-You can replay the first episode on your robot with:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/so101_test \
- --control.episode=0
-```
-
-Your robot should replicate movements similar to those you recorded. For example, check out [this video](https://x.com/RemiCadene/status/1793654950905680090) where we use `replay` on a Aloha robot from [Trossen Robotics](https://www.trossenrobotics.com).
-
-## Train a policy
-
-To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=${HF_USER}/so101_test \
- --policy.type=act \
- --output_dir=outputs/train/act_so101_test \
- --job_name=act_so101_test \
- --policy.device=cuda \
- --wandb.enable=true
-```
-
-Let's explain the command:
-1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
-4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
-5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
-
-Training should take several hours. You will find checkpoints in `outputs/train/act_so101_test/checkpoints`.
-
-To resume training from a checkpoint, below is an example command to resume from `last` checkpoint of the `act_so101_test` policy:
-```bash
-python lerobot/scripts/train.py \
- --config_path=outputs/train/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
- --resume=true
-```
-
-#### Upload policy checkpoints
-
-Once training is done, upload the latest checkpoint with:
-```bash
-huggingface-cli upload ${HF_USER}/act_so101_test \
- outputs/train/act_so101_test/checkpoints/last/pretrained_model
-```
-
-You can also upload intermediate checkpoints with:
-```bash
-CKPT=010000
-huggingface-cli upload ${HF_USER}/act_so101_test${CKPT} \
- outputs/train/act_so101_test/checkpoints/${CKPT}/pretrained_model
-```
-
-## Evaluate your policy
-
-You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/eval_act_so101_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=10 \
- --control.push_to_hub=true \
- --control.policy.path=outputs/train/act_so101_test/checkpoints/last/pretrained_model
-```
-
-As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
-1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint with (e.g. `outputs/train/eval_act_so101_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so101_test`).
-2. The name of dataset begins by `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so101_test`).
diff --git a/docs/source/groot.mdx b/docs/source/groot.mdx
new file mode 100644
index 00000000..729a6465
--- /dev/null
+++ b/docs/source/groot.mdx
@@ -0,0 +1,125 @@
+# GR00T N1.5 Policy
+
+GR00T N1.5 is an open foundation model from NVIDIA designed for generalized humanoid robot reasoning and skills. It is a cross-embodiment model that accepts multimodal input, including language and images, to perform manipulation tasks in diverse environments.
+
+This document outlines the specifics of its integration and usage within the LeRobot framework.
+
+## Model Overview
+
+NVIDIA Isaac GR00T N1.5 is an upgraded version of the GR00T N1 foundation model. It is built to improve generalization and language-following abilities for humanoid robots.
+
+Developers and researchers can post-train GR00T N1.5 with their own real or synthetic data to adapt it for specific humanoid robots or tasks.
+
+GR00T N1.5 (specifically the GR00T-N1.5-3B model) is built using pre-trained vision and language encoders. It utilizes a flow matching action transformer to model a chunk of actions, conditioned on vision, language, and proprioception.
+
+Its strong performance comes from being trained on an expansive and diverse humanoid dataset, which includes:
+
+- Real captured data from robots.
+- Synthetic data generated using NVIDIA Isaac GR00T Blueprint.
+- Internet-scale video data.
+
+This approach allows the model to be highly adaptable through post-training for specific embodiments, tasks, and environments.
+
+## Installation Requirements
+
+As of today, GR00T N1.5 requires Flash Attention for its internal workings.
+
+We are working on making this optional, but in the meantime it requires an extra installation step, and the policy can only be used on CUDA-enabled devices.
+
+1. Follow the Environment Setup section of our [Installation Guide](./installation). **Attention:** do not install `lerobot` in this step.
+2. Install [Flash Attention](https://github.com/Dao-AILab/flash-attention) by running:
+
+```bash
+# Check https://pytorch.org/get-started/locally/ for your system
+pip install "torch>=2.2.1,<2.8.0" "torchvision>=0.21.0,<0.23.0" # --index-url https://download.pytorch.org/whl/cu1XX
+pip install ninja "packaging>=24.2,<26.0" # flash attention dependencies
+pip install "flash-attn>=2.5.9,<3.0.0" --no-build-isolation
+python -c "import flash_attn; print(f'Flash Attention {flash_attn.__version__} imported successfully')"
+```
+
+3. Install LeRobot by running:
+
+```bash
+pip install "lerobot[groot]"
+```
+
+## Usage
+
+To use GR00T in your LeRobot configuration, specify the policy type as:
+
+```python
+policy.type=groot
+```
+
+## Training
+
+### Training Command Example
+
+Here's a complete training command for finetuning the base GR00T model on your own dataset:
+
+```bash
+# Using a multi-GPU setup
+accelerate launch \
+ --multi_gpu \
+ --num_processes=$NUM_GPUS \
+ $(which lerobot-train) \
+ --output_dir=$OUTPUT_DIR \
+ --save_checkpoint=true \
+ --batch_size=$BATCH_SIZE \
+ --steps=$NUM_STEPS \
+ --save_freq=$SAVE_FREQ \
+ --log_freq=$LOG_FREQ \
+ --policy.push_to_hub=true \
+ --policy.type=groot \
+ --policy.repo_id=$REPO_ID \
+ --policy.tune_diffusion_model=false \
+ --dataset.repo_id=$DATASET_ID \
+ --wandb.enable=true \
+ --wandb.disable_artifact=true \
+ --job_name=$JOB_NAME
+```
+
+## Performance Results
+
+### Libero Benchmark Results
+
+> [!NOTE]
+> Follow our instructions for Libero usage: [Libero](./libero)
+
+GR00T has demonstrated strong performance on the Libero benchmark suite. To validate the LeRobot implementation, we finetuned the GR00T N1.5 model for 30k steps on the Libero dataset and compared the results against the GR00T reference.
+
+| Benchmark | LeRobot Implementation | GR00T Reference |
+| ------------------ | ---------------------- | --------------- |
+| **Libero Spatial** | 82.0% | 92.0% |
+| **Libero Object** | 99.0% | 92.0% |
+| **Libero Long** | 82.0% | 76.0% |
+| **Average** | 87.0% | 87.0% |
+
+These results demonstrate GR00T's strong generalization capabilities across diverse robotic manipulation tasks. To reproduce these results, you can follow the instructions in the [Libero](https://huggingface.co/docs/lerobot/libero) section.
+
+### Evaluate in your hardware setup
+
+Once you have trained your model, you can run inference on your downstream task. Follow the instructions in [Imitation Learning for Robots](./il_robots). For example:
+
+```bash
+lerobot-record \
+ --robot.type=bi_so100_follower \
+ --robot.left_arm_port=/dev/ttyACM1 \
+ --robot.right_arm_port=/dev/ttyACM0 \
+ --robot.id=bimanual_follower \
+ --robot.cameras='{ right: {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30},
+ left: {"type": "opencv", "index_or_path": 2, "width": 640, "height": 480, "fps": 30},
+ top: {"type": "opencv", "index_or_path": 4, "width": 640, "height": 480, "fps": 30},
+ }' \
+ --display_data=true \
+ --dataset.repo_id=/eval_groot-bimanual \
+ --dataset.num_episodes=10 \
+ --dataset.single_task="Grab and handover the red cube to the other arm" \
+ --dataset.episode_time_s=30 \
+ --dataset.reset_time_s=10 \
+ --policy.path=/groot-bimanual # your trained model
+```
+
+## License
+
+This model follows the **Apache 2.0 License**, consistent with the original [GR00T repository](https://github.com/NVIDIA/Isaac-GR00T).
diff --git a/docs/source/hilserl.mdx b/docs/source/hilserl.mdx
new file mode 100644
index 00000000..ad1c74f9
--- /dev/null
+++ b/docs/source/hilserl.mdx
@@ -0,0 +1,923 @@
+# HIL-SERL Real Robot Training Workflow Guide
+
+In this tutorial you will go through the full Human-in-the-Loop Sample-Efficient Reinforcement Learning (HIL-SERL) workflow using LeRobot. You will master training a policy with RL on a real robot in just a few hours.
+
+HIL-SERL is a sample-efficient reinforcement learning algorithm that combines human demonstrations with online learning and human interventions. The approach starts from a small set of human demonstrations, uses them to train a reward classifier, and then employs an actor-learner architecture where humans can intervene during policy execution to guide exploration and correct unsafe behaviors. In this tutorial, you'll use a gamepad to provide interventions and control the robot during the learning process.
+
+It combines three key ingredients:
+
+1. **Offline demonstrations & reward classifier:** a handful of human-teleop episodes plus a vision-based success detector give the policy a shaped starting point.
+
+2. **On-robot actor / learner loop with human interventions:** a distributed Soft Actor Critic (SAC) learner updates the policy while an actor explores on the physical robot; the human can jump in at any time to correct dangerous or unproductive behaviour.
+
+3. **Safety & efficiency tools:** joint/end-effector (EE) bounds, crop region of interest (ROI) preprocessing and WandB monitoring keep the data useful and the hardware safe.
+
+Together these elements let HIL-SERL reach near-perfect task success and faster cycle times than imitation-only baselines.
+
+
+Figure: HIL-SERL workflow, Luo et al. 2024.
+
+This guide provides step-by-step instructions for training a policy on a real robot using LeRobot's HIL-SERL implementation.
+
+## What do I need?
+
+- A gamepad (recommended) or keyboard to control the robot
+- A Nvidia GPU
+- A real robot with a follower arm and a leader arm (the leader arm is optional if you use the keyboard or the gamepad)
+- A URDF file for the robot for the kinematics package (check `lerobot/model/kinematics.py`)
+
+## What kind of tasks can I train?
+
+One can use HIL-SERL to train on a variety of manipulation tasks. Some recommendations:
+
+- Start with a simple task to understand how the system works.
+ - Push cube to a goal region
+ - Pick and lift cube with the gripper
+- Avoid extremely long-horizon tasks. Focus on tasks that can be completed in 5-10 seconds.
+- Once you have a good idea of how the system works, you can try more complex tasks and longer horizons.
+ - Pick and place cube
+ - Bimanual tasks to pick objects with two arms
+ - Hand-over tasks to transfer objects from one arm to another
+ - Go crazy!
+
+## Install LeRobot with HIL-SERL
+
+To install LeRobot with HIL-SERL, you need to install the `hilserl` extra.
+
+```bash
+pip install -e ".[hilserl]"
+```
+
+## Real Robot Training Workflow
+
+### Understanding Configuration
+
+The training process begins with proper configuration for the HILSerl environment. The main configuration class is `GymManipulatorConfig` in `lerobot/rl/gym_manipulator.py`, which contains nested `HILSerlRobotEnvConfig` and `DatasetConfig`. The configuration is organized into focused, nested sub-configs:
+
+
+```python
+class GymManipulatorConfig:
+ env: HILSerlRobotEnvConfig # Environment configuration (nested)
+ dataset: DatasetConfig # Dataset recording/replay configuration (nested)
+ mode: str | None = None # "record", "replay", or None (for training)
+ device: str = "cpu" # Compute device
+
+class HILSerlRobotEnvConfig(EnvConfig):
+ robot: RobotConfig | None = None # Main robot agent (defined in `lerobot/robots`)
+ teleop: TeleoperatorConfig | None = None # Teleoperator agent, e.g., gamepad or leader arm
+ processor: HILSerlProcessorConfig # Processing pipeline configuration (nested)
+ name: str = "real_robot" # Environment name
+ task: str | None = None # Task identifier
+ fps: int = 10 # Control frequency
+
+# Nested processor configuration
+class HILSerlProcessorConfig:
+ control_mode: str = "gamepad" # Control mode
+ observation: ObservationConfig | None = None # Observation processing settings
+ image_preprocessing: ImagePreprocessingConfig | None = None # Image crop/resize settings
+ gripper: GripperConfig | None = None # Gripper control and penalty settings
+ reset: ResetConfig | None = None # Environment reset and timing settings
+ inverse_kinematics: InverseKinematicsConfig | None = None # IK processing settings
+ reward_classifier: RewardClassifierConfig | None = None # Reward classifier settings
+ max_gripper_pos: float | None = 100.0 # Maximum gripper position
+
+# Sub-configuration classes
+class ObservationConfig:
+ add_joint_velocity_to_observation: bool = False # Add joint velocities to state
+ add_current_to_observation: bool = False # Add motor currents to state
+ display_cameras: bool = False # Display camera feeds during execution
+
+class ImagePreprocessingConfig:
+ crop_params_dict: dict[str, tuple[int, int, int, int]] | None = None # Image cropping parameters
+ resize_size: tuple[int, int] | None = None # Target image size
+
+class GripperConfig:
+ use_gripper: bool = True # Enable gripper control
+ gripper_penalty: float = 0.0 # Penalty for inappropriate gripper usage
+
+class ResetConfig:
+ fixed_reset_joint_positions: Any | None = None # Joint positions for reset
+ reset_time_s: float = 5.0 # Time to wait during reset
+ control_time_s: float = 20.0 # Maximum episode duration
+ terminate_on_success: bool = True # Whether to terminate episodes on success detection
+
+class InverseKinematicsConfig:
+ urdf_path: str | None = None # Path to robot URDF file
+ target_frame_name: str | None = None # End-effector frame name
+ end_effector_bounds: dict[str, list[float]] | None = None # EE workspace bounds
+ end_effector_step_sizes: dict[str, float] | None = None # EE step sizes per axis
+
+class RewardClassifierConfig:
+ pretrained_path: str | None = None # Path to pretrained reward classifier
+ success_threshold: float = 0.5 # Success detection threshold
+ success_reward: float = 1.0 # Reward value for successful episodes
+
+# Dataset configuration
+class DatasetConfig:
+ repo_id: str # LeRobot dataset repository ID
+ task: str # Task identifier
+ root: str | None = None # Local dataset root directory
+ num_episodes_to_record: int = 5 # Number of episodes for recording
+ replay_episode: int | None = None # Episode index for replay
+ push_to_hub: bool = False # Whether to push datasets to Hub
+```
+
+
+### Processor Pipeline Architecture
+
+HIL-SERL uses a modular processor pipeline architecture that processes robot observations and actions through a series of composable steps; a simplified sketch of this chaining appears after the two step lists below. The pipeline is divided into two main components:
+
+#### Environment Processor Pipeline
+
+The environment processor (`env_processor`) handles incoming observations and environment state:
+
+1. **VanillaObservationProcessorStep**: Converts raw robot observations into standardized format
+2. **JointVelocityProcessorStep** (optional): Adds joint velocity information to observations
+3. **MotorCurrentProcessorStep** (optional): Adds motor current readings to observations
+4. **ForwardKinematicsJointsToEE** (optional): Computes end-effector pose from joint positions
+5. **ImageCropResizeProcessorStep** (optional): Crops and resizes camera images
+6. **TimeLimitProcessorStep** (optional): Enforces episode time limits
+7. **GripperPenaltyProcessorStep** (optional): Applies penalties for inappropriate gripper usage
+8. **RewardClassifierProcessorStep** (optional): Automated reward detection using vision models
+9. **AddBatchDimensionProcessorStep**: Converts data to batch format for neural network processing
+10. **DeviceProcessorStep**: Moves data to the specified compute device (CPU/GPU)
+
+#### Action Processor Pipeline
+
+The action processor (`action_processor`) handles outgoing actions and human interventions:
+
+1. **AddTeleopActionAsComplimentaryDataStep**: Captures teleoperator actions for logging
+2. **AddTeleopEventsAsInfoStep**: Records intervention events and episode control signals
+3. **InterventionActionProcessorStep**: Handles human interventions and episode termination
+4. **Inverse Kinematics Pipeline** (when enabled):
+ - **MapDeltaActionToRobotActionStep**: Converts delta actions to robot action format
+ - **EEReferenceAndDelta**: Computes end-effector reference and delta movements
+ - **EEBoundsAndSafety**: Enforces workspace safety bounds
+ - **InverseKinematicsEEToJoints**: Converts end-effector actions to joint targets
+ - **GripperVelocityToJoint**: Handles gripper control commands
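+
+Conceptually, both pipelines are ordered lists of steps where the output of one step feeds the next. The sketch below illustrates that chaining idea with simplified stand-in steps; it is not the actual LeRobot processor API.
+
+```python
+from typing import Any, Callable
+
+Transition = dict[str, Any]  # e.g. {"observation": {...}, "info": {...}}
+
+
+def add_batch_dim(transition: Transition) -> Transition:
+    # Stand-in for AddBatchDimensionProcessorStep: wrap observation entries in a list
+    transition["observation"] = {k: [v] for k, v in transition["observation"].items()}
+    return transition
+
+
+def to_device(transition: Transition) -> Transition:
+    # Stand-in for DeviceProcessorStep: record the target device (no real tensors here)
+    transition["info"]["device"] = "cpu"
+    return transition
+
+
+def run_pipeline(steps: list[Callable[[Transition], Transition]], transition: Transition) -> Transition:
+    # Each step receives the output of the previous one, in order
+    for step in steps:
+        transition = step(transition)
+    return transition
+
+
+env_processor_steps = [add_batch_dim, to_device]
+print(run_pipeline(env_processor_steps, {"observation": {"state": [0.1, 0.2]}, "info": {}}))
+```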
+
+#### Configuration Examples
+
+**Basic Observation Processing**:
+
+```json
+{
+ "env": {
+ "processor": {
+ "observation": {
+ "add_joint_velocity_to_observation": true,
+ "add_current_to_observation": false,
+ "display_cameras": false
+ }
+ }
+ }
+}
+```
+
+**Image Processing**:
+
+```json
+{
+ "env": {
+ "processor": {
+ "image_preprocessing": {
+ "crop_params_dict": {
+ "observation.images.front": [180, 250, 120, 150],
+ "observation.images.side": [180, 207, 180, 200]
+ },
+ "resize_size": [128, 128]
+ }
+ }
+ }
+}
+```
+
+**Inverse Kinematics Setup**:
+
+```json
+{
+ "env": {
+ "processor": {
+ "inverse_kinematics": {
+ "urdf_path": "path/to/robot.urdf",
+ "target_frame_name": "end_effector",
+ "end_effector_bounds": {
+ "min": [0.16, -0.08, 0.03],
+ "max": [0.24, 0.2, 0.1]
+ },
+ "end_effector_step_sizes": {
+ "x": 0.02,
+ "y": 0.02,
+ "z": 0.02
+ }
+ }
+ }
+ }
+}
+```
+
+### Advanced Observation Processing
+
+The HIL-SERL framework supports additional observation processing features that can improve policy learning:
+
+#### Joint Velocity Processing
+
+Enable joint velocity estimation to provide the policy with motion information:
+
+```json
+{
+ "env": {
+ "processor": {
+ "observation": {
+ "add_joint_velocity_to_observation": true
+ }
+ }
+ }
+}
+```
+
+This processor:
+
+- Estimates joint velocities using finite differences between consecutive joint position readings (see the small numerical sketch after this list)
+- Adds velocity information to the observation state vector
+- Useful for policies that need motion awareness for dynamic tasks
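+
+To make the finite-difference estimate concrete, here is a small numerical sketch (the positions, units, and control rate are made-up values for illustration only):
+
+```python
+import numpy as np
+
+# Made-up example: joint positions sampled at fps = 10, i.e. dt = 0.1 s
+dt = 1.0 / 10
+prev_pos = np.array([10.0, 20.0, 30.0])  # joint positions at step t-1
+curr_pos = np.array([10.5, 19.0, 30.2])  # joint positions at step t
+
+joint_vel = (curr_pos - prev_pos) / dt         # finite-difference velocity estimate
+state = np.concatenate([curr_pos, joint_vel])  # velocities appended to the state vector
+print(joint_vel)  # approximately [5., -10., 2.]
+print(state)
+```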
+
+#### Motor Current Processing
+
+Monitor motor currents to detect contact forces and load conditions:
+
+```json
+{
+ "env": {
+ "processor": {
+ "observation": {
+ "add_current_to_observation": true
+ }
+ }
+ }
+}
+```
+
+This processor:
+
+- Reads motor current values from the robot's control system
+- Adds current measurements to the observation state vector
+- Helps detect contact events, object weights, and mechanical resistance
+- Useful for contact-rich manipulation tasks
+
+#### Combined Observation Processing
+
+You can enable multiple observation processing features simultaneously:
+
+```json
+{
+ "env": {
+ "processor": {
+ "observation": {
+ "add_joint_velocity_to_observation": true,
+ "add_current_to_observation": true,
+ "display_cameras": false
+ }
+ }
+ }
+}
+```
+
+**Note**: Enabling additional observation features increases the state space dimensionality, which may require adjusting your policy network architecture and potentially collecting more training data.
+
+### Finding Robot Workspace Bounds
+
+Before collecting demonstrations, you need to determine the appropriate operational bounds for your robot.
+
+This helps simplify the problem of learning on the real robot in two ways: 1) by limiting the robot's operational space to a specific region that solves the task and avoids unnecessary or unsafe exploration, and 2) by allowing training in end-effector space rather than joint space. Empirically, learning in joint space for reinforcement learning in manipulation is often a harder problem - some tasks are nearly impossible to learn in joint space but become learnable when the action space is transformed to end-effector coordinates.
+
+**Using lerobot-find-joint-limits**
+
+This script helps you find the safe operational bounds for your robot's end-effector. Given that you have a follower and leader arm, you can use the script to find the bounds for the follower arm that will be applied during training.
+Bounding the action space reduces redundant exploration by the agent and helps guarantee safety.
+
+```bash
+lerobot-find-joint-limits \
+ --robot.type=so100_follower \
+ --robot.port=/dev/tty.usbmodem58760431541 \
+ --robot.id=black \
+ --teleop.type=so100_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \
+ --teleop.id=blue
+```
+
+**Workflow**
+
+1. Run the script and move the robot through the space that solves the task
+2. The script will record the minimum and maximum end-effector positions and joint angles and print them to the console, for example:
+ ```
+ Max ee position [0.2417 0.2012 0.1027]
+ Min ee position [0.1663 -0.0823 0.0336]
+ Max joint positions [-20.0, -20.0, -20.0, -20.0, -20.0, -20.0]
+ Min joint positions [50.0, 50.0, 50.0, 50.0, 50.0, 50.0]
+ ```
+3. Use these values in the configuration of your teleoperation device (TeleoperatorConfig) under the `end_effector_bounds` field
+
+**Example Configuration**
+
+```json
+"end_effector_bounds": {
+ "max": [0.24, 0.20, 0.10],
+ "min": [0.16, -0.08, 0.03]
+}
+```
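+
+As an illustration of what these bounds do, the sketch below clamps a commanded end-effector target to the box above; this mirrors the safety behavior conceptually, while the actual enforcement is handled by the processing pipeline.
+
+```python
+import numpy as np
+
+# Bounds taken from the example configuration above
+ee_min = np.array([0.16, -0.08, 0.03])
+ee_max = np.array([0.24, 0.20, 0.10])
+
+commanded = np.array([0.30, 0.00, 0.02])  # target that leaves the safe workspace
+safe_target = np.clip(commanded, ee_min, ee_max)
+print(safe_target)  # [0.24 0.   0.03]
+```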
+
+### Collecting Demonstrations
+
+With the bounds defined, you can safely collect demonstrations for training. Training RL with an off-policy algorithm allows us to use previously collected offline datasets to improve the efficiency of the learning process.
+
+**Setting Up Record Mode**
+
+Create a configuration file for recording demonstrations (or edit an existing one like [env_config.json](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/env_config.json)):
+
+1. Set `mode` to `"record"` at the root level
+2. Specify a unique `repo_id` for your dataset in the `dataset` section (e.g., "username/task_name")
+3. Set `num_episodes_to_record` in the `dataset` section to the number of demonstrations you want to collect
+4. Set `env.processor.image_preprocessing.crop_params_dict` to `{}` initially (we'll determine crops later)
+5. Configure `env.robot`, `env.teleop`, and other hardware settings in the `env` section
+
+Example configuration section:
+
+```json
+{
+ "env": {
+ "type": "gym_manipulator",
+ "name": "real_robot",
+ "fps": 10,
+ "processor": {
+ "control_mode": "gamepad",
+ "observation": {
+ "display_cameras": false
+ },
+ "image_preprocessing": {
+ "crop_params_dict": {},
+ "resize_size": [128, 128]
+ },
+ "gripper": {
+ "use_gripper": true,
+ "gripper_penalty": 0.0
+ },
+ "reset": {
+ "reset_time_s": 5.0,
+ "control_time_s": 20.0
+ }
+ },
+ "robot": {
+ // ... robot configuration ...
+ },
+ "teleop": {
+ // ... teleoperator configuration ...
+ }
+ },
+ "dataset": {
+ "repo_id": "username/pick_lift_cube",
+ "root": null,
+ "task": "pick_and_lift",
+ "num_episodes_to_record": 15,
+ "replay_episode": 0,
+ "push_to_hub": true
+ },
+ "mode": "record",
+ "device": "cpu"
+}
+```
+
+### Using a Teleoperation Device
+
+Along with your robot, you will need a teleoperation device to control it, both to collect datasets for your task and to perform interventions during online training.
+We support using a gamepad, a keyboard, or the robot's leader arm.
+
+HIL-Serl learns actions in the end-effector space of the robot. Therefore, the teleoperation will control the end-effector's x,y,z displacements.
+
+For that we need to define a version of the robot that takes actions in the end-effector space. Check the robot class `SO100FollowerEndEffector` and its configuration `SO100FollowerEndEffectorConfig` for the default parameters related to the end-effector space.
+
+
+```python
+class SO100FollowerEndEffectorConfig(SO100FollowerConfig):
+ """Configuration for the SO100FollowerEndEffector robot."""
+
+ # Default bounds for the end-effector position (in meters)
+ end_effector_bounds: dict[str, list[float]] = field( # bounds for the end-effector in x,y,z direction
+ default_factory=lambda: {
+ "min": [-1.0, -1.0, -1.0], # min x, y, z
+ "max": [1.0, 1.0, 1.0], # max x, y, z
+ }
+ )
+
+ max_gripper_pos: float = 50 # maximum gripper position that the gripper will be open at
+
+ end_effector_step_sizes: dict[str, float] = field( # maximum step size for the end-effector in x,y,z direction
+ default_factory=lambda: {
+ "x": 0.02,
+ "y": 0.02,
+ "z": 0.02,
+ }
+ )
+```
+
+
+The `Teleoperator` defines the teleoperation device. You can check the list of available teleoperators in `lerobot/teleoperators`.
+
+**Setting up the Gamepad**
+
+The gamepad provides a very convenient way to control the robot and the episode state.
+
+To set up the gamepad, you need to set the `control_mode` to `"gamepad"` and define the `teleop` section in the configuration file.
+
+```json
+{
+ "env": {
+ "teleop": {
+ "type": "gamepad",
+ "use_gripper": true
+ },
+ "processor": {
+ "control_mode": "gamepad",
+ "gripper": {
+ "use_gripper": true
+ }
+ }
+ }
+}
+```
+
+Figure: Gamepad button mapping for robot control and episode management.
+
+**Setting up the SO101 leader**
+
+The SO101 leader arm has reduced gearing that allows it to move and track the follower arm during exploration, so taking over is much smoother than with the gearless SO100.
+
+To set up the SO101 leader, you need to set the `control_mode` to `"leader"` and define the `teleop` section in the configuration file.
+
+```json
+{
+ "env": {
+ "teleop": {
+ "type": "so101_leader",
+ "port": "/dev/tty.usbmodem585A0077921",
+ "use_degrees": true
+ },
+ "processor": {
+ "control_mode": "leader",
+ "gripper": {
+ "use_gripper": true
+ }
+ }
+ }
+}
+```
+
+To annotate the success or failure of an episode, **you will need** a keyboard: press `s` for success and `esc` for failure.
+During the online training, press `space` to take over the policy and `space` again to give the control back to the policy.
+
+
+Video: SO101 leader teleoperation
+
+
+
+
+
+
+SO101 leader teleoperation example: the leader tracks the follower; press `space` to intervene
+
+
+**Recording Demonstrations**
+
+Start the recording process; an example config file can be found [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/env_config_so100.json):
+
+```bash
+python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config_so100.json
+```
+
+During recording:
+
+1. The robot resets to the initial position defined by `env.processor.reset.fixed_reset_joint_positions` in the configuration file (see the sketch after this list)
+2. Complete the task successfully
+3. The episode ends with a reward of 1 when you press the "success" button
+4. If the time limit is reached, or the fail button is pressed, the episode ends with a reward of 0
+5. You can rerecord an episode by pressing the "rerecord" button
+6. The process automatically continues to the next episode
+7. After recording all episodes, the dataset is pushed to the Hugging Face Hub (optional) and saved locally
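+
+As a reference for step 1, a minimal sketch of the `reset` section with fixed reset joint positions could look like the following (the joint values are placeholders; use values appropriate for your robot):
+
+```json
+{
+  "env": {
+    "processor": {
+      "reset": {
+        "reset_time_s": 5.0,
+        "control_time_s": 20.0,
+        "fixed_reset_joint_positions": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
+      }
+    }
+  }
+}
+```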
+
+### Processing the Dataset
+
+After collecting demonstrations, process them to determine optimal camera crops.
+Reinforcement learning is sensitive to background distractions, so it is important to crop the images to the relevant workspace area.
+
+Visual RL algorithms learn directly from pixel inputs, making them vulnerable to irrelevant visual information. Background elements like changing lighting, shadows, people moving, or objects outside the workspace can confuse the learning process. Good ROI selection should:
+
+- Include only the essential workspace where the task happens
+- Capture the robot's end-effector and all objects involved in the task
+- Exclude unnecessary background elements and distractions
+
+Note: If you already know the crop parameters, you can skip this step and just set the `crop_params_dict` in the configuration file during recording.
+
+**Determining Crop Parameters**
+
+Use the `crop_dataset_roi.py` script to interactively select regions of interest in your camera images:
+
+```bash
+python -m lerobot.rl.crop_dataset_roi --repo-id username/pick_lift_cube
+```
+
+1. For each camera view, the script will display the first frame
+2. Draw a rectangle around the relevant workspace area
+3. Press 'c' to confirm the selection
+4. Repeat for all camera views
+5. The script outputs cropping parameters and creates a new cropped dataset
+
+Example output:
+
+```
+Selected Rectangular Regions of Interest (top, left, height, width):
+observation.images.side: [180, 207, 180, 200]
+observation.images.front: [180, 250, 120, 150]
+```
+
+
+
+
+
+
+ Interactive cropping tool for selecting regions of interest
+
+
+**Updating Configuration**
+
+Add these crop parameters to your training configuration:
+
+```json
+{
+ "env": {
+ "processor": {
+ "image_preprocessing": {
+ "crop_params_dict": {
+ "observation.images.side": [180, 207, 180, 200],
+ "observation.images.front": [180, 250, 120, 150]
+ },
+ "resize_size": [128, 128]
+ }
+ }
+ }
+}
+```
+
+**Recommended image resolution**
+
+Most vision-based policies have been validated on square inputs of either **128×128** (default) or **64×64** pixels. We therefore advise setting the resize_size parameter to [128, 128] – or [64, 64] if you need to save GPU memory and bandwidth. Other resolutions are possible but have not been extensively tested.
+
+### Training a Reward Classifier
+
+The reward classifier plays an important role in the HIL-SERL workflow by automating reward assignment and episode success detection. Instead of manually defining reward functions or relying on human feedback at every timestep, the reward classifier learns to predict success or failure from visual observations. This enables the RL algorithm to learn efficiently by providing consistent, automated reward signals based on the robot's camera inputs.
+
+This guide explains how to train a reward classifier for LeRobot's human-in-the-loop reinforcement learning implementation. A reward classifier learns to predict the reward of a given state, which can then be used in an RL setup to train a policy.
+
+**Note**: Training a reward classifier is optional. You can start the first round of RL experiments by annotating the success manually with your gamepad or keyboard device.
+
+The reward classifier implementation in `modeling_classifier.py` uses a pretrained vision model to process the images. It can output either a single value for binary rewards to predict success/fail cases or multiple values for multi-class settings.
+
+**Collecting a Dataset for the reward classifier**
+
+Before training, you need to collect a dataset with labeled examples. The `record_dataset` function in `gym_manipulator.py` collects a dataset of observations, actions, and rewards.
+
+To collect a dataset, you need to modify some parameters in the environment configuration, which is based on `HILSerlRobotEnvConfig`.
+
+```bash
+python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/reward_classifier_train_config.json
+```
+
+**Key Parameters for Data Collection**
+
+- **mode**: set it to `"record"` to collect a dataset (at root level)
+- **dataset.repo_id**: `"hf_username/dataset_name"`, name of the dataset and repo on the hub
+- **dataset.num_episodes_to_record**: Number of episodes to record
+- **env.processor.reset.terminate_on_success**: Whether to automatically terminate episodes when success is detected (default: `true`)
+- **env.fps**: Number of frames per second to record
+- **dataset.push_to_hub**: Whether to push the dataset to the hub
+
+The `env.processor.reset.terminate_on_success` parameter allows you to control episode termination behavior. When set to `false`, episodes will continue even after success is detected, allowing you to collect more positive examples with the reward=1 label. This is crucial for training reward classifiers as it provides more success state examples in your dataset. When set to `true` (default), episodes terminate immediately upon success detection.
+
+**Important**: For reward classifier training, set `terminate_on_success: false` to collect sufficient positive examples. For regular HIL-SERL training, keep it as `true` to enable automatic episode termination when the task is completed successfully.
+
+Example configuration section for data collection:
+
+```json
+{
+ "env": {
+ "type": "gym_manipulator",
+ "name": "real_robot",
+ "fps": 10,
+ "processor": {
+ "reset": {
+ "reset_time_s": 5.0,
+ "control_time_s": 20.0,
+ "terminate_on_success": false
+ },
+ "gripper": {
+ "use_gripper": true
+ }
+ },
+ "robot": {
+ // ... robot configuration ...
+ },
+ "teleop": {
+ // ... teleoperator configuration ...
+ }
+ },
+ "dataset": {
+ "repo_id": "hf_username/dataset_name",
+ "dataset_root": "data/your_dataset",
+ "task": "reward_classifier_task",
+ "num_episodes_to_record": 20,
+ "replay_episode": null,
+ "push_to_hub": true
+ },
+ "mode": "record",
+ "device": "cpu"
+}
+```
+
+**Reward Classifier Configuration**
+
+The reward classifier is configured using `configuration_classifier.py`. Here are the key parameters:
+
+- **model_name**: Base model architecture (e.g., we mainly use `"helper2424/resnet10"`)
+- **model_type**: `"cnn"` or `"transformer"`
+- **num_cameras**: Number of camera inputs
+- **num_classes**: Number of output classes (typically 2 for binary success/failure)
+- **hidden_dim**: Size of hidden representation
+- **dropout_rate**: Regularization parameter
+- **learning_rate**: Learning rate for optimizer
+
+Example configuration for training the [reward classifier](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/reward_classifier_train_config.json):
+
+```json
+{
+ "policy": {
+ "type": "reward_classifier",
+ "model_name": "helper2424/resnet10",
+ "model_type": "cnn",
+ "num_cameras": 2,
+ "num_classes": 2,
+ "hidden_dim": 256,
+ "dropout_rate": 0.1,
+ "learning_rate": 1e-4,
+ "device": "cuda",
+ "use_amp": true,
+ "input_features": {
+ "observation.images.front": {
+ "type": "VISUAL",
+ "shape": [3, 128, 128]
+ },
+ "observation.images.side": {
+ "type": "VISUAL",
+ "shape": [3, 128, 128]
+ }
+ }
+ }
+}
+```
+
+**Training the Classifier**
+
+To train the classifier, run the `lerobot-train` script with your configuration:
+
+```bash
+lerobot-train --config_path path/to/reward_classifier_train_config.json
+```
+
+**Deploying and Testing the Model**
+
+To use your trained reward classifier, configure the `HILSerlRobotEnvConfig` to use your model:
+
+
+```python
+config = GymManipulatorConfig(
+ env=HILSerlRobotEnvConfig(
+ processor=HILSerlProcessorConfig(
+ reward_classifier=RewardClassifierConfig(
+ pretrained_path="path_to_your_pretrained_trained_model"
+ )
+ ),
+ # Other environment parameters
+ ),
+ dataset=DatasetConfig(...),
+ mode=None # For training
+)
+```
+
+
+Alternatively, set the argument in the JSON config file:
+
+```json
+{
+ "env": {
+ "processor": {
+ "reward_classifier": {
+ "pretrained_path": "path_to_your_pretrained_model",
+ "success_threshold": 0.7,
+ "success_reward": 1.0
+ },
+ "reset": {
+ "terminate_on_success": true
+ }
+ }
+ }
+}
+```
+
+Run `gym_manipulator.py` to test the model.
+
+```bash
+python -m lerobot.rl.gym_manipulator --config_path path/to/env_config.json
+```
+
+The reward classifier will automatically provide rewards based on the visual input from the robot's cameras.
+
+**Example Workflow for training the reward classifier**
+
+1. **Create the configuration files**:
+ Create the necessary json configuration files for the reward classifier and the environment. Check the examples [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/reward_classifier/config.json).
+
+2. **Collect a dataset**:
+
+ ```bash
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
+ ```
+
+3. **Train the classifier**:
+
+ ```bash
+ lerobot-train --config_path src/lerobot/configs/reward_classifier_train_config.json
+ ```
+
+4. **Test the classifier**:
+ ```bash
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
+ ```
+
+### Training with Actor-Learner
+
+The LeRobot system uses a distributed actor-learner architecture for training. This architecture decouples robot interactions from the learning process, allowing them to run concurrently without blocking each other. The actor server handles robot observations and actions, sending interaction data to the learner server. The learner server performs gradient descent and periodically updates the actor's policy weights. You will need to start two processes: a learner and an actor.
+
+**Configuration Setup**
+
+Create a training configuration file (example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/train_config.json)). The training config is based on the main `TrainRLServerPipelineConfig` class in `lerobot/configs/train.py`. A rough sketch of the key sections is shown after the checklist below.
+
+1. Configure the policy settings (`type="sac"`, `device`, etc.)
+2. Set `dataset` to your cropped dataset
+3. Configure environment settings with crop parameters
+4. Check the other parameters related to SAC in [configuration_sac.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/sac/configuration_sac.py#L79).
+5. Verify that the `policy` config is correct with the right `input_features` and `output_features` for your task.
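+
+As a rough sketch of the key sections (the field names are inferred from the examples in this tutorial and the values are illustrative; the linked example config is authoritative):
+
+```json
+{
+  "policy": {
+    "type": "sac",
+    "device": "cuda",
+    "storage_device": "cpu",
+    // ... input_features, output_features and other SAC parameters ...
+  },
+  "dataset": {
+    "repo_id": "username/pick_lift_cube_cropped"
+  },
+  "env": {
+    "processor": {
+      "image_preprocessing": {
+        "crop_params_dict": {
+          "observation.images.side": [180, 207, 180, 200],
+          "observation.images.front": [180, 250, 120, 150]
+        },
+        "resize_size": [128, 128]
+      }
+    }
+  },
+  "wandb": {
+    "enable": true
+  }
+}
+```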
+
+**Starting the Learner**
+
+First, start the learner server process:
+
+```bash
+python -m lerobot.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
+```
+
+The learner:
+
+- Initializes the policy network
+- Prepares replay buffers
+- Opens a `gRPC` server to communicate with actors
+- Processes transitions and updates the policy
+
+**Starting the Actor**
+
+In a separate terminal, start the actor process with the same configuration:
+
+```bash
+python -m lerobot.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
+```
+
+The actor:
+
+- Connects to the learner via `gRPC`
+- Initializes the environment
+- Executes rollouts of the policy to collect experience
+- Sends transitions to the learner
+- Receives updated policy parameters
+
+**Training Flow**
+
+The training proceeds automatically:
+
+1. The actor executes the policy in the environment
+2. Transitions are collected and sent to the learner
+3. The learner updates the policy based on these transitions
+4. Updated policy parameters are sent back to the actor
+5. The process continues until the specified step limit is reached
+
+**Human in the Loop**
+
+- The key to learning efficiently is to provide human interventions that give corrective feedback and complete the task, which aids the policy's learning and exploration.
+- To perform human interventions, you can press the upper right trigger button on the gamepad (or the `space` key on the keyboard). This will pause the policy actions and allow you to take over.
+- In a successful experiment, the human intervenes frequently at the start and then gradually reduces the number of interventions as the policy improves. You can monitor the intervention rate in the `wandb` dashboard.
+
+
+
+
+
+
+
+ Example showing how human interventions help guide policy learning over time
+
+
+
+- The figure plots the episodic reward against the interaction step and shows the effect of human interventions on policy learning.
+- The orange curve is an experiment without any human interventions, while the pink and blue curves are experiments with human interventions.
+- The number of steps before the policy starts achieving the maximum reward is cut by a quarter when human interventions are present.
+
+**Monitoring and Debugging**
+
+If you have `wandb.enable` set to `true` in your configuration, you can monitor training progress in real-time through the [Weights & Biases](https://wandb.ai/site/) dashboard.
+
+### Guide to Human Interventions
+
+The learning process is very sensitive to the intervention strategy. It will take a few runs to understand how to intervene effectively. Some tips and hints:
+
+- Allow the policy to explore for a few episodes at the start of training.
+- Avoid intervening for long periods of time. Try to intervene in short bursts to correct the robot's behaviour when it goes off track.
+- Once the policy starts achieving the task, even if it's not perfect, you can limit your interventions to simple, quick actions such as a grasping command.
+
+The ideal behaviour is that your intervention rate should drop gradually during training as shown in the figure below.
+
+
+
+
+
+
+
+ Plot of the intervention rate during a training run on a pick and lift cube
+ task
+
+
+
+### Key hyperparameters to tune
+
+Some configuration values have a disproportionate impact on training stability and speed (see the config sketch after this list):
+
+- **`temperature_init`** (`policy.temperature_init`) – initial entropy temperature in SAC. Higher values encourage more exploration; lower values make the policy more deterministic early on. A good starting point is `1e-2`. We observed that setting it too high can make human interventions ineffective and slow down learning.
+- **`policy_parameters_push_frequency`** (`policy.actor_learner_config.policy_parameters_push_frequency`) – interval in _seconds_ between two weight pushes from the learner to the actor. The default is `4 s`. Decrease to **1-2 s** to provide fresher weights (at the cost of more network traffic); increase only if your connection is slow, as this will reduce sample efficiency.
+- **`storage_device`** (`policy.storage_device`) – device on which the learner keeps the policy parameters. If you have spare GPU memory, set this to `"cuda"` (instead of the default `"cpu"`). Keeping the weights on-GPU removes CPU→GPU transfer overhead and can significantly increase the number of learner updates per second.
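+
+As a sketch of where these fields sit in the training config, using the values suggested above:
+
+```json
+{
+  "policy": {
+    "type": "sac",
+    "temperature_init": 0.01,
+    "storage_device": "cuda",
+    "actor_learner_config": {
+      "policy_parameters_push_frequency": 2
+    }
+  }
+}
+```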
+
+Congrats 🎉, you have finished this tutorial!
+
+> [!TIP]
+> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
+
+Paper citation:
+
+```
+@article{luo2024precise,
+ title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
+ author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
+ journal={arXiv preprint arXiv:2410.21845},
+ year={2024}
+}
+```
diff --git a/docs/source/hilserl_sim.mdx b/docs/source/hilserl_sim.mdx
new file mode 100644
index 00000000..e2dddd9e
--- /dev/null
+++ b/docs/source/hilserl_sim.mdx
@@ -0,0 +1,154 @@
+# Train RL in Simulation
+
+This guide explains how to use the `gym_hil` simulation environments as an alternative to real robots when working with the LeRobot framework for Human-In-the-Loop (HIL) reinforcement learning.
+
+`gym_hil` is a package that provides Gymnasium-compatible simulation environments specifically designed for Human-In-the-Loop reinforcement learning. These environments allow you to:
+
+- Train policies in simulation to test the RL stack before training on real robots
+- Collect demonstrations in sim using external devices like gamepads or keyboards
+- Perform human interventions during policy learning
+
+Currently, the main environment is a Franka Panda robot simulation based on MuJoCo, with tasks like picking up a cube.
+
+## Installation
+
+First, install the `gym_hil` package within the LeRobot environment:
+
+```bash
+pip install -e ".[hilserl]"
+```
+
+## What do I need?
+
+- A gamepad or keyboard to control the robot
+- An Nvidia GPU
+
+## Configuration
+
+To use `gym_hil` with LeRobot, you need to create a configuration file. An example is provided [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/gym_hil/env_config.json). Key configuration sections include:
+
+### Environment Type and Task
+
+```json
+{
+ "env": {
+ "type": "gym_manipulator",
+ "name": "gym_hil",
+ "task": "PandaPickCubeGamepad-v0",
+ "fps": 10
+ },
+ "device": "cuda"
+}
+```
+
+Available tasks (a usage sketch follows this list):
+
+- `PandaPickCubeBase-v0`: Basic environment
+- `PandaPickCubeGamepad-v0`: With gamepad control
+- `PandaPickCubeKeyboard-v0`: With keyboard control
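+
+If you want to poke at one of these environments directly, outside the LeRobot config pipeline, a minimal Gymnasium sketch could look like the snippet below. The `gym_hil/` id prefix and the import side effect that registers the environments are assumptions; check the `gym_hil` package for the exact registration names.
+
+```python
+import gymnasium as gym
+import gym_hil  # noqa: F401  # assumed to register the environments on import
+
+# Assumed id format; verify against gym_hil's own examples.
+env = gym.make("gym_hil/PandaPickCubeBase-v0")
+
+obs, info = env.reset()
+for _ in range(100):
+    action = env.action_space.sample()  # random actions, just to exercise the env
+    obs, reward, terminated, truncated, info = env.step(action)
+    if terminated or truncated:
+        obs, info = env.reset()
+env.close()
+```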
+
+### Processor Configuration
+
+```json
+{
+ "env": {
+ "processor": {
+ "control_mode": "gamepad",
+ "gripper": {
+ "use_gripper": true,
+ "gripper_penalty": -0.02
+ },
+ "reset": {
+ "control_time_s": 15.0,
+ "fixed_reset_joint_positions": [
+ 0.0, 0.195, 0.0, -2.43, 0.0, 2.62, 0.785
+ ]
+ },
+ "inverse_kinematics": {
+ "end_effector_step_sizes": {
+ "x": 0.025,
+ "y": 0.025,
+ "z": 0.025
+ }
+ }
+ }
+ }
+}
+```
+
+Important parameters:
+
+- `gripper.gripper_penalty`: Penalty for excessive gripper movement
+- `gripper.use_gripper`: Whether to enable gripper control
+- `inverse_kinematics.end_effector_step_sizes`: Size of the steps in the x,y,z axes of the end-effector
+- `control_mode`: Set to `"gamepad"` to use a gamepad controller
+
+## Running with HIL RL of LeRobot
+
+### Basic Usage
+
+To run the environment, set the top-level `mode` field to `null`:
+
+```bash
+python -m lerobot.rl.gym_manipulator --config_path path/to/gym_hil_env.json
+```
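+
+For reference, "setting mode to null" refers to the top-level `mode` field of the JSON config, e.g.:
+
+```json
+{
+  "env": {
+    // ... environment configuration as above ...
+  },
+  "mode": null,
+  "device": "cuda"
+}
+```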
+
+### Recording a Dataset
+
+To collect a dataset, set `mode` to `"record"` and define the `repo_id` and the number of episodes to record:
+
+```json
+{
+ "env": {
+ "type": "gym_manipulator",
+ "name": "gym_hil",
+ "task": "PandaPickCubeGamepad-v0"
+ },
+ "dataset": {
+ "repo_id": "username/sim_dataset",
+ "root": null,
+ "task": "pick_cube",
+ "num_episodes_to_record": 10,
+ "replay_episode": null,
+ "push_to_hub": true
+ },
+ "mode": "record"
+}
+```
+
+```bash
+python -m lerobot.rl.gym_manipulator --config_path path/to/gym_hil_env.json
+```
+
+### Training a Policy
+
+To train a policy, check out the configuration example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/gym_hil/train_config.json) and run the actor and learner servers:
+
+```bash
+python -m lerobot.rl.actor --config_path path/to/train_gym_hil_env.json
+```
+
+In a different terminal, run the learner server:
+
+```bash
+python -m lerobot.rl.learner --config_path path/to/train_gym_hil_env.json
+```
+
+The simulation environment provides a safe and repeatable way to develop and test your Human-In-the-Loop reinforcement learning components before deploying to real robots.
+
+Congrats 🎉, you have finished this tutorial!
+
+> [!TIP]
+> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
+
+Paper citation:
+
+```
+@article{luo2024precise,
+ title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
+ author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
+ journal={arXiv preprint arXiv:2410.21845},
+ year={2024}
+}
+```
diff --git a/docs/source/hope_jr.mdx b/docs/source/hope_jr.mdx
new file mode 100644
index 00000000..856febb9
--- /dev/null
+++ b/docs/source/hope_jr.mdx
@@ -0,0 +1,277 @@
+# HopeJR
+
+## Prerequisites
+
+- [Hardware Setup](https://github.com/TheRobotStudio/HOPEJr)
+
+## Install LeRobot
+
+Follow the [installation instructions](https://github.com/huggingface/lerobot#installation) to install LeRobot.
+
+Install LeRobot with HopeJR dependencies:
+
+```bash
+pip install -e ".[hopejr]"
+```
+
+## Device Configuration
+
+Before starting calibration and operation, you need to identify the USB ports for each HopeJR component. Run this command to find the USB ports for the arm, hand, glove, and exoskeleton:
+
+```bash
+lerobot-find-port
+```
+
+This will display the available USB ports and their associated devices. Make note of the port paths (e.g., `/dev/tty.usbmodem58760433331`, `/dev/tty.usbmodem11301`) as you'll need to specify them in the `--robot.port` and `--teleop.port` parameters when recording data, replaying episodes, or running teleoperation scripts.
+
+## Step 1: Calibration
+
+Before performing teleoperation, HopeJR's limbs need to be calibrated. Calibration files will be saved in `~/.cache/huggingface/lerobot/calibration`.
+
+### 1.1 Calibrate Robot Hand
+
+```bash
+lerobot-calibrate \
+ --robot.type=hope_jr_hand \
+ --robot.port=/dev/tty.usbmodem58760432281 \
+ --robot.id=blue \
+ --robot.side=right
+```
+
+When running the calibration script, a calibration GUI will pop up. Finger joints are named as follows:
+
+**Thumb**:
+
+- **CMC**: base joint connecting thumb to hand
+- **MCP**: knuckle joint
+- **PIP**: first finger joint
+- **DIP**: fingertip joint
+
+**Index, Middle, Ring, and Pinky fingers**:
+
+- **Radial flexor**: Moves base of finger towards the thumb
+- **Ulnar flexor**: Moves base of finger towards the pinky
+- **PIP/DIP**: Flexes the proximal and distal phalanges of the finger
+
+Each one of these will need to be calibrated individually via the GUI.
+Note that ulnar and radial flexors should have ranges of the same size (but with different offsets) in order to get symmetric movement.
+
+
+
+
+
+Use the calibration interface to set the range boundaries for each joint as shown above.
+
+
+
+
+
+Once you have set the appropriate boundaries for all joints, click "Save" to save the calibration values to the motors.
+
+### 1.2 Calibrate Teleoperator Glove
+
+```bash
+lerobot-calibrate \
+ --teleop.type=homunculus_glove \
+ --teleop.port=/dev/tty.usbmodem11201 \
+ --teleop.id=red \
+ --teleop.side=right
+```
+
+Move each finger through its full range of motion, starting from the thumb.
+
+```
+Move thumb through its entire range of motion.
+Recording positions. Press ENTER to stop...
+
+-------------------------------------------
+NAME | MIN | POS | MAX
+thumb_cmc | 1790 | 1831 | 1853
+thumb_mcp | 1497 | 1514 | 1528
+thumb_pip | 1466 | 1496 | 1515
+thumb_dip | 1463 | 1484 | 1514
+```
+
+Continue with each finger:
+
+```
+Move middle through its entire range of motion.
+Recording positions. Press ENTER to stop...
+
+-------------------------------------------
+NAME | MIN | POS | MAX
+middle_mcp_abduction | 1598 | 1718 | 1820
+middle_mcp_flexion | 1512 | 1658 | 2136
+middle_dip | 1484 | 1500 | 1547
+```
+
+Once calibration is complete, the system will save the calibration to `/Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_glove/red.json`
+
+### 1.3 Calibrate Robot Arm
+
+```bash
+lerobot-calibrate \
+ --robot.type=hope_jr_arm \
+ --robot.port=/dev/tty.usbserial-1110 \
+ --robot.id=white
+```
+
+This will open a calibration GUI where you can set the range limits for each motor. The arm motions are organized as follows:
+
+- **Shoulder**: pitch, yaw, and roll
+- **Elbow**: flex
+- **Wrist**: pitch, yaw, and roll
+
+
+
+
+
+Use the calibration interface to set the range boundaries for each joint. Move each joint through its full range of motion and adjust the minimum and maximum values accordingly. Once you have set the appropriate boundaries for all joints, save the calibration.
+
+### 1.4 Calibrate Teleoperator Exoskeleton
+
+```bash
+lerobot-calibrate \
+ --teleop.type=homunculus_arm \
+ --teleop.port=/dev/tty.usbmodem11201 \
+ --teleop.id=black
+```
+
+The exoskeleton allows one to control the robot arm. During calibration, you'll be prompted to move all joints through their full range of motion:
+
+```
+Move all joints through their entire range of motion.
+Recording positions. Press ENTER to stop...
+
+-------------------------------------------
+-------------------------------------------
+NAME | MIN | POS | MAX
+shoulder_pitch | 586 | 736 | 895
+shoulder_yaw | 1257 | 1374 | 1390
+shoulder_roll | 449 | 1034 | 2564
+elbow_flex | 3023 | 3117 | 3134
+wrist_roll | 3073 | 3096 | 3147
+wrist_yaw | 2143 | 2171 | 2185
+wrist_pitch | 1975 | 1993 | 2074
+Calibration saved to /Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_arm/black.json
+```
+
+## Step 2: Teleoperation
+
+Due to global variable conflicts in the Feetech middleware, teleoperation for arm and hand must run in separate shell sessions:
+
+### Hand
+
+```bash
+lerobot-teleoperate \
+ --robot.type=hope_jr_hand \
+ --robot.port=/dev/tty.usbmodem58760432281 \
+ --robot.id=blue \
+ --robot.side=right \
+ --teleop.type=homunculus_glove \
+ --teleop.port=/dev/tty.usbmodem11201 \
+ --teleop.id=red \
+ --teleop.side=right \
+ --display_data=true \
+ --fps=30
+```
+
+### Arm
+
+```bash
+lerobot-teleoperate \
+ --robot.type=hope_jr_arm \
+ --robot.port=/dev/tty.usbserial-1110 \
+ --robot.id=white \
+ --teleop.type=homunculus_arm \
+ --teleop.port=/dev/tty.usbmodem11201 \
+ --teleop.id=black \
+ --display_data=true \
+ --fps=30
+```
+
+## Step 3: Record, Replay, Train
+
+Record, Replay and Train with HopeJR are still experimental.
+
+### Record
+
+This step records a dataset; an example can be found [here](https://huggingface.co/datasets/nepyope/hand_record_test_with_video_data/settings).
+
+```bash
+lerobot-record \
+ --robot.type=hope_jr_hand \
+ --robot.port=/dev/tty.usbmodem58760432281 \
+ --robot.id=right \
+ --robot.side=right \
+ --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
+ --teleop.type=homunculus_glove \
+ --teleop.port=/dev/tty.usbmodem1201 \
+ --teleop.id=right \
+ --teleop.side=right \
+ --dataset.repo_id=nepyope/hand_record_test_with_video_data \
+ --dataset.single_task="Hand recording test with video data" \
+ --dataset.num_episodes=1 \
+ --dataset.episode_time_s=5 \
+ --dataset.push_to_hub=true \
+ --dataset.private=true \
+ --display_data=true
+```
+
+### Replay
+
+```bash
+lerobot-replay \
+ --robot.type=hope_jr_hand \
+ --robot.port=/dev/tty.usbmodem58760432281 \
+ --robot.id=right \
+ --robot.side=right \
+ --dataset.repo_id=nepyope/hand_record_test_with_camera \
+ --dataset.episode=0
+```
+
+### Train
+
+```bash
+lerobot-train \
+ --dataset.repo_id=nepyope/hand_record_test_with_video_data \
+ --policy.type=act \
+ --output_dir=outputs/train/hopejr_hand \
+ --job_name=hopejr \
+ --policy.device=mps \
+ --wandb.enable=true \
+ --policy.repo_id=nepyope/hand_test_policy
+```
+
+### Evaluate
+
+An example training run can be viewed [here](https://wandb.ai/tino/lerobot/runs/rp0k8zvw?nw=nwusertino).
+
+```bash
+lerobot-record \
+ --robot.type=hope_jr_hand \
+ --robot.port=/dev/tty.usbmodem58760432281 \
+ --robot.id=right \
+ --robot.side=right \
+ --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
+ --display_data=false \
+ --dataset.repo_id=nepyope/eval_hopejr \
+ --dataset.single_task="Evaluate hopejr hand policy" \
+ --dataset.num_episodes=10 \
+ --policy.path=outputs/train/hopejr_hand/checkpoints/last/pretrained_model
+```
diff --git a/docs/source/il_robots.mdx b/docs/source/il_robots.mdx
new file mode 100644
index 00000000..000df2a1
--- /dev/null
+++ b/docs/source/il_robots.mdx
@@ -0,0 +1,603 @@
+# Imitation Learning on Real-World Robots
+
+This tutorial will explain how to train a neural network to control a real robot autonomously.
+
+**You'll learn:**
+
+1. How to record and visualize your dataset.
+2. How to train a policy using your data and prepare it for evaluation.
+3. How to evaluate your policy and visualize the results.
+
+By following these steps, you'll be able to replicate tasks such as picking up a Lego block and placing it in a bin with a high success rate, as shown in the video below.
+
+
+Video: pickup lego block task
+
+
+
+
+
+
+
+This tutorial isn’t tied to a specific robot: we walk you through the commands and API snippets you can adapt for any supported platform.
+
+During data collection, you’ll use a teleoperation device, such as a leader arm or a keyboard, to teleoperate the robot and record its motion trajectories.
+
+Once you’ve gathered enough trajectories, you’ll train a neural network to imitate these trajectories and deploy the trained model so your robot can perform the task autonomously.
+
+If you run into any issues at any point, jump into our [Discord community](https://discord.com/invite/s3KuuzsPFb) for support.
+
+## Set up and Calibrate
+
+If you haven't yet set up and calibrated your robot and teleop device, please do so by following the robot-specific tutorial.
+
+## Teleoperate
+
+In this example, we’ll demonstrate how to teleoperate the SO101 robot. For each command, we also provide a corresponding API example.
+
+Note that the `id` associated with a robot is used to store the calibration file. It's important to use the same `id` when teleoperating, recording, and evaluating when using the same setup.
+
+
+
+```bash
+lerobot-teleoperate \
+ --robot.type=so101_follower \
+ --robot.port=/dev/tty.usbmodem58760431541 \
+ --robot.id=my_awesome_follower_arm \
+ --teleop.type=so101_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \
+ --teleop.id=my_awesome_leader_arm
+```
+
+
+
+
+```python
+from lerobot.teleoperators.so101_leader import SO101LeaderConfig, SO101Leader
+from lerobot.robots.so101_follower import SO101FollowerConfig, SO101Follower
+
+robot_config = SO101FollowerConfig(
+ port="/dev/tty.usbmodem58760431541",
+ id="my_red_robot_arm",
+)
+
+teleop_config = SO101LeaderConfig(
+ port="/dev/tty.usbmodem58760431551",
+ id="my_blue_leader_arm",
+)
+
+robot = SO101Follower(robot_config)
+teleop_device = SO101Leader(teleop_config)
+robot.connect()
+teleop_device.connect()
+
+while True:
+ action = teleop_device.get_action()
+ robot.send_action(action)
+```
+
+
+
+
+
+The teleoperate command will automatically:
+
+1. Identify any missing calibrations and initiate the calibration procedure.
+2. Connect the robot and teleop device and start teleoperation.
+
+## Cameras
+
+To add cameras to your setup, follow this [Guide](./cameras#setup-cameras).
+
+## Teleoperate with cameras
+
+With `rerun`, you can teleoperate again while simultaneously visualizing the camera feeds and joint positions. In this example, we’re using the Koch arm.
+
+
+
+```bash
+lerobot-teleoperate \
+ --robot.type=koch_follower \
+ --robot.port=/dev/tty.usbmodem58760431541 \
+ --robot.id=my_awesome_follower_arm \
+ --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
+ --teleop.type=koch_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \
+ --teleop.id=my_awesome_leader_arm \
+ --display_data=true
+```
+
+
+
+
+```python
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.teleoperators.koch_leader import KochLeaderConfig, KochLeader
+from lerobot.robots.koch_follower import KochFollowerConfig, KochFollower
+
+camera_config = {
+ "front": OpenCVCameraConfig(index_or_path=0, width=1920, height=1080, fps=30)
+}
+
+robot_config = KochFollowerConfig(
+ port="/dev/tty.usbmodem585A0076841",
+ id="my_red_robot_arm",
+ cameras=camera_config
+)
+
+teleop_config = KochLeaderConfig(
+ port="/dev/tty.usbmodem58760431551",
+ id="my_blue_leader_arm",
+)
+
+robot = KochFollower(robot_config)
+teleop_device = KochLeader(teleop_config)
+robot.connect()
+teleop_device.connect()
+
+while True:
+ observation = robot.get_observation()
+ action = teleop_device.get_action()
+ robot.send_action(action)
+```
+
+
+
+
+
+## Record a dataset
+
+Once you're familiar with teleoperation, you can record your first dataset.
+
+We use the Hugging Face Hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
+
+Add your token to the CLI by running this command:
+
+```bash
+huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
+```
+
+Then store your Hugging Face repository name in a variable:
+
+```bash
+HF_USER=$(hf auth whoami | head -n 1)
+echo $HF_USER
+```
+
+Now you can record a dataset. To record 5 episodes and upload your dataset to the hub, adapt the code below for your robot and execute the command or API example.
+
+
+
+```bash
+lerobot-record \
+ --robot.type=so101_follower \
+ --robot.port=/dev/tty.usbmodem585A0076841 \
+ --robot.id=my_awesome_follower_arm \
+ --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
+ --teleop.type=so101_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \
+ --teleop.id=my_awesome_leader_arm \
+ --display_data=true \
+ --dataset.repo_id=${HF_USER}/record-test \
+ --dataset.num_episodes=5 \
+ --dataset.single_task="Grab the black cube"
+```
+
+
+
+
+```python
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.utils import hw_to_dataset_features
+from lerobot.robots.so100_follower import SO100Follower, SO100FollowerConfig
+from lerobot.teleoperators.so100_leader.config_so100_leader import SO100LeaderConfig
+from lerobot.teleoperators.so100_leader.so100_leader import SO100Leader
+from lerobot.utils.control_utils import init_keyboard_listener
+from lerobot.utils.utils import log_say
+from lerobot.utils.visualization_utils import init_rerun
+from lerobot.record import record_loop
+
+NUM_EPISODES = 5
+FPS = 30
+EPISODE_TIME_SEC = 60
+RESET_TIME_SEC = 10
+TASK_DESCRIPTION = "My task description"
+
+# Create the robot and teleoperator configurations
+camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
+robot_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm", cameras=camera_config
+)
+teleop_config = SO100LeaderConfig(port="/dev/tty.usbmodem585A0077581", id="my_awesome_leader_arm")
+
+# Initialize the robot and teleoperator
+robot = SO100Follower(robot_config)
+teleop = SO100Leader(teleop_config)
+
+# Configure the dataset features
+action_features = hw_to_dataset_features(robot.action_features, "action")
+obs_features = hw_to_dataset_features(robot.observation_features, "observation")
+dataset_features = {**action_features, **obs_features}
+
+# Create the dataset
+dataset = LeRobotDataset.create(
+ repo_id="/",
+ fps=FPS,
+ features=dataset_features,
+ robot_type=robot.name,
+ use_videos=True,
+ image_writer_threads=4,
+)
+
+# Initialize the keyboard listener and rerun visualization
+_, events = init_keyboard_listener()
+init_rerun(session_name="recording")
+
+# Connect the robot and teleoperator
+robot.connect()
+teleop.connect()
+
+episode_idx = 0
+while episode_idx < NUM_EPISODES and not events["stop_recording"]:
+ log_say(f"Recording episode {episode_idx + 1} of {NUM_EPISODES}")
+
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ teleop=teleop,
+ dataset=dataset,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ )
+
+ # Reset the environment if not stopping or re-recording
+ if not events["stop_recording"] and (episode_idx < NUM_EPISODES - 1 or events["rerecord_episode"]):
+ log_say("Reset the environment")
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ teleop=teleop,
+ control_time_s=RESET_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ )
+
+ if events["rerecord_episode"]:
+ log_say("Re-recording episode")
+ events["rerecord_episode"] = False
+ events["exit_early"] = False
+ dataset.clear_episode_buffer()
+ continue
+
+ dataset.save_episode()
+ episode_idx += 1
+
+# Clean up
+log_say("Stop recording")
+robot.disconnect()
+teleop.disconnect()
+dataset.push_to_hub()
+```
+
+
+
+
+
+#### Dataset upload
+
+Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. `https://huggingface.co/datasets/${HF_USER}/so101_test`), which you can obtain by running:
+
+```bash
+echo https://huggingface.co/datasets/${HF_USER}/so101_test
+```
+
+Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (for example, `tutorial`).
+
+You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
+
+You can also push your local dataset to the Hub manually, running:
+
+```bash
+huggingface-cli upload ${HF_USER}/record-test ~/.cache/huggingface/lerobot/{repo-id} --repo-type dataset
+```
+
+#### Record function
+
+The `record` function provides a suite of tools for capturing and managing data during robot operation:
+
+##### 1. Data Storage
+
+- Data is stored in the `LeRobotDataset` format and written to disk during recording.
+- By default, the dataset is pushed to your Hugging Face page after recording.
+ - To disable uploading, use `--dataset.push_to_hub=False`.
+
+##### 2. Checkpointing and Resuming
+
+- Checkpoints are automatically created during recording.
+- If an issue occurs, you can resume by re-running the same command with `--resume=true` (see the example after this list). When resuming a recording, `--dataset.num_episodes` must be set to the **number of additional episodes to record**, not to the targeted total number of episodes in the dataset!
+- To start recording from scratch, **manually delete** the dataset directory.
+
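+For instance, here is a sketch of resuming the earlier recording and adding 10 more episodes, reusing the same arguments as the original `lerobot-record` command:
+
+```bash
+lerobot-record \
+  --robot.type=so101_follower \
+  --robot.port=/dev/tty.usbmodem585A0076841 \
+  --robot.id=my_awesome_follower_arm \
+  --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
+  --teleop.type=so101_leader \
+  --teleop.port=/dev/tty.usbmodem58760431551 \
+  --teleop.id=my_awesome_leader_arm \
+  --dataset.repo_id=${HF_USER}/record-test \
+  --dataset.single_task="Grab the black cube" \
+  --dataset.num_episodes=10 \
+  --resume=true
+```
+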
+##### 3. Recording Parameters
+
+Set the flow of data recording using command-line arguments:
+
+- `--dataset.episode_time_s=60`
+ Duration of each data recording episode (default: **60 seconds**).
+- `--dataset.reset_time_s=60`
+ Duration for resetting the environment after each episode (default: **60 seconds**).
+- `--dataset.num_episodes=50`
+ Total number of episodes to record (default: **50**).
+
+##### 4. Keyboard Controls During Recording
+
+Control the data recording flow using keyboard shortcuts:
+
+- Press **Right Arrow (`→`)**: End the current episode (or the reset period) early and move on to the next.
+- Press **Left Arrow (`←`)**: Cancel the current episode and re-record it.
+- Press **Escape (`ESC`)**: Immediately stop the session, encode videos, and upload the dataset.
+
+#### Tips for gathering data
+
+Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible in the cameras. A good rule of thumb is that you should be able to do the task yourself by looking only at the camera images.
+
+In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
+
+Avoid adding too much variation too quickly, as it may hinder your results.
+
+If you want to dive deeper into this important topic, you can check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.
+
+#### Troubleshooting:
+
+- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
+
+## Visualize a dataset
+
+If you uploaded your dataset to the hub with `--dataset.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id, given by:
+
+```bash
+echo ${HF_USER}/so101_test
+```
+
+## Replay an episode
+
+A useful feature is the `replay` function, which allows you to replay any episode that you've recorded or episodes from any dataset out there. This function helps you test the repeatability of your robot's actions and assess transferability across robots of the same model.
+
+You can replay the first episode on your robot with either the command below or with the API example:
+
+
+
+```bash
+lerobot-replay \
+ --robot.type=so101_follower \
+ --robot.port=/dev/tty.usbmodem58760431541 \
+ --robot.id=my_awesome_follower_arm \
+ --dataset.repo_id=${HF_USER}/record-test \
+ --dataset.episode=0 # choose the episode you want to replay
+```
+
+
+
+
+```python
+import time
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.utils.robot_utils import busy_wait
+from lerobot.utils.utils import log_say
+
+episode_idx = 0
+
+robot_config = SO100FollowerConfig(port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm")
+
+robot = SO100Follower(robot_config)
+robot.connect()
+
+dataset = LeRobotDataset("/", episodes=[episode_idx])
+actions = dataset.hf_dataset.select_columns("action")
+
+log_say(f"Replaying episode {episode_idx}")
+for idx in range(dataset.num_frames):
+ t0 = time.perf_counter()
+
+ action = {
+ name: float(actions[idx]["action"][i]) for i, name in enumerate(dataset.features["action"]["names"])
+ }
+ robot.send_action(action)
+
+ busy_wait(1.0 / dataset.fps - (time.perf_counter() - t0))
+
+robot.disconnect()
+```
+
+
+
+
+
+Your robot should replicate movements similar to those you recorded. For example, check out [this video](https://x.com/RemiCadene/status/1793654950905680090) where we use `replay` on an Aloha robot from [Trossen Robotics](https://www.trossenrobotics.com).
+
+## Train a policy
+
+To train a policy to control your robot, use the [`lerobot-train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
+
+```bash
+lerobot-train \
+ --dataset.repo_id=${HF_USER}/so101_test \
+ --policy.type=act \
+ --output_dir=outputs/train/act_so101_test \
+ --job_name=act_so101_test \
+ --policy.device=cuda \
+ --wandb.enable=true \
+ --policy.repo_id=${HF_USER}/my_policy
+```
+
+Let's explain the command:
+
+1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+3. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
+4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
+
+Training should take several hours. You will find checkpoints in `outputs/train/act_so101_test/checkpoints`.
+
+To resume training from a checkpoint, below is an example command to resume from the `last` checkpoint of the `act_so101_test` policy:
+
+```bash
+lerobot-train \
+ --config_path=outputs/train/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
+ --resume=true
+```
+
+If you do not want to push your model to the hub after training, use `--policy.push_to_hub=false`.
+
+Additionally, you can provide extra `tags`, specify a `license`, or make the model repo `private` by adding: `--policy.private=true --policy.tags=\[ppo,rl\] --policy.license=mit`
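+
+For example (combining the flags above with the earlier training command):
+
+```bash
+lerobot-train \
+  --dataset.repo_id=${HF_USER}/so101_test \
+  --policy.type=act \
+  --output_dir=outputs/train/act_so101_test \
+  --job_name=act_so101_test \
+  --policy.device=cuda \
+  --policy.repo_id=${HF_USER}/my_policy \
+  --policy.private=true \
+  --policy.tags=\[ppo,rl\] \
+  --policy.license=mit
+```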
+
+#### Train using Google Colab
+
+If your local computer doesn't have a powerful GPU, you can use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
+
+#### Upload policy checkpoints
+
+Once training is done, upload the latest checkpoint with:
+
+```bash
+huggingface-cli upload ${HF_USER}/act_so101_test \
+ outputs/train/act_so101_test/checkpoints/last/pretrained_model
+```
+
+You can also upload intermediate checkpoints with:
+
+```bash
+CKPT=010000
+huggingface-cli upload ${HF_USER}/act_so101_test${CKPT} \
+ outputs/train/act_so101_test/checkpoints/${CKPT}/pretrained_model
+```
+
+## Run inference and evaluate your policy
+
+You can use the `record` script from [`lerobot/record.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/record.py) with a policy checkpoint as input, to run inference and evaluate your policy. For instance, run this command or API example to run inference and record 10 evaluation episodes:
+
+
+
+```bash
+lerobot-record \
+ --robot.type=so100_follower \
+ --robot.port=/dev/ttyACM1 \
+ --robot.cameras="{ up: {type: opencv, index_or_path: /dev/video10, width: 640, height: 480, fps: 30}, side: {type: intelrealsense, serial_number_or_name: 233522074606, width: 640, height: 480, fps: 30}}" \
+ --robot.id=my_awesome_follower_arm \
+ --display_data=false \
+ --dataset.repo_id=${HF_USER}/eval_so100 \
+ --dataset.single_task="Put lego brick into the transparent box" \
+ # <- Teleop optional if you want to teleoperate in between episodes \
+ # --teleop.type=so100_leader \
+ # --teleop.port=/dev/ttyACM0 \
+ # --teleop.id=my_awesome_leader_arm \
+ --policy.path=${HF_USER}/my_policy
+```
+
+
+
+
+```python
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.utils import hw_to_dataset_features
+from lerobot.policies.act.modeling_act import ACTPolicy
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.scripts.lerobot_record import record_loop
+from lerobot.utils.control_utils import init_keyboard_listener
+from lerobot.utils.utils import log_say
+from lerobot.utils.visualization_utils import init_rerun
+
+
+NUM_EPISODES = 5
+FPS = 30
+EPISODE_TIME_SEC = 60
+TASK_DESCRIPTION = "My task description"
+HF_MODEL_ID = "/"
+HF_DATASET_ID = "/"
+
+# Create the robot configuration
+camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
+robot_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm", cameras=camera_config
+)
+
+# Initialize the robot
+robot = SO100Follower(robot_config)
+
+# Initialize the policy
+policy = ACTPolicy.from_pretrained(HF_MODEL_ID)
+
+# Configure the dataset features
+action_features = hw_to_dataset_features(robot.action_features, "action")
+obs_features = hw_to_dataset_features(robot.observation_features, "observation")
+dataset_features = {**action_features, **obs_features}
+
+# Create the dataset
+dataset = LeRobotDataset.create(
+ repo_id=HF_DATASET_ID,
+ fps=FPS,
+ features=dataset_features,
+ robot_type=robot.name,
+ use_videos=True,
+ image_writer_threads=4,
+)
+
+# Initialize the keyboard listener and rerun visualization
+_, events = init_keyboard_listener()
+init_rerun(session_name="recording")
+
+# Connect the robot
+robot.connect()
+
+preprocessor, postprocessor = make_pre_post_processors(
+ policy_cfg=policy,
+ pretrained_path=HF_MODEL_ID,
+ dataset_stats=dataset.meta.stats,
+)
+
+for episode_idx in range(NUM_EPISODES):
+ log_say(f"Running inference, recording eval episode {episode_idx + 1} of {NUM_EPISODES}")
+
+ # Run the policy inference loop
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ policy=policy,
+ preprocessor=preprocessor,
+ postprocessor=postprocessor,
+ dataset=dataset,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ )
+
+ dataset.save_episode()
+
+# Clean up
+robot.disconnect()
+dataset.push_to_hub()
+```
+
+
+
+
+
+As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
+
+1. There is an additional `--policy.path` argument, which indicates the path to your policy checkpoint (e.g. `outputs/train/eval_act_so101_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so101_test`).
+2. The name of the dataset begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so101_test`).
diff --git a/docs/source/il_sim.mdx b/docs/source/il_sim.mdx
new file mode 100644
index 00000000..9b7d7c11
--- /dev/null
+++ b/docs/source/il_sim.mdx
@@ -0,0 +1,220 @@
+# Imitation Learning in Sim
+
+This tutorial will explain how to train a neural network to control a robot in simulation with imitation learning.
+
+**You'll learn:**
+
+1. How to record a dataset in simulation with [gym-hil](https://github.com/huggingface/gym-hil) and visualize the dataset.
+2. How to train a policy using your data.
+3. How to evaluate your policy in simulation and visualize the results.
+
+For the simulation environment we use the same [repo](https://github.com/huggingface/gym-hil) that is also being used by the Human-In-the-Loop (HIL) reinforcement learning algorithm.
+This environment is based on [MuJoCo](https://mujoco.org) and allows you to record datasets in LeRobotDataset format.
+Teleoperation is easiest with a controller like the Logitech F710, but you can also use your keyboard if you are up for the challenge.
+
+## Installation
+
+First, install the `gym_hil` package within the LeRobot environment: go to your LeRobot folder and run this command:
+
+```bash
+pip install -e ".[hilserl]"
+```
+
+## Teleoperate and Record a Dataset
+
+To use `gym_hil` with LeRobot, you need to use a configuration file. An example config file can be found [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/sim_il/env_config.json).
+
+To teleoperate and collect a dataset, we need to modify this config file. Here's an example configuration for imitation learning data collection:
+
+```json
+{
+ "env": {
+ "type": "gym_manipulator",
+ "name": "gym_hil",
+ "task": "PandaPickCubeGamepad-v0",
+ "fps": 10
+ },
+ "dataset": {
+ "repo_id": "your_username/il_gym",
+ "root": null,
+ "task": "pick_cube",
+ "num_episodes_to_record": 30,
+ "replay_episode": null,
+ "push_to_hub": true
+ },
+ "mode": "record",
+ "device": "cuda"
+}
+```
+
+Key configuration points:
+
+- Set your `repo_id` in the `dataset` section: `"repo_id": "your_username/il_gym"`
+- Set `num_episodes_to_record: 30` to collect 30 demonstration episodes
+- Ensure `mode` is set to `"record"`
+- If you don't have an NVIDIA GPU, change `"device": "cuda"` to `"mps"` for macOS or `"cpu"`
+- To use keyboard instead of gamepad, change `"task"` to `"PandaPickCubeKeyboard-v0"`
+
+Then we can run this command to start:
+
+
+
+
+```bash
+python -m lerobot.rl.gym_manipulator --config_path path/to/env_config_gym_hil_il.json
+```
+
+
+
+
+```bash
+mjpython -m lerobot.rl.gym_manipulator --config_path path/to/env_config_gym_hil_il.json
+```
+
+
+
+
+Once rendered, you can teleoperate the robot with the gamepad or keyboard; the gamepad/keyboard controls are listed below.
+
+Note that to teleoperate the robot you have to hold the "Human Take Over Pause Policy" button `RB` to enable control!
+
+**Gamepad Controls**
+
+
+
+
+
+ Gamepad button mapping for robot control and episode management
+
+
+**Keyboard controls**
+
+For keyboard controls use the `spacebar` to enable control and the following keys to move the robot:
+
+```bash
+ Arrow keys: Move in X-Y plane
+ Shift and Shift_R: Move in Z axis
+ Right Ctrl and Left Ctrl: Open and close gripper
+ ESC: Exit
+```
+
+## Visualize a dataset
+
+If you uploaded your dataset to the hub, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id.
+
+
+
+
+
+ Dataset visualizer
+
+
+## Train a policy
+
+To train a policy to control your robot, use the [`lerobot-train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
+
+```bash
+lerobot-train \
+ --dataset.repo_id=${HF_USER}/il_gym \
+ --policy.type=act \
+ --output_dir=outputs/train/il_sim_test \
+ --job_name=il_sim_test \
+ --policy.device=cuda \
+ --wandb.enable=true
+```
+
+Let's explain the command:
+
+1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/il_gym`.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+3. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
+4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
+
+Training should take several hours; 100k steps (the default) take about 1 hour on an Nvidia A100. You will find checkpoints in `outputs/train/il_sim_test/checkpoints`.
+
+#### Train using Colab
+
+If your local computer doesn't have a powerful GPU, you can use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
+
+#### Upload policy checkpoints
+
+Once training is done, upload the latest checkpoint with:
+
+```bash
+huggingface-cli upload ${HF_USER}/il_sim_test \
+ outputs/train/il_sim_test/checkpoints/last/pretrained_model
+```
+
+You can also upload intermediate checkpoints with:
+
+```bash
+CKPT=010000
+huggingface-cli upload ${HF_USER}/il_sim_test${CKPT} \
+ outputs/train/il_sim_test/checkpoints/${CKPT}/pretrained_model
+```
+
+## Evaluate your policy in Sim
+
+To evaluate your policy we have to use a configuration file. An example can be found [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/sim_il/eval_config.json).
+
+Here's an example evaluation configuration:
+
+```json
+{
+ "env": {
+ "type": "gym_manipulator",
+ "name": "gym_hil",
+ "task": "PandaPickCubeGamepad-v0",
+ "fps": 10
+ },
+ "dataset": {
+ "repo_id": "your_username/il_sim_dataset",
+ "dataset_root": null,
+ "task": "pick_cube"
+ },
+ "pretrained_policy_name_or_path": "your_username/il_sim_model",
+ "device": "cuda"
+}
+```
+
+Make sure to replace:
+
+- `repo_id` with the dataset you trained on (e.g., `your_username/il_sim_dataset`)
+- `pretrained_policy_name_or_path` with your model ID (e.g., `your_username/il_sim_model`)
+
+Then you can run this command to visualize your trained policy:
+
+
+
+
+```bash
+python -m lerobot.rl.eval_policy --config_path=path/to/eval_config_gym_hil.json
+```
+
+
+
+
+```bash
+mjpython -m lerobot.rl.eval_policy --config_path=path/to/eval_config_gym_hil.json
+```
+
+
+
+
+> [!WARNING]
+> While the main workflow of training ACT in simulation is straightforward, there is significant room for exploring how to set up the task, define the initial state of the environment, and determine the type of data required during collection to learn the most effective policy. If your trained policy doesn't perform well, investigate the quality of the dataset it was trained on using our visualizers, as well as the action values and various hyperparameters related to ACT and the simulation.
+
+Congrats 🎉, you have finished this tutorial. If you want to continue with using LeRobot in simulation follow this [Tutorial on reinforcement learning in sim with HIL-SERL](https://huggingface.co/docs/lerobot/hilserl_sim)
+
+> [!TIP]
+> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/docs/source/implement_your_own_processor.mdx b/docs/source/implement_your_own_processor.mdx
new file mode 100644
index 00000000..5b7d4f78
--- /dev/null
+++ b/docs/source/implement_your_own_processor.mdx
@@ -0,0 +1,273 @@
+# Implement your own Robot Processor
+
+In this tutorial, you'll learn how to implement your own Robot Processor.
+It begins by exploring the need for a custom processor, then uses the `NormalizerProcessorStep` as the running example to explain how to implement, configure, and serialize a processor. Finally, it lists all helper processors that ship with LeRobot.
+
+## Why would you need a custom processor?
+
+In most cases, when reading raw data from sensors or when models output actions, you need to process this data to make it compatible with your target system. For example, a common need is normalizing data ranges to make them suitable for neural networks.
+
+LeRobot's `NormalizerProcessorStep` handles this crucial task:
+
+```python
+# Input: raw joint positions in [0, 180] degrees
+raw_action = torch.tensor([90.0, 45.0, 135.0])
+
+# After processing: normalized to [-1, 1] range for model training
+normalizer = NormalizerProcessorStep(features=features, norm_map=norm_map, stats=dataset_stats)
+normalized_result = normalizer(transition)
+# ...
+```
+
+Other common processing needs include:
+
+- **Device placement**: Moving tensors between CPU/GPU and converting data types
+- **Format conversion**: Transforming between different data structures
+- **Batching**: Adding/removing batch dimensions for model compatibility
+- **Safety constraints**: Applying limits to robot commands
+
+```python
+# Example pipeline combining multiple processors
+pipeline = PolicyProcessorPipeline([
+ RenameObservationsProcessorStep(rename_map={}),
+ AddBatchDimensionProcessorStep(),
+ NormalizerProcessorStep(features=features, stats=stats),
+ DeviceProcessorStep(device="cuda"),
+ # ...
+])
+```
+
+LeRobot provides a pipeline mechanism to implement sequences of processing steps for both input data and output actions, making it easy to compose these transformations in the right order for optimal performance.
+
+## How to implement your own processor?
+
+We'll use the `NormalizerProcessorStep` as our main example because it demonstrates essential processor patterns including state management, configuration serialization, and tensor handling that you'll commonly need.
+
+Prepare the sequence of processing steps necessary for your problem. A processor step is a class that implements the following methods:
+
+- `__call__`: implements the processing step for the input transition.
+- `get_config`: gets the configuration of the processor step.
+- `state_dict`: gets the state of the processor step.
+- `load_state_dict`: loads the state of the processor step.
+- `reset`: resets the state of the processor step.
+- `transform_features`: declares how the processor step modifies the feature space (a minimal skeleton combining all of these methods is sketched below).
+
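+Before going through each method in detail, here is a minimal sketch of what such a step can look like. The class, registry name, and scaling logic are purely illustrative, and the import paths are assumptions based on the examples in this guide; the following sections use the real `NormalizerProcessorStep` to show each method properly.
+
+```python
+from dataclasses import dataclass, field
+from typing import Any
+
+import torch
+
+from lerobot.processor import EnvTransition, ProcessorStep, ProcessorStepRegistry  # paths assumed
+
+
+@dataclass
+@ProcessorStepRegistry.register("my_org/scale_processor")  # hypothetical, namespaced name
+class ScaleProcessorStep(ProcessorStep):
+    """Toy step: keeps a scale factor (config) and a call counter (internal state)."""
+
+    scale: float = 1.0
+    _num_calls: int = field(default=0, init=False, repr=False)
+
+    def __call__(self, transition: EnvTransition) -> EnvTransition:
+        new_transition = transition.copy()  # never mutate the input in place
+        self._num_calls += 1
+        # ... apply `self.scale` to the relevant observation/action entries here ...
+        return new_transition
+
+    def get_config(self) -> dict[str, Any]:
+        return {"scale": self.scale}  # JSON-serializable configuration only
+
+    def state_dict(self) -> dict[str, torch.Tensor]:
+        return {}  # no tensor state in this toy example
+
+    def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
+        pass
+
+    def reset(self) -> None:
+        self._num_calls = 0  # clear per-episode state
+
+    def transform_features(self, features):
+        return features  # keys and shapes are unchanged
+```
+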
+### Implement the `__call__` method
+
+The `__call__` method is the core of your processor step. It takes an `EnvTransition` and returns a modified `EnvTransition`. Here's how the `NormalizerProcessorStep` works:
+
+```python
+@dataclass
+@ProcessorStepRegistry.register("normalizer_processor")
+class NormalizerProcessorStep(ProcessorStep):
+ """Normalize observations/actions using dataset statistics."""
+
+ features: dict[str, PolicyFeature]
+ norm_map: dict[FeatureType, NormalizationMode]
+ stats: dict[str, dict[str, Any]] | None = None
+ eps: float = 1e-8
+ _tensor_stats: dict = field(default_factory=dict, init=False, repr=False)
+
+ def __post_init__(self):
+ """Convert stats to tensors for efficient computation."""
+ self.stats = self.stats or {}
+ self._tensor_stats = to_tensor(self.stats, device=self.device, dtype=torch.float32)
+
+ def __call__(self, transition: EnvTransition) -> EnvTransition:
+ new_transition = transition.copy()
+ # Normalize observations
+ # ...
+ # Normalize action
+ # ...
+ return new_transition
+
+```
+
+See the full implementation in `src/lerobot/processor/normalize_processor.py` for complete details.
+
+**Key principles:**
+
+- **Always use `transition.copy()`** to avoid side effects
+- **Handle both observations and actions** consistently
+- **Separate config from state**: `get_config()` returns JSON-serializable params, `state_dict()` returns tensors
+- **Convert stats to tensors** in `__post_init__()` for efficient computation
+
+### Configuration and State Management
+
+Processors support serialization through three methods that separate configuration from tensor state. The `NormalizerProcessorStep` demonstrates this perfectly - it carries dataset statistics (tensors) in its state, and hyperparameters in its config:
+
+```python
+# Continuing the NormalizerProcessorStep example...
+
+def get_config(self) -> dict[str, Any]:
+ """JSON-serializable configuration (no tensors)."""
+ return {
+ "eps": self.eps,
+ "features": {k: {"type": v.type.value, "shape": v.shape} for k, v in self.features.items()},
+ "norm_map": {ft.value: nm.value for ft, nm in self.norm_map.items()},
+ # ...
+ }
+
+def state_dict(self) -> dict[str, torch.Tensor]:
+ """Tensor state only (e.g., dataset statistics)."""
+ flat: dict[str, torch.Tensor] = {}
+ for key, sub in self._tensor_stats.items():
+ for stat_name, tensor in sub.items():
+ flat[f"{key}.{stat_name}"] = tensor.cpu() # Always save to CPU
+ return flat
+
+def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
+ """Restore tensor state at runtime."""
+ self._tensor_stats.clear()
+ for flat_key, tensor in state.items():
+ key, stat_name = flat_key.rsplit(".", 1)
+ # Load to processor's configured device
+ self._tensor_stats.setdefault(key, {})[stat_name] = tensor.to(
+ dtype=torch.float32, device=self.device
+ )
+ # ...
+```
+
+**Usage:**
+
+```python
+# Save (e.g., inside a policy)
+config = normalizer.get_config()
+tensors = normalizer.state_dict()
+
+# Restore (e.g., loading a pretrained policy)
+new_normalizer = NormalizerProcessorStep(**config)
+new_normalizer.load_state_dict(tensors)
+# Now new_normalizer has the same stats and configuration
+```
+
+### Transform features
+
+The `transform_features` method defines how your processor transforms feature names and shapes. This is crucial for policy configuration and debugging.
+
+For `NormalizerProcessorStep`, features are typically preserved unchanged since normalization doesn't alter keys or shapes:
+
+```python
+def transform_features(self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
+ """Normalization preserves all feature definitions."""
+ return features # No changes to feature structure
+ # ...
+```
+
+When your processor renames or reshapes data, implement this method to reflect the mapping for downstream components. For example, a simple rename processor:
+
+```python
+def transform_features(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
+ # Simple renaming
+ if "pixels" in features:
+ features["observation.image"] = features.pop("pixels")
+
+ # Pattern-based renaming
+ for key in list(features.keys()):
+ if key.startswith("env_state."):
+ suffix = key[len("env_state."):]
+ features[f"observation.{suffix}"] = features.pop(key)
+ # ...
+
+ return features
+```
+
+**Key principles:**
+
+- Use `features.pop(old_key)` to remove and get the old feature
+- Use `features[new_key] = old_feature` to add the renamed feature
+- Always return the modified features dictionary
+- Document transformations clearly in the docstring
+
+### Using overrides
+
+You can override step parameters at load-time using `overrides`. This is handy for non-serializable objects or site-specific settings. It works both in policy factories and with `DataProcessorPipeline.from_pretrained(...)`.
+
+**Foundational model adaptation**: This is particularly useful when working with foundational pretrained policies where you rarely have access to the original training statistics. You can inject your own dataset statistics to adapt the normalizer to your specific robot or environment data.
+
+Example: during policy evaluation on the robot, override the device and rename map.
+Use this to run a policy trained on CUDA on a CPU-only robot, or to remap camera keys when the robot uses different names than the dataset.
+
+Direct usage with `from_pretrained`:
+
+```python
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.processor import RobotProcessorPipeline
+
+# Load a foundational policy trained on diverse robot data
+# but adapt normalization to your specific robot/environment
+new_stats = LeRobotDataset(repo_id="username/my-dataset").meta.stats
+processor = RobotProcessorPipeline.from_pretrained(
+ "huggingface/foundational-robot-policy", # Pretrained foundation model
+ overrides={
+ "normalizer_processor": {"stats": new_stats}, # Inject your robot's statistics
+ "device_processor": {"device": "cuda:0"}, # registry name for registered steps
+ "rename_processor": {"rename_map": robot_key_map}, # Map your robot's observation keys
+ # ...
+ },
+)
+```
+
+## Best Practices
+
+Based on analysis of all LeRobot processor implementations, here are the key patterns and practices:
+
+### 1. **Safe Data Handling**
+
+Always create copies of input data to avoid unintended side effects. Use `transition.copy()` and `observation.copy()` rather than modifying data in-place. This prevents your processor from accidentally affecting other components in the pipeline.
+
+Check for required data before processing and handle missing data gracefully. If your processor expects certain keys (like `"pixels"` for image processing), validate their presence first. For optional data, use safe access patterns like `transition.get()` and handle `None` values appropriately.
+
+When data validation fails, provide clear, actionable error messages that help users understand what went wrong and how to fix it.
+
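+As an illustration, here is a sketch of these patterns in an observation-only step. The `"pixels"` key, the error messages, and the import path are examples/assumptions rather than a fixed API:
+
+```python
+import torch
+
+from lerobot.processor import ObservationProcessorStep  # import path assumed
+
+
+class SafeImageStep(ObservationProcessorStep):
+    """Hypothetical step showing defensive data handling."""
+
+    def observation(self, observation: dict) -> dict:
+        if "pixels" not in observation:
+            # Actionable error: say what was expected and what was actually received.
+            raise KeyError(f"SafeImageStep expects a 'pixels' key, got: {sorted(observation)}")
+
+        image = observation["pixels"]
+        if image.dtype != torch.uint8:
+            raise TypeError(f"SafeImageStep expects uint8 images, got {image.dtype}")
+
+        new_observation = observation.copy()  # never modify the caller's dict in place
+        new_observation["pixels"] = image.float() / 255.0
+        return new_observation
+
+    def transform_features(self, features):
+        return features
+```
+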
+### 2. **Choose Appropriate Base Classes**
+
+LeRobot provides specialized base classes that reduce boilerplate code and ensure consistency. Use `ObservationProcessorStep` when you only need to modify observations, `ActionProcessorStep` for action-only processing, and `RobotActionProcessorStep` specifically for dictionary-based robot actions.
+
+Only inherit directly from `ProcessorStep` when you need full control over the entire transition or when processing multiple transition components simultaneously. The specialized base classes handle the transition management for you and provide type safety.
+
+### 3. **Registration and Naming**
+
+Register your processors with descriptive, namespaced names using `@ProcessorStepRegistry.register()`. Use organization prefixes like `"robotics_lab/safety_clipper"` or `"acme_corp/vision_enhancer"` to avoid naming conflicts. Avoid generic names like `"processor"` or `"step"` that could clash with other implementations.
+
+Good registration makes your processors discoverable and enables clean serialization/deserialization when saving and loading pipelines.
+
+### 4. **State Management Patterns**
+
+Distinguish between configuration parameters (JSON-serializable values) and internal state (tensors, buffers). Use dataclass fields with `init=False, repr=False` for internal state that shouldn't appear in the constructor or string representation.
+
+Implement the `reset()` method to clear internal state between episodes. This is crucial for stateful processors that accumulate data over time, like moving averages or temporal filters.
+
+Remember that `get_config()` should only return JSON-serializable configuration, while `state_dict()` handles tensor state separately.
+
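+For example, a stateful step that smooths actions over time might separate its fields and implement `reset()` like this (the class and smoothing logic are hypothetical):
+
+```python
+from dataclasses import dataclass, field
+from typing import Any
+
+from lerobot.processor import ProcessorStep  # import path assumed
+
+
+@dataclass
+class MovingAverageActionStep(ProcessorStep):  # hypothetical example
+    window: int = 5  # configuration: JSON-serializable, returned by get_config()
+    _history: list = field(default_factory=list, init=False, repr=False)  # internal state
+
+    def __call__(self, transition):
+        new_transition = transition.copy()
+        # ... append the current action to self._history and write back the mean
+        #     of the last `self.window` entries ...
+        return new_transition
+
+    def get_config(self) -> dict[str, Any]:
+        return {"window": self.window}  # no tensors here
+
+    def reset(self) -> None:
+        self._history.clear()  # called between episodes so data doesn't leak across them
+
+    def transform_features(self, features):
+        return features
+```
+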
+### 5. **Input Validation and Error Handling**
+
+Validate input types and shapes before processing. Check tensor properties like `dtype` and dimensions to ensure compatibility with your algorithms. For robot actions, verify that required pose components or joint values are present and within expected ranges.
+
+Use early returns for edge cases where no processing is needed. Provide clear, descriptive error messages that include the expected vs. actual data types or shapes. This makes debugging much easier for users.
+
+### 6. **Device and Dtype Awareness**
+
+Design your processors to automatically adapt to the device and dtype of input tensors. Internal tensors (like normalization statistics) should match the input tensor's device and dtype to ensure compatibility with multi-GPU training, mixed precision, and distributed setups.
+
+Implement a `to()` method that moves your processor's internal state to the specified device. Check device/dtype compatibility at runtime and automatically migrate internal state when needed. This pattern enables seamless operation across different hardware configurations without manual intervention.
+
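+A sketch of such a `to()` method for a step that holds tensor statistics (mirroring the `_tensor_stats` attribute used by `NormalizerProcessorStep` earlier; the exact signature is an assumption):
+
+```python
+def to(self, device: str | torch.device | None = None, dtype: torch.dtype | None = None):
+    """Move internal tensors so they match the policy's device and dtype."""
+    for sub in self._tensor_stats.values():
+        for stat_name, tensor in sub.items():
+            sub[stat_name] = tensor.to(device=device, dtype=dtype)
+    if device is not None:
+        self.device = str(device)  # remember where restored state should be loaded
+    return self
+```
+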
+## Conclusion
+
+You now have all the tools to implement custom processors in LeRobot! The key steps are:
+
+1. **Define your processor** as a dataclass with the required methods (`__call__`, `get_config`, `state_dict`, `load_state_dict`, `reset`, `transform_features`)
+2. **Register it** using `@ProcessorStepRegistry.register("name")` for discoverability
+3. **Integrate it** into a `DataProcessorPipeline` with other processing steps
+4. **Use base classes** like `ObservationProcessorStep` when possible to reduce boilerplate
+5. **Implement device/dtype awareness** to support multi-GPU and mixed precision setups
+
+The processor system is designed to be modular and composable, allowing you to build complex data processing pipelines from simple, focused components. Whether you're preprocessing sensor data for training or post-processing model outputs for robot execution, custom processors give you the flexibility to handle any data transformation your robotics application requires.
+
+Key principles for robust processors:
+
+- **Device/dtype adaptation**: Internal tensors should match input tensors
+- **Clear error messages**: Help users understand what went wrong
+- **Base class usage**: Leverage specialized base classes to reduce boilerplate
+- **Feature contracts**: Declare data structure changes with `transform_features()`
+
+Start simple, test thoroughly, and ensure your processors work seamlessly across different hardware configurations!
diff --git a/docs/source/index.mdx b/docs/source/index.mdx
index b8ff56ea..a2f919e7 100644
--- a/docs/source/index.mdx
+++ b/docs/source/index.mdx
@@ -1,6 +1,10 @@
diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx
index 8bc761b1..67082137 100644
--- a/docs/source/installation.mdx
+++ b/docs/source/installation.mdx
@@ -1,84 +1,127 @@
# Installation
-## Install LeRobot
+## Install [`miniforge`](https://conda-forge.org/download/)
-Download our source code:
```bash
-git clone https://github.com/huggingface/lerobot.git
-cd lerobot
+wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
+bash Miniforge3-$(uname)-$(uname -m).sh
```
-Create a virtual environment with Python 3.10, using [`Miniconda`](https://docs.anaconda.com/miniconda/install/#quick-command-line-install)
+## Environment Setup
+
+Create a virtual environment with Python 3.10, using conda:
+
```bash
conda create -y -n lerobot python=3.10
```
-Now restart the shell by running:
-
-
-
-```bash
-source ~/.bashrc
-```
-
-
-
-```bash
-source ~/.bash_profile
-```
-
-
-
-```bash
-source ~/.zshrc
-```
-
-
-
Then activate your conda environment, you have to do this each time you open a shell to use lerobot:
+
```bash
conda activate lerobot
```
-When using `miniconda`, install `ffmpeg` in your environment:
+When using `conda`, install `ffmpeg` in your environment:
+
```bash
conda install ffmpeg -c conda-forge
```
> [!TIP]
> This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
-> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
-> ```bash
-> conda install ffmpeg=7.1.1 -c conda-forge
-> ```
-> - _[On Linux only]_ Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
+>
+> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
+>
+> ```bash
+> conda install ffmpeg=7.1.1 -c conda-forge
+> ```
+>
+> - _[On Linux only]_ If you want to bring your own ffmpeg: Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
+
+## Install LeRobot 🤗
+
+### From Source
+
+First, clone the repository and navigate into the directory:
-Install 🤗 LeRobot:
```bash
-cd lerobot && pip install -e ".[feetech]"
+git clone https://github.com/huggingface/lerobot.git
+cd lerobot
```
-## Troubleshooting
+Then, install the library in editable mode. This is useful if you plan to contribute to the code.
+
+```bash
+pip install -e .
+```
+
+### Installation from PyPI
+
+**Core Library:**
+Install the base package with:
+
+```bash
+pip install lerobot
+```
+
+_This installs only the default dependencies._
+
+**Extra Features:**
+To install additional functionality, use one of the following:
+
+```bash
+pip install 'lerobot[all]' # All available features
+pip install 'lerobot[aloha,pusht]' # Specific features (Aloha & Pusht)
+pip install 'lerobot[feetech]' # Feetech motor support
+```
+
+_Replace `[...]` with your desired features._
+
+**Available Tags:**
+For a full list of optional dependencies, see:
+https://pypi.org/project/lerobot/
+
+> [!NOTE]
+> For lerobot 0.4.0, if you want to install libero or pi, you will have to do: `pip install "lerobot[pi,libero]@git+https://github.com/huggingface/lerobot.git"`
+
+### Troubleshooting
+
If you encounter build errors, you may need to install additional dependencies: `cmake`, `build-essential`, and `ffmpeg libs`.
To install these for linux run:
+
```bash
sudo apt-get install cmake build-essential python-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev pkg-config
```
+
For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
-## Sim
-For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
-- [aloha](https://github.com/huggingface/gym-aloha)
-- [xarm](https://github.com/huggingface/gym-xarm)
-- [pusht](https://github.com/huggingface/gym-pusht)
+## Optional dependencies
+
+LeRobot provides optional extras for specific functionalities. Multiple extras can be combined (e.g., `.[aloha,feetech]`). For all available extras, refer to `pyproject.toml`.
+
+### Simulations
+
+Install environment packages: `aloha` ([gym-aloha](https://github.com/huggingface/gym-aloha)) or `pusht` ([gym-pusht](https://github.com/huggingface/gym-pusht)).
+Example:
-For instance, to install 🤗 LeRobot with aloha and pusht, use:
```bash
-pip install -e ".[aloha, pusht]"
+pip install -e ".[aloha]" # or "[pusht]" for example
```
-## W&B
+### Motor Control
+
+For Koch v1.1, install the Dynamixel SDK; for SO100/SO101/Moss, install the Feetech SDK.
+
+```bash
+pip install -e ".[feetech]" # or "[dynamixel]" for example
+```
+
+### Experiment Tracking
+
To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
+
```bash
wandb login
```
+
+You can now assemble your robot if it's not ready yet; look for your robot type on the left. Then follow the link below to use LeRobot with your robot.
diff --git a/docs/source/integrate_hardware.mdx b/docs/source/integrate_hardware.mdx
new file mode 100644
index 00000000..e1587be9
--- /dev/null
+++ b/docs/source/integrate_hardware.mdx
@@ -0,0 +1,476 @@
+# Bring Your Own Hardware
+
+This tutorial will explain how to integrate your own robot design into the LeRobot ecosystem and have it access all of our tools (data collection, control pipelines, policy training and inference).
+
+To that end, we provide the [`Robot`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/robot.py) base class in LeRobot, which specifies a standard interface for physical robot integration. Let's see how to implement it.
+
+## Prerequisites
+
+- Your own robot which exposes a communication interface (e.g. serial, CAN, TCP)
+- A way to read sensor data and send motor commands programmatically, e.g. manufacturer's SDK or API, or your own protocol implementation.
+- LeRobot installed in your environment. Follow our [Installation Guide](./installation).
+
+## Choose your motors
+
+If you're using Feetech or Dynamixel motors, LeRobot provides built-in bus interfaces:
+
+- [`FeetechMotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/feetech/feetech.py) – for controlling Feetech servos
+- [`DynamixelMotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/dynamixel/dynamixel.py) – for controlling Dynamixel servos
+
+Please refer to the [`MotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/motors_bus.py) abstract class to learn about its API.
+For a good example of how it can be used, you can have a look at our own [SO101 follower implementation](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/so101_follower/so101_follower.py).
+
+Use these if compatible. Otherwise, you'll need to find or write a Python interface (not covered in this tutorial):
+
+- Find an existing SDK in Python (or use bindings to C/C++)
+- Or implement a basic communication wrapper (e.g., via pyserial, socket, or CANopen)
+
+You're not alone—many community contributions use custom boards or firmware!
+
+For Feetech and Dynamixel, we currently support these servos:
+
+- Feetech:
+  - STS & SMS series (protocol 0): `sts3215`, `sts3250`, `sm8512bl`
+  - SCS series (protocol 1): `scs0009`
+- Dynamixel (protocol 2.0 only): `xl330-m077`, `xl330-m288`, `xl430-w250`, `xm430-w350`, `xm540-w270`, `xc430-w150`
+
+If you are using Feetech or Dynamixel servos that are not in this list, you can add those in the [Feetech table](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/feetech/tables.py) or [Dynamixel table](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/dynamixel/tables.py). Depending on the model, this will require you to add model-specific information. In most cases though, there shouldn't be a lot of additions to do.
+
+In the next sections, we'll use a `FeetechMotorsBus` as the motors interface for the examples. Replace it and adapt to your motors if necessary.
+
+## Step 1: Subclass the `Robot` Interface
+
+You'll first need to specify the config class and a string identifier (`name`) for your robot. Any setting you'd like to be able to change easily (e.g. port/address, baudrate) should go in the config.
+
+Here, we'll add the port name and one camera by default for our robot:
+
+
+```python
+from dataclasses import dataclass, field
+
+from lerobot.cameras import CameraConfig
+from lerobot.cameras.opencv import OpenCVCameraConfig
+from lerobot.robots import RobotConfig
+
+
+@RobotConfig.register_subclass("my_cool_robot")
+@dataclass
+class MyCoolRobotConfig(RobotConfig):
+ port: str
+ cameras: dict[str, CameraConfig] = field(
+        default_factory=lambda: {  # a mutable default must use a callable factory
+ "cam_1": OpenCVCameraConfig(
+ index_or_path=2,
+ fps=30,
+ width=480,
+ height=640,
+ ),
+ }
+ )
+```
+
+
+See the [Cameras tutorial](./cameras) to understand how to detect and add your camera.
+
+Next, we'll create our actual robot class which inherits from `Robot`. This abstract class defines a contract you must follow for your robot to be usable with the rest of the LeRobot tools.
+
+Here we'll create a simple 5-DoF robot with one camera. It could be a simple arm, but notice that the `Robot` abstract class does not assume anything about your robot's form factor. You can let your imagination run wild when designing new robots!
+
+
+```python
+from lerobot.cameras import make_cameras_from_configs
+from lerobot.motors import Motor, MotorNormMode
+from lerobot.motors.feetech import FeetechMotorsBus
+from lerobot.robots import Robot
+
+class MyCoolRobot(Robot):
+ config_class = MyCoolRobotConfig
+ name = "my_cool_robot"
+
+ def __init__(self, config: MyCoolRobotConfig):
+ super().__init__(config)
+ self.bus = FeetechMotorsBus(
+ port=self.config.port,
+ motors={
+ "joint_1": Motor(1, "sts3250", MotorNormMode.RANGE_M100_100),
+ "joint_2": Motor(2, "sts3215", MotorNormMode.RANGE_M100_100),
+ "joint_3": Motor(3, "sts3215", MotorNormMode.RANGE_M100_100),
+ "joint_4": Motor(4, "sts3215", MotorNormMode.RANGE_M100_100),
+ "joint_5": Motor(5, "sts3215", MotorNormMode.RANGE_M100_100),
+ },
+ calibration=self.calibration,
+ )
+ self.cameras = make_cameras_from_configs(config.cameras)
+```
+
+
+## Step 2: Define Observation and Action Features
+
+These two properties define the _interface contract_ between your robot and tools that consume it (such as data collection or learning pipelines).
+
+> [!WARNING]
+> Note that these properties must be callable even if the robot is not yet connected, so avoid relying on runtime hardware state to define them.
+
+### `observation_features`
+
+This property should return a dictionary describing the structure of sensor outputs from your robot. The keys match what `get_observation()` returns, and the values describe either the shape (for arrays/images) or the type (for simple values).
+
+Example for our 5-DoF arm with one camera:
+
+
+```python
+@property
+def _motors_ft(self) -> dict[str, type]:
+ return {
+ "joint_1.pos": float,
+ "joint_2.pos": float,
+ "joint_3.pos": float,
+ "joint_4.pos": float,
+ "joint_5.pos": float,
+ }
+
+@property
+def _cameras_ft(self) -> dict[str, tuple]:
+ return {
+ cam: (self.cameras[cam].height, self.cameras[cam].width, 3) for cam in self.cameras
+ }
+
+@property
+def observation_features(self) -> dict:
+ return {**self._motors_ft, **self._cameras_ft}
+```
+
+
+In this case, observations consist of a simple dict storing each motor's position and a camera image.
+
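+Put together, the properties above evaluate to a plain dictionary of names and shapes/types. For instance (the port is a placeholder, and the camera shape follows the `height`/`width` set in the config):
+
+```python
+robot = MyCoolRobot(MyCoolRobotConfig(port="/dev/ttyACM0"))
+
+print(robot.observation_features)
+# {
+#     "joint_1.pos": float, "joint_2.pos": float, "joint_3.pos": float,
+#     "joint_4.pos": float, "joint_5.pos": float,
+#     "cam_1": (640, 480, 3),  # (height, width, channels)
+# }
+```
+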
+### `action_features`
+
+This property describes the commands your robot expects via `send_action()`. Again, keys must match the expected input format, and values define the shape/type of each command.
+
+Here, we reuse the same joint proprioceptive features (`self._motors_ft`) as in `observation_features`: the action sent will simply be the goal position for each motor.
+
+
+```python
+@property
+def action_features(self) -> dict:
+ return self._motors_ft
+```
+
+
+## Step 3: Handle Connection and Disconnection
+
+These methods should handle opening and closing communication with your hardware (e.g. serial ports, CAN interfaces, USB devices, cameras).
+
+### `is_connected`
+
+This property should simply reflect that communication with the robot's hardware is established. When this property is `True`, it should be possible to read and write to the hardware using `get_observation()` and `send_action()`.
+
+
+```python
+@property
+def is_connected(self) -> bool:
+ return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
+```
+
+
+### `connect()`
+
+This method should establish communication with the hardware. Moreover, if your robot needs calibration and is not calibrated, it should start a calibration procedure by default. If your robot needs any specific configuration, this is also the place to apply it (see `configure()` below).
+
+
+```python
+def connect(self, calibrate: bool = True) -> None:
+ self.bus.connect()
+ if not self.is_calibrated and calibrate:
+ self.calibrate()
+
+ for cam in self.cameras.values():
+ cam.connect()
+
+ self.configure()
+```
+
+
+### `disconnect()`
+
+This method should gracefully terminate communication with the hardware: free any related resources (threads or processes), close ports, etc.
+
+Here, we already handle this in our `MotorsBus` and `Camera` classes so we just need to call their own `disconnect()` methods:
+
+
+```python
+def disconnect(self) -> None:
+ self.bus.disconnect()
+ for cam in self.cameras.values():
+ cam.disconnect()
+```
+
+
+## Step 4: Support Calibration and Configuration
+
+LeRobot supports saving and loading calibration data automatically. This is useful for joint offsets, zero positions, or sensor alignment.
+
+> Note that depending on your hardware, this may not apply. If that's the case, you can simply leave these methods as no-ops:
+
+
+```python
+@property
+def is_calibrated(self) -> bool:
+ return True
+
+def calibrate(self) -> None:
+ pass
+```
+
+
+### `is_calibrated`
+
+This should reflect whether your robot has the required calibration loaded.
+
+
+```python
+@property
+def is_calibrated(self) -> bool:
+ return self.bus.is_calibrated
+```
+
+
+### `calibrate()`
+
+The goal of the calibration is twofold:
+
+- Know the physical range of motion of each motor in order to only send commands within this range.
+- Normalize raw motor positions to sensible continuous values (e.g. percentages, degrees) instead of arbitrary discrete values that depend on the specific motor used and won't replicate elsewhere.
+
+It should implement the logic for calibration (if relevant) and update the `self.calibration` dictionary. If you are using Feetech or Dynamixel motors, our bus interfaces already include methods to help with this.
+
+
+```python
+def calibrate(self) -> None:
+ self.bus.disable_torque()
+ for motor in self.bus.motors:
+ self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
+
+ input(f"Move {self} to the middle of its range of motion and press ENTER....")
+ homing_offsets = self.bus.set_half_turn_homings()
+
+ print(
+ "Move all joints sequentially through their entire ranges "
+ "of motion.\nRecording positions. Press ENTER to stop..."
+ )
+ range_mins, range_maxes = self.bus.record_ranges_of_motion()
+
+ self.calibration = {}
+ for motor, m in self.bus.motors.items():
+ self.calibration[motor] = MotorCalibration(
+ id=m.id,
+ drive_mode=0,
+ homing_offset=homing_offsets[motor],
+ range_min=range_mins[motor],
+ range_max=range_maxes[motor],
+ )
+
+ self.bus.write_calibration(self.calibration)
+ self._save_calibration()
+ print("Calibration saved to", self.calibration_fpath)
+```
+
+
+### `configure()`
+
+Use this to set up any configuration for your hardware (servos control modes, controller gains, etc.). This should usually be run at connection time and be idempotent.
+
+
+```python
+def configure(self) -> None:
+ with self.bus.torque_disabled():
+ self.bus.configure_motors()
+ for motor in self.bus.motors:
+ self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
+ self.bus.write("P_Coefficient", motor, 16)
+ self.bus.write("I_Coefficient", motor, 0)
+ self.bus.write("D_Coefficient", motor, 32)
+```
+
+
+## Step 5: Implement Sensors Reading and Action Sending
+
+These are the most important runtime functions: the core I/O loop.
+
+### `get_observation()`
+
+Returns a dictionary of sensor values from the robot. These typically include motor states, camera frames, various sensors, etc. In the LeRobot framework, these observations are what will be fed to a policy in order to predict the actions to take. The dictionary keys and structure must match `observation_features`.
+
+
+```python
+def get_observation(self) -> dict[str, Any]:
+ if not self.is_connected:
+ raise ConnectionError(f"{self} is not connected.")
+
+ # Read arm position
+ obs_dict = self.bus.sync_read("Present_Position")
+ obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()}
+
+ # Capture images from cameras
+ for cam_key, cam in self.cameras.items():
+ obs_dict[cam_key] = cam.async_read()
+
+ return obs_dict
+```
+
+
+### `send_action()`
+
+Takes a dictionary that matches `action_features`, and sends it to your hardware. You can add safety limits (clipping, smoothing) and return what was actually sent.
+
+For simplicity, we don't modify the actions in this basic example; a variant that adds simple clipping is sketched after the code block below.
+
+
+```python
+def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
+ goal_pos = {key.removesuffix(".pos"): val for key, val in action.items()}
+
+ # Send goal position to the arm
+ self.bus.sync_write("Goal_Position", goal_pos)
+
+ return action
+```
+
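+If you do want safety limits, a variant of `send_action()` could clip how far each motor is allowed to move in a single step before writing to the bus. The limit value below is purely illustrative:
+
+```python
+def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
+    goal_pos = {key.removesuffix(".pos"): val for key, val in action.items()}
+
+    # Hypothetical safety limit: cap the per-step displacement of each motor.
+    max_delta = 10.0  # tune for your hardware and normalization mode
+    present_pos = self.bus.sync_read("Present_Position")
+    for motor, goal in goal_pos.items():
+        delta = max(-max_delta, min(max_delta, goal - present_pos[motor]))
+        goal_pos[motor] = present_pos[motor] + delta
+
+    self.bus.sync_write("Goal_Position", goal_pos)
+
+    # Return what was actually sent so that is what gets recorded in datasets.
+    return {f"{motor}.pos": val for motor, val in goal_pos.items()}
+```
+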
+
+## Adding a Teleoperator
+
+For implementing teleoperation devices, we also provide a [`Teleoperator`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/teleoperators/teleoperator.py) base class. This class is very similar to the `Robot` base class and also doesn't assume anything about form factor.
+
+The main differences are in the I/O functions: a teleoperator produces actions via `get_action` and can receive feedback actions via `send_feedback`. Feedback can be anything controllable on the teleoperation device that helps the person controlling it understand the consequences of the actions sent: think of motion/force feedback on a leader arm, or vibrations on a gamepad controller. To implement a teleoperator, you can follow this same tutorial and adapt it for these two methods, as in the sketch below.
+
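+Here is a minimal sketch of those two methods for a hypothetical gamepad teleoperator. The helper methods and action keys are made up for illustration; check the `Teleoperator` base class for the authoritative signatures:
+
+```python
+from typing import Any
+
+from lerobot.teleoperators.teleoperator import Teleoperator
+
+
+class MyGamepadTeleop(Teleoperator):
+    # ... config_class, name, connect(), etc. as in the robot tutorial above ...
+
+    def get_action(self) -> dict[str, Any]:
+        # Read the device and convert it to the action format your robot expects.
+        x, y = self._read_sticks()  # hypothetical helper
+        return {"joint_1.pos": x, "joint_2.pos": y}
+
+    def send_feedback(self, feedback: dict[str, Any]) -> None:
+        # Optional: surface information back to the operator (e.g. rumble on contact).
+        self._rumble(strength=feedback.get("force", 0.0))  # hypothetical helper
+```
+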
+## Using Your Own `LeRobot` Devices 🔌
+
+You can easily extend `lerobot` with your own custom hardware, be it a camera, robot, or teleoperation device, by creating a separate, installable Python package. If you follow a few simple conventions, the `lerobot` command-line tools (like `lerobot-teleoperate` and `lerobot-record`) will **automatically discover and integrate your creations** without requiring any changes to the `lerobot` source code.
+
+This guide outlines the conventions your plugin must follow.
+
+### The 4 Core Conventions
+
+To ensure your custom device is discoverable, you must adhere to the following four rules.
+
+#### 1\. Create an Installable Package with a Specific Prefix
+
+Your project must be a standard, installable Python package. Crucially, the name of your package (as defined in `pyproject.toml` or `setup.py`) must begin with one of these prefixes:
+
+- `lerobot_robot_` for a robot.
+- `lerobot_camera_` for a camera.
+- `lerobot_teleoperator_` for a teleoperation device.
+
+This prefix system is how `lerobot` automatically finds your plugin in the Python environment.
+
+#### 2\. Follow the `SomethingConfig`/`Something` Naming Pattern
+
+Your device's implementation class must be named after its configuration class, simply by removing the `Config` suffix.
+
+- **Config Class:** `MyAwesomeTeleopConfig`
+- **Device Class:** `MyAwesomeTeleop`
+
+#### 3\. Place Your Files in a Predictable Structure
+
+The device class (`MyAwesomeTeleop`) must be located in a predictable module relative to its configuration class (`MyAwesomeTeleopConfig`). `lerobot` will automatically search in these locations:
+
+- In the **same module** as the config class.
+- In a **submodule named after the device** (e.g., `my_awesome_teleop.py`).
+
+The recommended and simplest structure is to place them in separate, clearly named files within the same directory.
+
+#### 4\. Expose Classes in `__init__.py`
+
+Your package's `__init__.py` file should import and expose both the configuration and the device classes, making them easily accessible.
+
+### Putting It All Together: A Complete Example
+
+Let's create a new teleoperator called `my_awesome_teleop`.
+
+#### Directory Structure
+
+Here is what the project folder should look like. The package name, `lerobot_teleoperator_my_awesome_teleop`, follows **Convention \#1**.
+
+```
+lerobot_teleoperator_my_awesome_teleop/
+├── pyproject.toml # (or setup.py) lists lerobot as a dependency
+└── lerobot_teleoperator_my_awesome_teleop/
+ ├── __init__.py
+ ├── config_my_awesome_teleop.py
+ └── my_awesome_teleop.py
+```
+
+#### File Contents
+
+- **`config_my_awesome_teleop.py`**: Defines the configuration class. Note the `Config` suffix (**Convention \#2**).
+
+ ```python
+ from dataclasses import dataclass
+
+ from lerobot.teleoperators.config import TeleoperatorConfig
+
+ @TeleoperatorConfig.register_subclass("my_awesome_teleop")
+ @dataclass
+ class MyAwesomeTeleopConfig(TeleoperatorConfig):
+ # Your configuration fields go here
+ port: str = "192.168.1.1"
+ ```
+
+- **`my_awesome_teleop.py`**: Implements the device. The class name `MyAwesomeTeleop` matches its config class name (**Convention \#2**). This file structure adheres to **Convention \#3**.
+
+ ```python
+ from lerobot.teleoperators.teleoperator import Teleoperator
+
+ from .config_my_awesome_teleop import MyAwesomeTeleopConfig
+
+ class MyAwesomeTeleop(Teleoperator):
+ config_class = MyAwesomeTeleopConfig
+ name = "my_awesome_teleop"
+
+ def __init__(self, config: MyAwesomeTeleopConfig):
+ super().__init__(config)
+ self.config = config
+
+ # Your device logic (e.g., connect) goes here
+ ```
+
+- **`__init__.py`**: Exposes the key classes (**Convention \#4**).
+
+ ```python
+ from .config_my_awesome_teleop import MyAwesomeTeleopConfig
+ from .my_awesome_teleop import MyAwesomeTeleop
+ ```
+
+### Installation and Usage
+
+1. **Install your new plugin in your Python environment.** You can install your local plugin package using `pip`'s editable mode or from PyPI.
+
+ ```bash
+ # Locally
+ # Navigate to your plugin's root directory and install it
+ cd lerobot_teleoperator_my_awesome_teleop
+ pip install -e .
+
+ # From PyPi
+ pip install lerobot_teleoperator_my_awesome_teleop
+ ```
+
+2. **Use it directly from the command line.** Now, you can use your custom device by referencing its type.
+
+ ```bash
+ lerobot-teleoperate --teleop.type=my_awesome_teleop \
+ # other arguments
+ ```
+
+And that's it\! Your custom device is now fully integrated.
+
+### Looking for an example?
+
+Check out these two packages from the community:
+
+- https://github.com/SpesRobotics/lerobot-robot-xarm
+- https://github.com/SpesRobotics/lerobot-teleoperator-teleop
+
+## Wrapping Up
+
+Once your robot class is complete, you can leverage the LeRobot ecosystem:
+
+- Control your robot with the available teleoperators or directly integrate your own teleoperation device
+- Record training data and visualize it
+- Integrate it into RL or imitation learning pipelines
+
+Don't hesitate to reach out to the community for help on our [Discord](https://discord.gg/s3KuuzsPFb) 🤗
diff --git a/docs/source/introduction_processors.mdx b/docs/source/introduction_processors.mdx
new file mode 100644
index 00000000..6f376861
--- /dev/null
+++ b/docs/source/introduction_processors.mdx
@@ -0,0 +1,314 @@
+# Introduction to Processors
+
+In robotics, there's a fundamental mismatch between the data that robots and humans produce and what machine learning models expect.
+Robots output raw sensor data like camera images and joint positions that need normalization, batching, and device placement before models can process them.
+Language instructions from humans must be tokenized into numerical representations, and different robots use different coordinate systems that need standardization.
+
+The challenge extends to model outputs as well.
+Models might output end-effector positions while robots need joint-space commands, or teleoperators produce relative movements while robots expect absolute commands.
+Model predictions are often normalized and need conversion back to real-world scales.
+
+Cross-domain translation adds another layer of complexity.
+Training data from one robot setup needs adaptation for deployment on different hardware, models trained with specific camera configurations must work with new arrangements, and datasets with different naming conventions need harmonization.
+
+**That's where processors come in.** They serve as universal translators that bridge these gaps, ensuring seamless data flow from sensors to models to actuators.
+Processors handle all the preprocessing and postprocessing steps needed to convert raw environment data into model-ready inputs and vice versa.
+
+This means that your favorite policy can be used like this:
+
+```python
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.policies.your_policy import YourPolicy  # placeholder: replace with your policy class
+
+dataset = LeRobotDataset("hf_user/dataset", episodes=[0])
+sample = dataset[10]
+
+model = YourPolicy.from_pretrained(
+ "hf_user/model",
+)
+model.eval()
+model.to("cuda")
+preprocessor, postprocessor = make_pre_post_processors(model.config, pretrained_path="hf_user/model", dataset_stats=dataset.meta.stats)
+
+preprocessed_sample = preprocessor(sample)
+action = model.select_action(preprocessed_sample)
+postprocessed_action = postprocessor(action)
+```
+
+## What are Processors?
+
+In robotics, data comes in many forms: images from cameras, joint positions from sensors, text instructions from users, and more. Each type of data requires specific transformations before a model can use it effectively. Models need this data to be:
+
+- **Normalized**: Scaled to appropriate ranges for neural network processing
+- **Batched**: Organized with proper dimensions for batch processing
+- **Tokenized**: Text converted to numerical representations
+- **Device-placed**: Moved to the right hardware (CPU/GPU)
+- **Type-converted**: Cast to appropriate data types
+
+Processors handle these transformations through composable, reusable steps that can be chained together into pipelines. Think of them as a modular assembly line where each station performs a specific transformation on your data.
+
+## Core Concepts
+
+### EnvTransition: The Universal Data Container
+
+The `EnvTransition` is the fundamental data structure that flows through all processors.
+It's a typed dictionary that represents a complete robot-environment interaction:
+
+- **OBSERVATION**: All sensor data (images, states, proprioception)
+- **ACTION**: The action to execute or that was executed
+- **REWARD**: Reinforcement learning signal
+- **DONE/TRUNCATED**: Episode boundary indicators
+- **INFO**: Arbitrary metadata
+- **COMPLEMENTARY_DATA**: Task descriptions, indices, padding flags, inter-step data
+
+### ProcessorStep: The Building Block
+
+A `ProcessorStep` is a single transformation unit that processes transitions. It's an abstract base class with two required methods:
+
+```python
+from lerobot.processor import ProcessorStep, EnvTransition
+
+class MyProcessorStep(ProcessorStep):
+ """Example processor step - inherit and implement abstract methods."""
+
+ def __call__(self, transition: EnvTransition) -> EnvTransition:
+ """Transform the transition - REQUIRED abstract method."""
+ # Your processing logic here
+ return transition
+
+ def transform_features(self, features):
+ """Declare how this step transforms feature shapes/types - REQUIRED abstract method."""
+ return features # Most processors return features unchanged
+```
+
+`__call__` is the core of your processor step. It takes an `EnvTransition` and returns a modified `EnvTransition`.
+
+`transform_features` is used to declare how this step transforms feature shapes/types.
+
+### DataProcessorPipeline: The Generic Orchestrator
+
+The `DataProcessorPipeline[TInput, TOutput]` chains multiple `ProcessorStep` instances together:
+
+```python
+from lerobot.processor import RobotProcessorPipeline, PolicyProcessorPipeline
+
+# For robot hardware (unbatched data)
+robot_processor = RobotProcessorPipeline[RobotAction, RobotAction](
+ steps=[step1, step2, step3],
+ name="robot_pipeline"
+)
+
+# For model training/inference (batched data)
+policy_processor = PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
+ steps=[step1, step2, step3],
+ name="policy_pipeline"
+)
+```
+
+## RobotProcessorPipeline vs PolicyProcessorPipeline
+
+The key distinction is in the data structures they handle:
+
+| Aspect | RobotProcessorPipeline | PolicyProcessorPipeline |
+| --------------- | -------------------------------------------- | ---------------------------------------- |
+| **Input** | `dict[str, Any]` - Individual robot values | `dict[str, Any]` - Batched tensors |
+| **Output** | `dict[str, Any]` - Individual robot commands | `torch.Tensor` - Policy predictions |
+| **Use Case** | Real-time robot control | Model training/inference |
+| **Data Format** | Unbatched, heterogeneous | Batched, homogeneous |
+| **Examples** | `{"joint_1": 0.5}` | `{"observation.state": tensor([[0.5]])}` |
+
+**Use `RobotProcessorPipeline`** for robot hardware interfaces:
+
+```python
+# Robot data structures: dict[str, Any] for observations and actions
+robot_obs: dict[str, Any] = {
+ "joint_1": 0.5, # Individual joint values
+ "joint_2": -0.3,
+ "camera_0": image_array # Raw camera data
+}
+
+robot_action: dict[str, Any] = {
+ "joint_1": 0.2, # Target joint positions
+ "joint_2": 0.1,
+ "gripper": 0.8
+}
+```
+
+**Use `PolicyProcessorPipeline`** for model training and batch processing:
+
+```python
+# Policy data structures: batch dicts and tensors
+policy_batch: dict[str, Any] = {
+ "observation.state": torch.tensor([[0.5, -0.3]]), # Batched states
+ "observation.images.camera0": torch.tensor(...), # Batched images
+ "action": torch.tensor([[0.2, 0.1, 0.8]]) # Batched actions
+}
+
+policy_action: torch.Tensor = torch.tensor([[0.2, 0.1, 0.8]]) # Model output tensor
+```
+
+## Converter Functions
+
+LeRobot provides converter functions to bridge different data formats in `lerobot.processor.converters`. These functions handle the crucial translations between robot hardware data structures, policy model formats, and the internal `EnvTransition` representation that flows through processor pipelines.
+
+| Category | Function | Description |
+| ------------------------------ | ----------------------------- | ------------------------------- |
+| **Robot Hardware Converters** | `robot_action_to_transition` | Robot dict → EnvTransition |
+| | `observation_to_transition` | Robot obs → EnvTransition |
+| | `transition_to_robot_action` | EnvTransition → Robot dict |
+| **Policy/Training Converters** | `batch_to_transition` | Batch dict → EnvTransition |
+| | `transition_to_batch` | EnvTransition → Batch dict |
+| | `policy_action_to_transition` | Policy tensor → EnvTransition |
+| | `transition_to_policy_action` | EnvTransition → Policy tensor |
+| **Utilities** | `create_transition` | Build transitions with defaults |
+| | `identity_transition` | Pass-through converter |
+
+The key insight is that **robot hardware converters** work with individual values and dictionaries, while **policy/training converters** work with batched tensors and model outputs. The converter functions automatically handle the structural differences, so your processor steps can focus on the core transformations without worrying about data format compatibility.
+
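+As a sketch, here is how converters plug a robot-action pipeline together. It assumes `RobotProcessorPipeline` accepts the same `to_transition`/`to_output` arguments shown for the policy pipeline in the next section, and `my_clipping_step` stands in for any `ProcessorStep` you have defined:
+
+```python
+from typing import Any
+
+from lerobot.processor import RobotProcessorPipeline
+from lerobot.processor.converters import (
+    robot_action_to_transition,
+    transition_to_robot_action,
+)
+
+# Takes a raw robot action dict, runs the steps on an EnvTransition,
+# and hands back a robot action dict.
+action_pipeline = RobotProcessorPipeline[dict[str, Any], dict[str, Any]](
+    steps=[my_clipping_step],
+    to_transition=robot_action_to_transition,   # dict -> EnvTransition
+    to_output=transition_to_robot_action,       # EnvTransition -> dict
+)
+
+safe_action = action_pipeline({"joint_1": 0.2, "joint_2": 0.1, "gripper": 0.8})
+```
+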
+## Processor Examples
+
+The following examples demonstrate real-world processor configurations for policy training and inference.
+
+Here is an example processor for policy training and inference:
+
+```python
+# Training data preprocessing (optimized order for GPU performance)
+training_preprocessor = PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
+ steps=[
+ RenameObservationsProcessorStep(rename_map={}), # Standardize keys
+ AddBatchDimensionProcessorStep(), # Add batch dims
+ TokenizerProcessorStep(tokenizer_name="...", ...), # Tokenize language
+ DeviceProcessorStep(device="cuda"), # Move to GPU first
+ NormalizerProcessorStep(features=..., stats=...), # Normalize on GPU
+ ]
+)
+
+# Model output postprocessing
+training_postprocessor = PolicyProcessorPipeline[torch.Tensor, torch.Tensor](
+ steps=[
+ DeviceProcessorStep(device="cpu"), # Move to CPU
+ UnnormalizerProcessorStep(features=..., stats=...), # Denormalize
+    ],
+ to_transition=policy_action_to_transition,
+ to_output=transition_to_policy_action,
+)
+```
+
+### An interaction between a robot and a policy with processors
+
+The most common real-world scenario combines both pipeline types: robot hardware generates observations that need policy processing, and policy outputs need robot-compatible postprocessing:
+
+```python
+# Real deployment: Robot sensors → Model → Robot commands
+with torch.no_grad():
+ while not done:
+ raw_obs = robot.get_observation() # dict[str, Any]
+
+ # Add your robot observation to policy observation processor
+
+ policy_input = policy_preprocessor(raw_obs) # Batched dict
+
+ policy_output = policy.select_action(policy_input) # Policy tensor
+
+ policy_action = policy_postprocessor(policy_output)
+
+ # Add your robot action to policy action processor
+
+ robot.send_action(policy_action)
+```
+
+## Feature Contracts: Shape and Type Transformation
+
+Processors don't just transform data - they can also **change the data structure itself**. The `transform_features()` method declares these changes, which is crucial for dataset recording and policy creation.
+
+### Why Feature Contracts Matter
+
+When building datasets or policies, LeRobot needs to know:
+
+- **What data fields will exist** after processing
+- **What shapes and types** each field will have
+- **How to configure models** for the expected data structure
+
+```python
+# Example: A processor that adds velocity to observations
+class VelocityProcessor(ObservationProcessorStep):
+ def observation(self, obs):
+ new_obs = obs.copy()
+ if "observation.state" in obs:
+            # concatenate a computed velocity estimate to the state (doubling its size)
+            state = obs["observation.state"]
+            new_obs["observation.state"] = torch.cat([state, self._compute_velocity(state)], dim=-1)
+ return new_obs
+
+ def transform_features(self, features):
+ """Declare the new velocity field we're adding."""
+ state_feature = features[PipelineFeatureType.OBSERVATION].get("observation.state")
+ if state_feature:
+ double_shape = (state_feature.shape[0] * 2,) if state_feature.shape else (2,)
+ features[PipelineFeatureType.OBSERVATION]["observation.state"] = PolicyFeature(
+ type=FeatureType.STATE, shape=double_shape
+ )
+ return features
+```
+
+### Feature Specification Functions
+
+`create_initial_features()` and `aggregate_pipeline_dataset_features()` solve a critical dataset creation problem: determining the exact final data structure before any data is processed.
+Since processor pipelines can add new features (like velocity fields), change tensor shapes (like cropping images), or rename keys, datasets need to know the complete output specification upfront to allocate proper storage and define schemas.
+These functions work together by starting with robot hardware specifications (`create_initial_features()`) then simulating the entire pipeline transformation (`aggregate_pipeline_dataset_features()`) to compute the final feature dictionary that gets passed to `LeRobotDataset.create()`, ensuring perfect alignment between what processors output and what datasets expect to store.
+
+```python
+from lerobot.datasets.pipeline_features import aggregate_pipeline_dataset_features
+
+# Start with robot's raw features
+initial_features = create_initial_features(
+ observation=robot.observation_features, # {"joint_1.pos": float, "camera_0": (480,640,3)}
+ action=robot.action_features # {"joint_1.pos": float, "gripper.pos": float}
+)
+
+# Apply processor pipeline to compute final features
+final_features = aggregate_pipeline_dataset_features(
+ pipeline=my_processor_pipeline,
+ initial_features=initial_features,
+ use_videos=True
+)
+
+# Use for dataset creation
+dataset = LeRobotDataset.create(
+ repo_id="my_dataset",
+ features=final_features, # Knows exactly what data to expect
+ ...
+)
+```
+
+## Common Processor Steps
+
+LeRobot provides many registered processor steps. Here are the most commonly used core processors:
+
+### Essential Processors
+
+- **`normalizer_processor`**: Normalize observations/actions using dataset statistics (mean/std or min/max)
+- **`device_processor`**: Move tensors to CPU/GPU with optional dtype conversion
+- **`to_batch_processor`**: Add batch dimensions to transitions for model compatibility
+- **`rename_observations_processor`**: Rename observation keys using mapping dictionaries
+- **`tokenizer_processor`**: Tokenize natural language task descriptions into tokens and attention masks
+
+### Next Steps
+
+- **[Implement Your Own Processor](./implement_your_own_processor)** - Create custom processor steps
+- **[Debug Your Pipeline](./debug_processor_pipeline)** - Troubleshoot and optimize pipelines
+- **[Processors for Robots and Teleoperators](./processors_robots_teleop)** - Real-world integration patterns
+
+## Summary
+
+Processors solve the data translation problem in robotics by providing:
+
+- **Modular transformations**: Composable, reusable processing steps
+- **Type safety**: Generic pipelines that work with static type checkers
+- **Performance optimization**: GPU-accelerated operations
+- **Robot/Policy distinction**: Separate pipelines for different data structures
+- **Comprehensive ecosystem**: 30+ registered processors for common tasks
+
+The key insight: `RobotProcessorPipeline` handles unbatched robot hardware data, while `PolicyProcessorPipeline` handles batched model data. Choose the right tool for your data structure!
diff --git a/docs/source/koch.mdx b/docs/source/koch.mdx
new file mode 100644
index 00000000..813b9bd6
--- /dev/null
+++ b/docs/source/koch.mdx
@@ -0,0 +1,283 @@
+# Koch v1.1
+
+In the steps below, we explain how to assemble the Koch v1.1 robot.
+
+## Order and assemble the parts
+
+Follow the sourcing and assembling instructions provided in this [README](https://github.com/jess-moss/koch-v1-1). This will guide you through setting up both the follower and leader arms, as shown in the image below.
+
+For a visual walkthrough of the assembly process, you can refer to [this video tutorial](https://youtu.be/8nQIg9BwwTk).
+
+> [!WARNING]
+> Since the production of this video, we simplified the configuration phase. Because of this, two things differ from the instructions in that video:
+>
+> - Don't plug in all the motor cables right away; wait until you are instructed to do so in [Configure the motors](#configure-the-motors).
+> - Don't screw the controller board (PCB) onto the base right away; wait until you are instructed to do so in [Configure the motors](#configure-the-motors).
+
+## Install LeRobot 🤗
+
+To install LeRobot, follow our [Installation Guide](./installation).
+
+In addition to these instructions, you need to install the Dynamixel SDK:
+
+```bash
+pip install -e ".[dynamixel]"
+```
+
+## Configure the motors
+
+### 1. Find the USB ports associated with each arm
+
+To find the port for each bus servo adapter, run this script:
+
+```bash
+lerobot-find-port
+```
+
+
+
+
+Example output:
+
+```
+Finding all available ports for the MotorBus.
+['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
+Remove the USB cable from your MotorsBus and press Enter when done.
+
+[...Disconnect corresponding leader or follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/tty.usbmodem575E0032081
+Reconnect the USB cable.
+```
+
+Where the found port is: `/dev/tty.usbmodem575E0032081` corresponding to your leader or follower arm.
+
+
+
+
+On Linux, you might need to give access to the USB ports by running:
+
+```bash
+sudo chmod 666 /dev/ttyACM0
+sudo chmod 666 /dev/ttyACM1
+```
+
+Example output:
+
+```
+Finding all available ports for the MotorBus.
+['/dev/ttyACM0', '/dev/ttyACM1']
+Remove the usb cable from your MotorsBus and press Enter when done.
+
+[...Disconnect corresponding leader or follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/ttyACM1
+Reconnect the USB cable.
+```
+
+Where the found port is: `/dev/ttyACM1` corresponding to your leader or follower arm.
+
+
+
+
+### 2. Set the motors ids and baudrates
+
+Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For the communication to work properly between the motors and the controller, we first need to set a unique, different id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate. In order to talk to each other, the controller and all the motors need to be configured with the same baudrate.
+
+To that end, we first need to connect to each motor individually with the controller in order to set these. Since we will write these parameters in the non-volatile section of the motors' internal memory (EEPROM), we'll only need to do this once.
+
+If you are repurposing motors from another robot, you will probably also need to perform this step, as the ids and baudrate likely won't match.
+
+#### Follower
+
+Connect the USB cable from your computer and the 5V power supply to the follower arm's controller board. Then, run the following command or the API example with the port you found in the previous step. You'll also need to give your follower arm a name with the `id` parameter.
+
+For a visual reference on how to set the motor ids, please refer to [this video](https://huggingface.co/docs/lerobot/en/so101#setup-motors-video), where we follow the process for the SO101 arm.
+
+
+
+
+```bash
+lerobot-setup-motors \
+ --robot.type=koch_follower \
+ --robot.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
+```
+
+
+
+
+
+```python
+from lerobot.robots.koch_follower import KochFollower, KochFollowerConfig
+
+config = KochFollowerConfig(
+ port="/dev/tty.usbmodem575E0031751",
+ id="my_awesome_follower_arm",
+)
+follower = KochFollower(config)
+follower.setup_motors()
+```
+
+
+
+
+
+You should see the following instruction.
+
+```
+Connect the controller board to the 'gripper' motor only and press enter.
+```
+
+As instructed, plug in the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. When you press `[Enter]`, the script will automatically set the id and baudrate for that motor.
+
+
+Troubleshooting
+
+If you get an error at that point, check your cables and make sure they are plugged in properly:
+
+
+- Power supply
+- USB cable between your computer and the controller board
+- The 3-pin cable from the controller board to the motor
+
+If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).
+
+
+
+You should then see the following message:
+
+```
+'gripper' motor id set to 6
+```
+
+Followed by the next instruction:
+
+```
+Connect the controller board to the 'wrist_roll' motor only and press enter.
+```
+
+You can disconnect the 3-pin cable from the controller board, but you can leave it connected to the gripper motor on the other end, as it will already be in the right place. Now, plug another 3-pin cable into the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that the motor itself isn't connected to any other one.
+
+Repeat the operation for each motor as instructed.
+
+> [!TIP]
+> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.
+
+When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.
+
+#### Leader
+
+Do the same steps for the leader arm but modify the command or script accordingly.
+
+
+
+
+```bash
+lerobot-setup-motors \
+ --teleop.type=koch_leader \
+ --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
+```
+
+
+
+
+
+```python
+from lerobot.teleoperators.koch_leader import KochLeader, KochLeaderConfig
+
+config = KochLeaderConfig(
+ port="/dev/tty.usbmodem575E0031751",
+ id="my_awesome_leader_arm",
+)
+leader = KochLeader(config)
+leader.setup_motors()
+```
+
+
+
+
+
+## Calibrate
+
+Next, you'll need to calibrate your robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
+The calibration process is very important because it allows a neural network trained on one robot to work on another.
+
+#### Follower
+
+Run the following command or API example to calibrate the follower arm:
+
+
+
+
+```bash
+lerobot-calibrate \
+ --robot.type=koch_follower \
+ --robot.port=/dev/tty.usbmodem58760431551 \
+ --robot.id=my_awesome_follower_arm # <- Paste the port of your robot above and give it a unique id here
+```
+
+
+
+
+
+```python
+from lerobot.robots.koch_follower import KochFollowerConfig, KochFollower
+
+config = KochFollowerConfig(
+ port="/dev/tty.usbmodem585A0076891",
+ id="my_awesome_follower_arm",
+)
+
+follower = KochFollower(config)
+follower.connect(calibrate=False)
+follower.calibrate()
+follower.disconnect()
+```
+
+
+
+
+
+We unified the calibration method for most robots. Thus, the calibration steps for this Koch arm are the same as the steps for the SO100 and SO101. First, we move the robot to the position where each joint is in the middle of its range and press `Enter`. Second, we move all joints through their full range of motion. A video of this same process for the SO101 as reference can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).
+
+#### Leader
+
+Do the same steps to calibrate the leader arm by running the following command or API example:
+
+
+
+
+```bash
+lerobot-calibrate \
+ --teleop.type=koch_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \
+ --teleop.id=my_awesome_leader_arm # <- Paste the port of your leader arm above and give it a unique id here
+```
+
+
+
+
+
+```python
+from lerobot.teleoperators.koch_leader import KochLeaderConfig, KochLeader
+
+config = KochLeaderConfig(
+ port="/dev/tty.usbmodem575E0031751",
+ id="my_awesome_leader_arm",
+)
+
+leader = KochLeader(config)
+leader.connect(calibrate=False)
+leader.calibrate()
+leader.disconnect()
+```
+
+
+
+
+
+Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)
+
+> [!TIP]
+> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/docs/source/lekiwi.mdx b/docs/source/lekiwi.mdx
new file mode 100644
index 00000000..875394d7
--- /dev/null
+++ b/docs/source/lekiwi.mdx
@@ -0,0 +1,337 @@
+# LeKiwi
+
+In the steps below, we explain how to assemble the LeKiwi mobile robot.
+
+## Source the parts
+
+Follow this [README](https://github.com/SIGRobotics-UIUC/LeKiwi). It contains the bill of materials, with a link to source the parts, the instructions to 3D print the parts, and advice in case it's your first time printing or you don't own a 3D printer.
+
+### Wired version
+
+If you have the **wired** LeKiwi version, you can skip the installation of the Raspberry Pi and setting up SSH. You can also run all commands directly on your PC for both the LeKiwi scripts and the leader arm scripts for teleoperating.
+
+## Install software on Pi
+
+Now we have to set up the remote PC that runs on the LeKiwi robot. This is normally a Raspberry Pi, but it can be any PC that runs on 5V and has enough USB ports (2 or more) for the cameras and the motor control board.
+
+### Install OS
+
+To set up the Raspberry Pi and its SD card, see [Setup Pi](https://www.raspberrypi.com/documentation/computers/getting-started.html), which explains how to download the [Imager](https://www.raspberrypi.com/software/) to install Raspberry Pi OS or Ubuntu.
+
+### Setup SSH
+
+After setting up your Pi, you should enable and set up [SSH](https://www.raspberrypi.com/news/coding-on-raspberry-pi-remotely-with-visual-studio-code/) (Secure Shell Protocol) so you can log in to the Pi from your laptop without requiring a screen, keyboard, and mouse on the Pi. A great tutorial on how to do this can be found [here](https://www.raspberrypi.com/documentation/computers/remote-access.html#ssh). You can log in to your Pi from your Command Prompt (cmd) or, if you use VSCode, with [this](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh) extension.
+
+### Install LeRobot on Pi 🤗
+
+On your Raspberry Pi, install LeRobot using our [Installation Guide](./installation).
+
+In addition to these instructions, you need to install the Feetech SDK & ZeroMQ on your Pi:
+
+```bash
+pip install -e ".[lekiwi]"
+```
+
+## Install LeRobot locally
+
+If you have already installed LeRobot on your laptop/PC, you can skip this step; otherwise, please follow along as we do the same steps we did on the Pi.
+
+Follow our [Installation Guide](./installation)
+
+In addition to these instructions, you need to install the Feetech SDK & ZeroMQ on your laptop/pc:
+
+```bash
+pip install -e ".[lekiwi]"
+```
+
+Great :hugs:! You are now done installing LeRobot, and we can begin assembling the SO100/SO101 arms and the mobile base :robot:.
+Every time you now want to use LeRobot, you can go to the `~/lerobot` folder where we installed LeRobot and run one of the commands.
+
+# Step-by-Step Assembly Instructions
+
+First, we will assemble the two SO100/SO101 arms. One to attach to the mobile base and one for teleoperation. Then we will assemble the mobile base. The instructions for assembling can be found on these two pages:
+
+- [Assemble SO101](./so101#step-by-step-assembly-instructions)
+- [Assemble LeKiwi](https://github.com/SIGRobotics-UIUC/LeKiwi/blob/main/Assembly.md)
+
+### Find the USB ports associated with motor board
+
+To find the port for each bus servo adapter, run this script:
+
+```bash
+lerobot-find-port
+```
+
+
+
+
+Example output:
+
+```
+Finding all available ports for the MotorBus.
+['/dev/tty.usbmodem575E0032081']
+Remove the USB cable from your MotorsBus and press Enter when done.
+
+[...Disconnect corresponding leader or follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/tty.usbmodem575E0032081
+Reconnect the USB cable.
+```
+
+Here, the detected port is `/dev/tty.usbmodem575E0032081`, corresponding to your board.
+
+
+
+
+On Linux, you might need to give access to the USB ports by running:
+
+```bash
+sudo chmod 666 /dev/ttyACM0
+sudo chmod 666 /dev/ttyACM1
+```
+
+Example output:
+
+```
+Finding all available ports for the MotorBus.
+['/dev/ttyACM0']
+Remove the usb cable from your MotorsBus and press Enter when done.
+
+[...Disconnect corresponding leader or follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/ttyACM0
+Reconnect the USB cable.
+```
+
+Here, the detected port is `/dev/ttyACM0`, corresponding to your board.
+
+
+
+
+### Configure motors
+
+The instructions for configuring the motors can be found in the SO101 [docs](./so101#configure-the-motors). Besides the ids for the arm motors, we also need to set the motor ids for the mobile base. These need to be in a specific order to work. Below is an image of the motor ids and motor mounting positions for the mobile base. Note that we only use one motor control board on LeKiwi. This means the motor ids for the wheels are 7, 8, and 9.
+
+You can run this command to set up the motors for LeKiwi. It will first set up the arm motors (ids 6 down to 1) and then the wheel motors (ids 9, 8, 7):
+
+```bash
+lerobot-setup-motors \
+ --robot.type=lekiwi \
+ --robot.port=/dev/tty.usbmodem58760431551 # <- paste here the port found at previous step
+```
+
+
+
+### Troubleshoot communication
+
+If you are having trouble connecting to the Mobile SO100, follow these steps to diagnose and resolve the issue.
+
+#### 1. Verify IP Address Configuration
+
+Make sure that the correct IP for the Pi is used in the commands or in your code. To check the Raspberry Pi's IP address, run (on the Pi command line):
+
+```bash
+hostname -I
+```
+
+#### 2. Check if Pi is reachable from laptop/pc
+
+Try pinging the Raspberry Pi from your laptop:
+
+```bash
+ping <your_pi_ip_address>
+```
+
+If the ping fails:
+
+- Ensure the Pi is powered on and connected to the same network.
+- Check if SSH is enabled on the Pi.
+
+#### 3. Try SSH connection
+
+If you can't SSH into the Pi, it might not be properly connected. Use:
+
+```bash
+ssh <your_pi_user_name>@<your_pi_ip_address>
+```
+
+If you get a connection error:
+
+- Ensure SSH is enabled on the Pi by running:
+ ```bash
+ sudo raspi-config
+ ```
+ Then navigate to: **Interfacing Options -> SSH** and enable it.
+
+### Calibration
+
+Now we have to calibrate the leader arm and the follower arm. The wheel motors don't have to be calibrated.
+The calibration process is very important because it allows a neural network trained on one robot to work on another.
+
+### Calibrate follower arm (on mobile base)
+
+Make sure the arm is connected to the Raspberry Pi and run this script or API example (on the Raspberry Pi via SSH) to launch calibration of the follower arm:
+
+```bash
+lerobot-calibrate \
+ --robot.type=lekiwi \
+ --robot.id=my_awesome_kiwi # <- Give the robot a unique name
+```
+
+We unified the calibration method for most robots; thus, the calibration steps for this SO100 arm are the same as the steps for the Koch and SO101. First, we move the robot to the position where each joint is in the middle of its range and press `Enter`. Second, we move all joints through their full range of motion. A video of this same process for the SO101 as reference can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).
+
+### Wired version
+
+If you have the **wired** LeKiwi version, please run all commands on your laptop.
+
+### Calibrate leader arm
+
+Then calibrate the leader arm, which is attached to your laptop/PC, by running the following command or API example on your laptop:
+
+
+
+
+```bash
+lerobot-calibrate \
+ --teleop.type=so100_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \
+ --teleop.id=my_awesome_leader_arm # <- Paste the port of your leader arm above and give it a unique id here
+```
+
+
+
+
+
+```python
+from lerobot.teleoperators.so100_leader import SO100LeaderConfig, SO100Leader
+
+config = SO100LeaderConfig(
+ port="/dev/tty.usbmodem58760431551",
+ id="my_awesome_leader_arm",
+)
+
+leader = SO100Leader(config)
+leader.connect(calibrate=False)
+leader.calibrate()
+leader.disconnect()
+```
+
+
+
+
+
+## Teleoperate LeKiwi
+
+> [!TIP]
+> If you're using a Mac, you might need to give Terminal permission to access your keyboard for teleoperation. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.
+
+To teleoperate, SSH into your Raspberry Pi, then run `conda activate lerobot` followed by this command:
+
+```bash
+python -m lerobot.robots.lekiwi.lekiwi_host --robot.id=my_awesome_kiwi
+```
+
+Then, on your laptop, also run `conda activate lerobot` and run the API example. Make sure you set the correct `remote_ip` and `port` in `examples/lekiwi/teleoperate.py`.
+
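+The snippet below is only an illustration of the kind of values to adapt near the top of the example; the variable names are assumptions for this sketch, so match them to whatever the script actually defines:
+
+```python
+# Hypothetical sketch of the connection settings to adapt in examples/lekiwi/teleoperate.py
+remote_ip = "192.168.1.42"  # IP of the Raspberry Pi running lekiwi_host (check with `hostname -I`)
+port = 5555                 # command port of the host; the video stream uses port 5556
+```
+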
+```bash
+python examples/lekiwi/teleoperate.py
+```
+
+You should see something like this on your laptop: `[INFO] Connected to remote robot at tcp://172.17.133.91:5555 and video stream at tcp://172.17.133.91:5556.` Now you can move the leader arm and use the keyboard (w, a, s, d) to drive forward, left, backward, and right, (z, x) to turn left or right, and (r, f) to increase or decrease the speed of the mobile robot. There are three speed modes; see the tables below:
+
+| Speed Mode | Linear Speed (m/s) | Rotation Speed (deg/s) |
+| ---------- | ------------------ | ---------------------- |
+| Fast | 0.4 | 90 |
+| Medium | 0.25 | 60 |
+| Slow | 0.1 | 30 |
+
+| Key | Action |
+| --- | -------------- |
+| W | Move forward |
+| A | Move left |
+| S | Move backward |
+| D | Move right |
+| Z | Turn left |
+| X | Turn right |
+| R | Increase speed |
+| F | Decrease speed |
+
+> [!TIP]
+> If you use a different keyboard, you can change the keys for each command in the [`LeKiwiClientConfig`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/lekiwi/config_lekiwi.py).
+
+### Wired version
+
+If you have the **wired** LeKiwi version, please run all commands on your laptop.
+
+## Record a dataset
+
+Once you're familiar with teleoperation, you can record your first dataset.
+
+We use the Hugging Face Hub features to upload your dataset. If you haven't used the Hub before, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
+
+Add your token to the CLI by running this command:
+
+```bash
+huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
+```
+
+Then store your Hugging Face repository name in a variable:
+
+```bash
+HF_USER=$(huggingface-cli whoami | head -n 1)
+echo $HF_USER
+```
+
+Now you can record a dataset. To record episodes and upload your dataset to the Hub, execute this API example tailored for LeKiwi. Make sure to first adapt the `remote_ip`, `repo_id`, `port`, and `task` in the script. If you would like to run the script for longer, you can increase `NB_CYCLES_CLIENT_CONNECTION`.
+
+```bash
+python examples/lekiwi/record.py
+```
+
+#### Dataset upload
+
+Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test), whose URL you can obtain by running:
+
+```bash
+echo https://huggingface.co/datasets/${HF_USER}/so101_test
+```
+
+Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
+
+You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
+
+#### Tips for gathering data
+
+Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible on the cameras. A good rule of thumb is that you should be able to do the task yourself by only looking at the camera images.
+
+In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
+
+Avoid adding too much variation too quickly, as it may hinder your results.
+
+If you want to dive deeper into this important topic, you can check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.
+
+#### Troubleshooting:
+
+- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
+
+## Replay an episode
+
+To replay an episode, run the API example below. Make sure to change `remote_ip`, `port`, the dataset repo id, and the episode index.
+
+```bash
+python examples/lekiwi/replay.py
+```
+
+Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following the training part of this tutorial: [Getting started with real-world robots](./il_robots).
+
+## Evaluate your policy
+
+To evaluate your policy, run the `evaluate.py` API example. Make sure to change `remote_ip`, `port`, and the model path.
+
+```bash
+python examples/lekiwi/evaluate.py
+```
+
+> [!TIP]
+> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/docs/source/lerobot-dataset-v3.mdx b/docs/source/lerobot-dataset-v3.mdx
new file mode 100644
index 00000000..3521914f
--- /dev/null
+++ b/docs/source/lerobot-dataset-v3.mdx
@@ -0,0 +1,314 @@
+# LeRobotDataset v3.0
+
+`LeRobotDataset v3.0` is a standardized format for robot learning data. It provides unified access to multi-modal time-series data, sensorimotor signals and multi‑camera video, as well as rich metadata for indexing, search, and visualization on the Hugging Face Hub.
+
+This guide will show you how to:
+
+- Understand the v3.0 design and directory layout
+- Record a dataset and push it to the Hub
+- Load datasets for training with `LeRobotDataset`
+- Stream datasets without downloading using `StreamingLeRobotDataset`
+- Apply image transforms for data augmentation during training
+- Migrate existing `v2.1` datasets to `v3.0`
+
+## What’s new in `v3`
+
+- **File-based storage**: Many episodes per Parquet/MP4 file (v2 used one file per episode).
+- **Relational metadata**: Episode boundaries and lookups are resolved through metadata, not filenames.
+- **Hub-native streaming**: Consume datasets directly from the Hub with `StreamingLeRobotDataset`.
+- **Lower file-system pressure**: Fewer, larger files ⇒ faster initialization and fewer issues at scale.
+- **Unified organization**: Clean directory layout with consistent path templates across data and videos.
+
+## Installation
+
+`LeRobotDataset v3.0` will be included in `lerobot >= 0.4.0`.
+
+Until that stable release, you can use the main branch by following the [build from source instructions](./installation#from-source).
+
+## Record a dataset
+
+Run the command below to record a dataset with the SO-101 and push to the Hub:
+
+```bash
+lerobot-record \
+ --robot.type=so101_follower \
+ --robot.port=/dev/tty.usbmodem585A0076841 \
+ --robot.id=my_awesome_follower_arm \
+ --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
+ --teleop.type=so101_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \
+ --teleop.id=my_awesome_leader_arm \
+ --display_data=true \
+ --dataset.repo_id=${HF_USER}/record-test \
+ --dataset.num_episodes=5 \
+ --dataset.single_task="Grab the black cube"
+```
+
+See the [recording guide](./il_robots#record-a-dataset) for more details.
+
+## Format design
+
+A core v3 principle is **decoupling storage from the user API**: data is stored efficiently (few large files), while the public API exposes intuitive episode-level access.
+
+`v3` has three pillars:
+
+1. **Tabular data**: Low‑dimensional, high‑frequency signals (states, actions, timestamps) stored in **Apache Parquet**. Access is memory‑mapped or streamed via the `datasets` stack.
+2. **Visual data**: Camera frames concatenated and encoded into **MP4**. Frames from the same episode are grouped; videos are sharded per camera for practical sizes.
+3. **Metadata**: JSON/Parquet records describing schema (feature names, dtypes, shapes), frame rates, normalization stats, and **episode segmentation** (start/end offsets into shared Parquet/MP4 files).
+
+> To scale to millions of episodes, tabular rows and video frames from multiple episodes are **concatenated** into larger files. Episode‑specific views are reconstructed **via metadata**, not file boundaries.
+
+
+
+
+
+ From episode‑based to file‑based datasets
+
+
+
+
+### Directory layout (simplified)
+
+- **`meta/info.json`**: canonical schema (features, shapes/dtypes), FPS, codebase version, and **path templates** to locate data/video shards.
+- **`meta/stats.json`**: global feature statistics (mean/std/min/max) used for normalization; exposed as `dataset.meta.stats`.
+- **`meta/tasks.jsonl`**: natural‑language task descriptions mapped to integer IDs for task‑conditioned policies.
+- **`meta/episodes/`**: per‑episode records (lengths, tasks, offsets) stored as **chunked Parquet** for scalability.
+- **`data/`**: frame‑by‑frame **Parquet** shards; each file typically contains **many episodes**.
+- **`videos/`**: **MP4** shards per camera; each file typically contains **many episodes**.
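+
+As a rough picture, a v3.0 dataset on disk looks like this (shard and folder names below are illustrative placeholders; the exact layout is defined by the path templates in `meta/info.json`):
+
+```
+my_dataset/
+├── meta/
+│   ├── info.json          # schema, FPS, path templates
+│   ├── stats.json         # per-feature mean/std/min/max
+│   ├── tasks.jsonl        # task descriptions mapped to task ids
+│   └── episodes/          # chunked Parquet with per-episode records
+├── data/
+│   └── file-0000.parquet  # many episodes per file
+└── videos/
+    └── <camera_key>/
+        └── file-0000.mp4  # many episodes per file, sharded per camera
+```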
+
+## Load a dataset for training
+
+`LeRobotDataset` returns Python dictionaries of PyTorch tensors and integrates with `torch.utils.data.DataLoader`. Here is a code example showing its use:
+
+```python
+import torch
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+repo_id = "yaak-ai/L2D-v3"
+
+# 1) Load from the Hub (cached locally)
+dataset = LeRobotDataset(repo_id)
+
+# 2) Random access by index
+sample = dataset[100]
+print(sample)
+# {
+# 'observation.state': tensor([...]),
+# 'action': tensor([...]),
+# 'observation.images.front_left': tensor([C, H, W]),
+# 'timestamp': tensor(1.234),
+# ...
+# }
+
+# 3) Temporal windows via delta_timestamps (seconds relative to t)
+delta_timestamps = {
+ "observation.images.front_left": [-0.2, -0.1, 0.0] # 0.2s and 0.1s before current frame
+}
+
+dataset = LeRobotDataset(repo_id, delta_timestamps=delta_timestamps)
+
+# Accessing an index now returns a stack for the specified key(s)
+sample = dataset[100]
+print(sample["observation.images.front_left"].shape) # [T, C, H, W], where T=3
+
+# 4) Wrap with a DataLoader for training
+batch_size = 16
+data_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)
+
+device = "cuda" if torch.cuda.is_available() else "cpu"
+for batch in data_loader:
+ observations = batch["observation.state"].to(device)
+ actions = batch["action"].to(device)
+ images = batch["observation.images.front_left"].to(device)
+ # model.forward(batch)
+```
+
+## Stream a dataset (no downloads)
+
+Use `StreamingLeRobotDataset` to iterate directly from the Hub without local copies. This lets you stream large datasets without downloading them to disk or loading them into memory, and it is a key feature of the new dataset format.
+
+```python
+from lerobot.datasets.streaming_dataset import StreamingLeRobotDataset
+
+repo_id = "yaak-ai/L2D-v3"
+dataset = StreamingLeRobotDataset(repo_id) # streams directly from the Hub
+```
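+
+Since the streaming dataset is consumed iteratively, a minimal usage sketch (assuming the frame keys follow the same naming as in the non-streaming example above) looks like this:
+
+```python
+# Iterate over frames as they are streamed from the Hub (no full local download).
+for i, frame in enumerate(dataset):
+    print(frame["observation.state"].shape, frame["action"].shape)
+    if i == 4:  # only peek at a few frames in this sketch
+        break
+```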
+
+
+
+
+
+ Stream directly from the Hub for on‑the‑fly training.
+
+
+
+
+## Image transforms
+
+Image transforms are data augmentations applied to camera frames during training to improve model robustness and generalization. LeRobot supports various transforms including brightness, contrast, saturation, hue, and sharpness adjustments.
+
+### Using transforms during dataset creation/recording
+
+Currently, transforms are applied during **training time only**, not during recording. When you create or record a dataset, the raw images are stored without transforms. This allows you to experiment with different augmentations later without re-recording data.
+
+### Adding transforms to existing datasets (API)
+
+Use the `image_transforms` parameter when loading a dataset for training:
+
+```python
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.transforms import ImageTransforms, ImageTransformsConfig, ImageTransformConfig
+
+# Option 1: Use default transform configuration (disabled by default)
+transforms_config = ImageTransformsConfig(
+ enable=True, # Enable transforms
+ max_num_transforms=3, # Apply up to 3 transforms per frame
+ random_order=False, # Apply in standard order
+)
+transforms = ImageTransforms(transforms_config)
+
+dataset = LeRobotDataset(
+ repo_id="your-username/your-dataset",
+ image_transforms=transforms
+)
+
+# Option 2: Create custom transform configuration
+custom_transforms_config = ImageTransformsConfig(
+ enable=True,
+ max_num_transforms=2,
+ random_order=True,
+ tfs={
+ "brightness": ImageTransformConfig(
+ weight=1.0,
+ type="ColorJitter",
+ kwargs={"brightness": (0.7, 1.3)} # Adjust brightness range
+ ),
+ "contrast": ImageTransformConfig(
+ weight=2.0, # Higher weight = more likely to be selected
+ type="ColorJitter",
+ kwargs={"contrast": (0.8, 1.2)}
+ ),
+ "sharpness": ImageTransformConfig(
+ weight=0.5, # Lower weight = less likely to be selected
+ type="SharpnessJitter",
+ kwargs={"sharpness": (0.3, 2.0)}
+ ),
+ }
+)
+
+dataset = LeRobotDataset(
+ repo_id="your-username/your-dataset",
+ image_transforms=ImageTransforms(custom_transforms_config)
+)
+
+# Option 3: Use pure torchvision transforms
+from torchvision.transforms import v2
+
+torchvision_transforms = v2.Compose([
+ v2.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
+ v2.GaussianBlur(kernel_size=3, sigma=(0.1, 2.0)),
+])
+
+dataset = LeRobotDataset(
+ repo_id="your-username/your-dataset",
+ image_transforms=torchvision_transforms
+)
+```
+
+### Available transform types
+
+LeRobot provides several transform types:
+
+- **`ColorJitter`**: Adjusts brightness, contrast, saturation, and hue
+- **`SharpnessJitter`**: Randomly adjusts image sharpness
+- **`Identity`**: No transformation (useful for testing)
+
+You can also use any `torchvision.transforms.v2` transform by passing it directly to the `image_transforms` parameter.
+
+### Configuration options
+
+- **`enable`**: Enable/disable transforms (default: `False`)
+- **`max_num_transforms`**: Maximum number of transforms applied per frame (default: `3`)
+- **`random_order`**: Apply transforms in random order vs. standard order (default: `False`)
+- **`weight`**: Sampling probability for each transform (higher = more likely; if the weights don't sum to 1, they are normalized)
+- **`kwargs`**: Transform-specific parameters (e.g., brightness range)
+
+### Visualizing transforms
+
+Use the visualization script to preview how transforms affect your data:
+
+```bash
+lerobot-imgtransform-viz \
+ --repo-id=your-username/your-dataset \
+ --output-dir=./transform_examples \
+ --n-examples=5
+```
+
+This saves example images showing the effect of each transform, helping you tune parameters.
+
+### Best practices
+
+- **Start conservative**: Begin with small ranges (e.g., brightness 0.9-1.1) and increase gradually
+- **Test first**: Use the visualization script to ensure transforms look reasonable
+- **Monitor training**: Strong augmentations can hurt performance if too aggressive
+- **Match your domain**: If your robot operates in varying lighting, use brightness/contrast transforms
+- **Combine wisely**: Using too many transforms simultaneously can make training unstable
+
+## Migrate `v2.1` → `v3.0`
+
+A converter aggregates per‑episode files into larger shards and writes episode offsets/metadata. Convert your dataset using the instructions below.
+
+```bash
+# Pre-release build with v3 support:
+pip install "https://github.com/huggingface/lerobot/archive/33cad37054c2b594ceba57463e8f11ee374fa93c.zip"
+
+# Convert an existing v2.1 dataset hosted on the Hub:
+python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<your-dataset-repo-id>
+```
+
+**What it does**
+
+- Aggregates parquet files: `episode-0000.parquet`, `episode-0001.parquet`, … → **`file-0000.parquet`**, …
+- Aggregates mp4 files: `episode-0000.mp4`, `episode-0001.mp4`, … → **`file-0000.mp4`**, …
+- Updates `meta/episodes/*` (chunked Parquet) with per‑episode lengths, tasks, and byte/frame offsets.
+
+## Common Issues
+
+### Always call `finalize()` before pushing
+
+When creating or recording datasets, you **must** call `dataset.finalize()` to properly close parquet writers. See the [PR #1903](https://github.com/huggingface/lerobot/pull/1903) for more details.
+
+```python
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+# Create dataset and record episodes
+dataset = LeRobotDataset.create(...)
+
+for episode in range(num_episodes):
+ # Record frames
+ for frame in episode_data:
+ dataset.add_frame(frame)
+ dataset.save_episode()
+
+# Call finalize() when done recording and before push_to_hub()
+dataset.finalize() # Closes parquet writers, writes metadata footers
+dataset.push_to_hub()
+```
+
+**Why is this necessary?**
+
+Dataset v3.0 uses incremental parquet writing with buffered metadata for efficiency. The `finalize()` method:
+
+- Flushes any buffered episode metadata to disk
+- Closes parquet writers to write footer metadata; otherwise the parquet files would be corrupt
+- Ensures the dataset is valid for loading
+
+Without calling `finalize()`, your parquet files will be incomplete and the dataset won't load properly.
diff --git a/docs/source/libero.mdx b/docs/source/libero.mdx
new file mode 100644
index 00000000..f0448c3f
--- /dev/null
+++ b/docs/source/libero.mdx
@@ -0,0 +1,171 @@
+# LIBERO
+
+**LIBERO** is a benchmark designed to study **lifelong robot learning**. The idea is that robots won’t just be pretrained once in a factory, they’ll need to keep learning and adapting with their human users over time. This ongoing adaptation is called **lifelong learning in decision making (LLDM)**, and it’s a key step toward building robots that become truly personalized helpers.
+
+- 📄 [LIBERO paper](https://arxiv.org/abs/2306.03310)
+- 💻 [Original LIBERO repo](https://github.com/Lifelong-Robot-Learning/LIBERO)
+
+To make progress on this challenge, LIBERO provides a set of standardized tasks that focus on **knowledge transfer**: how well a robot can apply what it has already learned to new situations. By evaluating on LIBERO, different algorithms can be compared fairly and researchers can build on each other’s work.
+
+LIBERO includes **five task suites**:
+
+- **LIBERO-Spatial (`libero_spatial`)** – tasks that require reasoning about spatial relations.
+- **LIBERO-Object (`libero_object`)** – tasks centered on manipulating different objects.
+- **LIBERO-Goal (`libero_goal`)** – goal-conditioned tasks where the robot must adapt to changing targets.
+- **LIBERO-90 (`libero_90`)** – 90 short-horizon tasks from the LIBERO-100 collection.
+- **LIBERO-Long (`libero_10`)** – 10 long-horizon tasks from the LIBERO-100 collection.
+
+Together, these suites cover **130 tasks**, ranging from simple object manipulations to complex multi-step scenarios. LIBERO is meant to grow over time, and to serve as a shared benchmark where the community can test and improve lifelong learning algorithms.
+
+
+
+## Evaluating with LIBERO
+
+At **LeRobot**, we ported [LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO) into our framework and used it mainly to **evaluate [SmolVLA](https://huggingface.co/docs/lerobot/en/smolvla)**, our lightweight Vision-Language-Action model.
+
+LIBERO is now part of our **multi-eval supported simulation**, meaning you can benchmark your policies either on a **single suite of tasks** or across **multiple suites at once** with just a flag.
+
+To install LIBERO, after following the official LeRobot installation instructions, just run:
+`pip install -e ".[libero]"`
+
+> [!NOTE]
+> For lerobot 0.4.0, if you want to install the `libero` extra, you will have to run: `pip install "lerobot[libero]@git+https://github.com/huggingface/lerobot.git"`.
+>
+> This will be solved in the next patch release.
+
+### Single-suite evaluation
+
+Evaluate a policy on one LIBERO suite:
+
+```bash
+lerobot-eval \
+ --policy.path="your-policy-id" \
+ --env.type=libero \
+ --env.task=libero_object \
+ --eval.batch_size=2 \
+ --eval.n_episodes=3
+```
+
+- `--env.task` picks the suite (`libero_object`, `libero_spatial`, etc.).
+- `--eval.batch_size` controls how many environments run in parallel.
+- `--eval.n_episodes` sets how many episodes to run in total.
+
+---
+
+### Multi-suite evaluation
+
+Benchmark a policy across multiple suites at once:
+
+```bash
+lerobot-eval \
+ --policy.path="your-policy-id" \
+ --env.type=libero \
+ --env.task=libero_object,libero_spatial \
+ --eval.batch_size=1 \
+ --eval.n_episodes=2
+```
+
+- Pass a comma-separated list to `--env.task` for multi-suite evaluation.
+
+### Policy inputs and outputs
+
+When using LIBERO through LeRobot, policies interact with the environment via **observations** and **actions**:
+
+- **Observations**
+ - `observation.state` – proprioceptive features (agent state).
+ - `observation.images.image` – main camera view (`agentview_image`).
+ - `observation.images.image2` – wrist camera view (`robot0_eye_in_hand_image`).
+
+  ⚠️ **Note:** LeRobot enforces the `.images.*` prefix for any multi-modal visual features. Always ensure that your policy config `input_features` uses the same naming keys, and that your dataset metadata keys follow this convention during evaluation.
+  If your data contains different keys, you must rename the observations to match what the policy expects (see the sketch after this list), since the naming keys are encoded inside the normalization statistics layer.
+  This will be fixed with the upcoming Pipeline PR.
+
+- **Actions**
+ - Continuous control values in a `Box(-1, 1, shape=(7,))` space.
+
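+If your data uses different keys, a minimal remapping sketch could look like the following; the source key names on the left are assumptions about your dataset, and the targets are the convention described above:
+
+```python
+# Hypothetical example: rename visual observation keys in a batch dict so they
+# match the `observation.images.*` convention expected by the policy.
+KEY_MAP = {
+    "observation.agentview_image": "observation.images.image",            # assumed source key
+    "observation.robot0_eye_in_hand_image": "observation.images.image2",  # assumed source key
+}
+
+def remap_observation_keys(batch: dict) -> dict:
+    """Return a new batch with visual keys renamed to the LeRobot convention."""
+    return {KEY_MAP.get(key, key): value for key, value in batch.items()}
+```
+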
+We also provide a notebook for quick testing: _Training with LIBERO_.
+
+## Training with LIBERO
+
+When training on LIBERO tasks, make sure your dataset parquet and metadata keys follow the LeRobot convention.
+
+The environment expects:
+
+- `observation.state` → 8-dim agent state
+- `observation.images.image` → main camera (`agentview_image`)
+- `observation.images.image2` → wrist camera (`robot0_eye_in_hand_image`)
+
+⚠️ Cleaning the dataset upfront is **cleaner and more efficient** than remapping keys inside the code.
+To avoid potential mismatches and key errors, we provide a **preprocessed LIBERO dataset** that is fully compatible with the current LeRobot codebase and requires no additional manipulation:
+👉 [HuggingFaceVLA/libero](https://huggingface.co/datasets/HuggingFaceVLA/libero)
+
+For reference, here is the **original dataset** published by Physical Intelligence:
+👉 [physical-intelligence/libero](https://huggingface.co/datasets/physical-intelligence/libero)
+
+---
+
+### Example training command
+
+```bash
+lerobot-train \
+ --policy.type=smolvla \
+ --policy.repo_id=${HF_USER}/libero-test \
+ --policy.load_vlm_weights=true \
+ --dataset.repo_id=HuggingFaceVLA/libero \
+ --env.type=libero \
+ --env.task=libero_10 \
+ --output_dir=./outputs/ \
+ --steps=100000 \
+ --batch_size=4 \
+ --eval.batch_size=1 \
+ --eval.n_episodes=1 \
+ --eval_freq=1000 \
+```
+
+---
+
+### Note on rendering
+
+LeRobot uses MuJoCo for simulation. You need to set the rendering backend before training or evaluation:
+
+- `export MUJOCO_GL=egl` → for headless servers (e.g. HPC, cloud)
+
+## Reproducing π₀.₅ results
+
+We reproduce the results of π₀.₅ on the LIBERO benchmark using the LeRobot implementation. We take the Physical Intelligence LIBERO base model (`pi05_libero`) and finetune for an additional 6k steps in bfloat16, with batch size of 256 on 8 H100 GPUs using the [HuggingFace LIBERO dataset](https://huggingface.co/datasets/HuggingFaceVLA/libero).
+
+The finetuned model can be found here:
+
+- **π₀.₅ LIBERO**: [lerobot/pi05_libero_finetuned](https://huggingface.co/lerobot/pi05_libero_finetuned)
+
+We then evaluate the finetuned model using the LeRobot LIBERO implementation, by running the following command:
+
+```bash
+lerobot-eval \
+ --env.type=libero \
+ --env.task=libero_spatial,libero_object,libero_goal,libero_10 \
+ --eval.batch_size=1 \
+ --eval.n_episodes=10 \
+ --policy.path=pi05_libero_finetuned \
+ --policy.n_action_steps=10 \
+ --output_dir=./eval_logs/ \
+ --env.max_parallel_tasks=1
+```
+
+**Note:** We set `n_action_steps=10`, similar to the original OpenPI implementation.
+
+### Results
+
+We obtain the following results on the LIBERO benchmark:
+
+| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
+| -------- | -------------- | ------------- | ----------- | --------- | -------- |
+| **π₀.₅** | 97.0 | 99.0 | 98.0 | 96.0 | **97.5** |
+
+These results are consistent with the original [results](https://github.com/Physical-Intelligence/openpi/tree/main/examples/libero#results) reported by Physical Intelligence:
+
+| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
+| -------- | -------------- | ------------- | ----------- | --------- | --------- |
+| **π₀.₅** | 98.8 | 98.2 | 98.0 | 92.4 | **96.85** |
diff --git a/docs/source/metaworld.mdx b/docs/source/metaworld.mdx
new file mode 100644
index 00000000..da90bd51
--- /dev/null
+++ b/docs/source/metaworld.mdx
@@ -0,0 +1,80 @@
+# Meta-World
+
+Meta-World is a well-designed, open-source simulation benchmark for multi-task and meta reinforcement learning in continuous-control robotic manipulation. It gives researchers a shared, realistic playground to test whether algorithms can _learn many different tasks_ and _generalize quickly to new ones_ — two central challenges for real-world robotics.
+
+- 📄 [MetaWorld paper](https://arxiv.org/pdf/1910.10897)
+- 💻 [Original MetaWorld repo](https://github.com/Farama-Foundation/Metaworld)
+
+
+
+## Why Meta-World matters
+
+- **Diverse, realistic tasks.** Meta-World bundles a large suite of simulated manipulation tasks (50 in the MT50 suite) using everyday objects and a common tabletop Sawyer arm. This diversity exposes algorithms to a wide variety of dynamics, contacts and goal specifications while keeping a consistent control and observation structure.
+- **Focus on generalization and multi-task learning.** By evaluating across task distributions that share structure but differ in goals and objects, Meta-World reveals whether an agent truly learns transferable skills rather than overfitting to a narrow task.
+- **Standardized evaluation protocol.** It provides clear evaluation modes and difficulty splits, so different methods can be compared fairly across easy, medium, hard and very-hard regimes.
+- **Empirical insight.** Past evaluations on Meta-World show impressive progress on some fronts, but also highlight that current multi-task and meta-RL methods still struggle with large, diverse task sets. That gap points to important research directions.
+
+## What it enables in LeRobot
+
+In LeRobot, you can evaluate any policy or vision-language-action (VLA) model on Meta-World tasks and get a clear success-rate measure. The integration is designed to be straightforward:
+
+- We provide a LeRobot-ready dataset for Meta-World (MT50) on the HF Hub: `https://huggingface.co/datasets/lerobot/metaworld_mt50`.
+ - This dataset is formatted for the MT50 evaluation that uses all 50 tasks (the most challenging multi-task setting).
+  - MT50 gives the policy a one-hot task vector and uses fixed object/goal positions for consistency (see the sketch after this list).
+
+- Task descriptions and the exact keys required for evaluation are available in the repo/dataset — use these to ensure your policy outputs the right success signals.
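+
+As a reference, the one-hot task conditioning used by MT50 can be built with plain PyTorch. This is only a minimal sketch; where the vector is fed to your policy depends on your setup:
+
+```python
+import torch
+
+num_tasks = 50  # MT50 covers all 50 Meta-World tasks
+task_id = 7     # index of the current task (assumed to come from your env/dataset)
+
+# One-hot task conditioning vector of shape (50,)
+task_one_hot = torch.nn.functional.one_hot(torch.tensor(task_id), num_classes=num_tasks).float()
+```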
+
+## Quick start, train a SmolVLA policy on Meta-World
+
+Example command to train a SmolVLA policy on a subset of tasks:
+
+```bash
+lerobot-train \
+ --policy.type=smolvla \
+ --policy.repo_id=${HF_USER}/metaworld-test \
+ --policy.load_vlm_weights=true \
+ --dataset.repo_id=lerobot/metaworld_mt50 \
+ --env.type=metaworld \
+ --env.task=assembly-v3,dial-turn-v3,handle-press-side-v3 \
+ --output_dir=./outputs/ \
+ --steps=100000 \
+ --batch_size=4 \
+ --eval.batch_size=1 \
+ --eval.n_episodes=1 \
+ --eval_freq=1000
+```
+
+Notes:
+
+- `--env.task` accepts explicit task lists (comma separated) or difficulty groups (e.g., `env.task="hard"`).
+- Adjust `batch_size`, `steps`, and `eval_freq` to match your compute budget.
+- **Gymnasium Assertion Error**: if you encounter an error like
+ `AssertionError: ['human', 'rgb_array', 'depth_array']` when running MetaWorld environments, this comes from a mismatch between MetaWorld and your Gymnasium version.
+ We recommend using:
+
+```bash
+ pip install "gymnasium==1.1.0"
+```
+
+to ensure proper compatibility.
+
+## Quick start — evaluate a trained policy
+
+To evaluate a trained policy on the Meta-World medium difficulty split:
+
+```bash
+lerobot-eval \
+ --policy.path="your-policy-id" \
+ --env.type=metaworld \
+ --env.task=medium \
+ --eval.batch_size=1 \
+ --eval.n_episodes=2
+```
+
+This will run episodes and return per-task success rates using the standard Meta-World evaluation keys.
+
+## Practical tips
+
+- If you care about generalization, run on the full MT50 suite — it’s intentionally challenging and reveals strengths/weaknesses better than a few narrow tasks.
+- Use the one-hot task conditioning for multi-task training (MT10 / MT50 conventions) so policies have explicit task context.
+- Inspect the dataset task descriptions and the `info["is_success"]` keys when writing post-processing or logging so your success metrics line up with the benchmark (see the sketch below).
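+
+Here is a minimal sketch of the kind of success logging meant above; it assumes a gymnasium-style environment and a policy object with a `select_action` method (both are placeholders for your own setup), and only illustrates where `info["is_success"]` fits:
+
+```python
+def success_rate(env, policy, n_episodes: int = 10) -> float:
+    """Roll out `policy` and return the fraction of episodes flagged as successful."""
+    successes = 0
+    for _ in range(n_episodes):
+        obs, info = env.reset()
+        done = False
+        while not done:
+            action = policy.select_action(obs)  # placeholder for your policy's inference call
+            obs, reward, terminated, truncated, info = env.step(action)
+            done = terminated or truncated
+        successes += int(info.get("is_success", False))
+    return successes / n_episodes
+```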
diff --git a/docs/source/multi_gpu_training.mdx b/docs/source/multi_gpu_training.mdx
new file mode 100644
index 00000000..122670f6
--- /dev/null
+++ b/docs/source/multi_gpu_training.mdx
@@ -0,0 +1,125 @@
+# Multi-GPU Training
+
+This guide shows you how to train policies on multiple GPUs using [Hugging Face Accelerate](https://huggingface.co/docs/accelerate).
+
+## Installation
+
+First, ensure you have accelerate installed:
+
+```bash
+pip install accelerate
+```
+
+## Training with Multiple GPUs
+
+You can launch training in two ways:
+
+### Option 1: Without config (specify parameters directly)
+
+You can specify all parameters directly in the command without running `accelerate config`:
+
+```bash
+accelerate launch \
+ --multi_gpu \
+ --num_processes=2 \
+ $(which lerobot-train) \
+ --dataset.repo_id=${HF_USER}/my_dataset \
+ --policy.type=act \
+ --policy.repo_id=${HF_USER}/my_trained_policy \
+ --output_dir=outputs/train/act_multi_gpu \
+ --job_name=act_multi_gpu \
+ --wandb.enable=true
+```
+
+**Key accelerate parameters:**
+
+- `--multi_gpu`: Enable multi-GPU training
+- `--num_processes=2`: Number of GPUs to use
+- `--mixed_precision=fp16`: Use fp16 mixed precision (or `bf16` if supported)
+
+### Option 2: Using accelerate config
+
+If you prefer to save your configuration, you can optionally configure accelerate for your hardware setup by running:
+
+```bash
+accelerate config
+```
+
+This interactive setup will ask you questions about your training environment (number of GPUs, mixed precision settings, etc.) and saves the configuration for future use. For a simple multi-GPU setup on a single machine, you can use these recommended settings:
+
+- Compute environment: This machine
+- Number of machines: 1
+- Number of processes: (number of GPUs you want to use)
+- GPU ids to use: (leave empty to use all)
+- Mixed precision: fp16 or bf16 (recommended for faster training)
+
+Then launch training with:
+
+```bash
+accelerate launch $(which lerobot-train) \
+ --dataset.repo_id=${HF_USER}/my_dataset \
+ --policy.type=act \
+ --policy.repo_id=${HF_USER}/my_trained_policy \
+ --output_dir=outputs/train/act_multi_gpu \
+ --job_name=act_multi_gpu \
+ --wandb.enable=true
+```
+
+## How It Works
+
+When you launch training with accelerate:
+
+1. **Automatic detection**: LeRobot automatically detects if it's running under accelerate
+2. **Data distribution**: Your batch is automatically split across GPUs
+3. **Gradient synchronization**: Gradients are synchronized across GPUs during backpropagation
+4. **Single process logging**: Only the main process logs to wandb and saves checkpoints
+
+## Learning Rate and Training Steps Scaling
+
+**Important:** LeRobot does **NOT** automatically scale learning rates or training steps based on the number of GPUs. This gives you full control over your training hyperparameters.
+
+### Why No Automatic Scaling?
+
+Many distributed training frameworks automatically scale the learning rate by the number of GPUs (e.g., `lr = base_lr × num_gpus`).
+However, LeRobot keeps the learning rate exactly as you specify it.
+
+### When and How to Scale
+
+If you want to scale your hyperparameters when using multiple GPUs, you should do it manually:
+
+**Learning Rate Scaling:**
+
+```bash
+# Example: 2 GPUs with linear LR scaling
+# Base LR: 1e-4, with 2 GPUs -> 2e-4
+accelerate launch --num_processes=2 $(which lerobot-train) \
+ --optimizer.lr=2e-4 \
+ --dataset.repo_id=lerobot/pusht \
+ --policy=act
+```
+
+**Training Steps Scaling:**
+
+Since the effective batch size `bs` increases with multiple GPUs (batch_size × num_gpus), you may want to reduce the number of training steps proportionally:
+
+```bash
+# Example: 2 GPUs with effective batch size 2x larger
+# Original: batch_size=8, steps=100000
+# With 2 GPUs: batch_size=8 (16 in total), steps=50000
+accelerate launch --num_processes=2 $(which lerobot-train) \
+ --batch_size=8 \
+ --steps=50000 \
+ --dataset.repo_id=lerobot/pusht \
+ --policy=act
+```
+
+## Notes
+
+- The `--policy.use_amp` flag in `lerobot-train` is only used when **not** running with accelerate. When using accelerate, mixed precision is controlled by accelerate's configuration.
+- Training logs, checkpoints, and hub uploads are only done by the main process to avoid conflicts. Non-main processes have console logging disabled to prevent duplicate output.
+- The effective batch size is `batch_size × num_gpus`. If you use 4 GPUs with `--batch_size=8`, your effective batch size is 32.
+- Learning rate scheduling is handled correctly across multiple processes—LeRobot sets `step_scheduler_with_optimizer=False` to prevent accelerate from adjusting scheduler steps based on the number of processes.
+- When saving or pushing models, LeRobot automatically unwraps the model from accelerate's distributed wrapper to ensure compatibility.
+- WandB integration automatically initializes only on the main process, preventing multiple runs from being created.
+
+For more advanced configurations and troubleshooting, see the [Accelerate documentation](https://huggingface.co/docs/accelerate). If you want to learn more about how to train on a large number of GPUs, checkout this awesome guide: [Ultrascale Playbook](https://huggingface.co/spaces/nanotron/ultrascale-playbook).
diff --git a/docs/source/notebooks.mdx b/docs/source/notebooks.mdx
new file mode 100644
index 00000000..6a9c3b10
--- /dev/null
+++ b/docs/source/notebooks.mdx
@@ -0,0 +1,29 @@
+# 🤗 LeRobot Notebooks
+
+This repository contains example notebooks for using LeRobot. These notebooks demonstrate how to train policies on real or simulation datasets using standardized policies.
+
+---
+
+### Training ACT
+
+[ACT](https://huggingface.co/papers/2304.13705) (Action Chunking Transformer) is a transformer-based policy architecture for imitation learning that processes robot states and camera inputs to generate smooth, chunked action sequences.
+
+We provide a ready-to-run Google Colab notebook to help you train ACT policies using datasets from the Hugging Face Hub, with optional logging to Weights & Biases.
+
+| Notebook | Colab |
+| :------------------------------------------------------------------------------------------------------ | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [Train ACT with LeRobot](https://github.com/huggingface/notebooks/blob/main/lerobot/training-act.ipynb) | [Open in Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-act.ipynb) |
+
+Expected training time for 100k steps: ~1.5 hours on an NVIDIA A100 GPU with batch size of `64`.
+
+### Training SmolVLA
+
+[SmolVLA](https://huggingface.co/papers/2506.01844) is a small but efficient Vision-Language-Action model. It is compact in size, with 450M parameters, and is developed by Hugging Face.
+
+We provide a ready-to-run Google Colab notebook to help you train SmolVLA policies using datasets from the Hugging Face Hub, with optional logging to Weights & Biases.
+
+| Notebook | Colab |
+| :-------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| [Train SmolVLA with LeRobot](https://github.com/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb) | [Open in Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb) |
+
+Expected training time for 20k steps: ~5 hours on an NVIDIA A100 GPU with batch size of `64`.
diff --git a/docs/source/phone_teleop.mdx b/docs/source/phone_teleop.mdx
new file mode 100644
index 00000000..76e3c367
--- /dev/null
+++ b/docs/source/phone_teleop.mdx
@@ -0,0 +1,191 @@
+# Phone
+
+Use your phone (iOS or Android) to control your robot.
+
+**In this guide you'll learn:**
+
+- How to connect an iOS/Android phone
+- How phone pose is mapped to robot end‑effector (EE) targets
+- How to tweak safety limits, gripper control, and IK settings
+
+To use your phone to control your robot, install the relevant dependencies with:
+
+```bash
+pip install lerobot[phone]
+```
+
+## Get started
+
+### Supported platforms
+
+- iOS: Uses the HEBI Mobile I/O app (ARKit pose + buttons). Download the app first and open it; the examples will discover it on your network and stream the phone's pose and inputs.
+- Android: Uses the `teleop` package (WebXR). When you start the Python process, it prints a local URL. Open the link on your phone, tap Start, then use Move to stream pose.
+
+Links:
+
+- Android WebXR library: [`teleop` on PyPI](https://pypi.org/project/teleop/)
+- iOS app: [HEBI Mobile I/O](https://docs.hebi.us/tools.html#mobile-io)
+
+### Phone orientation and controls
+
+- Orientation: hold the phone with the screen facing up and the top edge pointing in the same direction as the robot gripper. This ensures calibration aligns the phone’s frame with the robot frame so that motion feels natural; see the image below for reference.
+- Enable/disable:
+ - iOS: Hold `B1` to enable teleoperation, release to stop. The first press captures a reference pose.
+ - Android: Press and hold the `Move` button, release to stop. The first press captures a reference pose.
+- Gripper control:
+ - iOS: Analog input `A3` controls the gripper as velocity input.
+ - Android: Buttons `A` and `B` act like increment/decrement (A opens, B closes). You can tune velocity in the `GripperVelocityToJoint` step.
+
+
+
+### Step 1: Choose the platform
+
+Modify the examples to use `PhoneOS.IOS` or `PhoneOS.ANDROID` in `PhoneConfig`. The API is identical across platforms, only the input source differs. All examples are under `examples/` and have `phone_so100_*.py` variants.
+
+Teleoperation example:
+
+```examples/phone_so100_teleop.py
+from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
+
+teleop_config = PhoneConfig(phone_os=PhoneOS.IOS) # or PhoneOS.ANDROID
+teleop_device = Phone(teleop_config)  # `Phone` is imported from the phone teleoperator package in the full example
+```
+
+### Step 2: Connect and calibrate
+
+When `Phone(teleop_config)` is created and `connect()` is called, calibration is prompted automatically. Hold the phone in the orientation described above, then:
+
+- iOS: press and hold `B1` to capture the reference pose.
+- Android: press `Move` button on the WebXR page to capture the reference pose.
+
+Why calibrate? We capture the current pose so that subsequent poses are expressed in a robot-aligned frame. When you press the button again to enable control, the position is recaptured to avoid drift in case your phone was repositioned while control was disabled.
+
+### Step 3: Run an example
+
+Run one of the example scripts to teleoperate, record a dataset, replay a dataset, or evaluate a policy.
+
+All scripts assume you configured your robot (e.g., SO-100 follower) and set the correct serial port.
+
+Additionally, you need to **copy the URDF of the robot to the examples folder**. For the examples in this tutorial (using SO100/SO101), it is highly recommended to use the URDF in the [SO-ARM100 repo](https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf).
+
+- Run this example to teleoperate:
+
+ ```bash
+ python examples/phone_to_so100/teleoperate.py
+ ```
+
+After running the example:
+
+- Android: after starting the script, open the printed local URL on your phone, tap Start, then press and hold Move.
+- iOS: open HEBI Mobile I/O first; B1 enables motion. A3 controls the gripper.
+
+Additionally, you can customize the mapping or safety limits by editing the processor steps shown in the examples. You can also remap inputs (e.g., use a different analog input) or adapt the pipeline to other robots (e.g., LeKiwi) by modifying the input and kinematics steps. More about this in the [Processors for Robots and Teleoperators](./processors_robots_teleop) guide.
+
+- Run this example to record a dataset, which saves absolute end effector observations and actions:
+
+ ```bash
+ python examples/phone_to_so100/record.py
+ ```
+
+- Run this example to replay recorded episodes:
+
+ ```bash
+ python examples/phone_to_so100/replay.py
+ ```
+
+- Run this example to evaluate a pretrained policy:
+
+ ```bash
+ python examples/phone_to_so100/evaluate.py
+ ```
+
+### Important pipeline steps and options
+
+- Kinematics are used in multiple steps. We use [Placo](https://github.com/Rhoban/placo), a wrapper around Pinocchio, to handle our kinematics. We construct the kinematics object by passing the robot's URDF and target frame, setting `target_frame_name` to the gripper frame.
+
+ ```examples/phone_to_so100/teleoperate.py
+ kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(robot.bus.motors.keys()),
+ )
+
+ ```
+
+- The `MapPhoneActionToRobotAction` step converts the calibrated phone pose and inputs into target deltas and gripper commands; its output is shown below.
+
+ ```src/lerobot/teleoperators/phone/phone_processor.py
+ action["enabled"] = enabled
+ action["target_x"] = -pos[1] if enabled else 0.0
+ action["target_y"] = pos[0] if enabled else 0.0
+ action["target_z"] = pos[2] if enabled else 0.0
+ action["target_wx"] = rotvec[1] if enabled else 0.0
+ action["target_wy"] = rotvec[0] if enabled else 0.0
+ action["target_wz"] = -rotvec[2] if enabled else 0.0
+ action["gripper_vel"] = gripper_vel # Still send gripper action when disabled
+ ```
+
+- The `EEReferenceAndDelta` step converts target deltas into an absolute desired EE pose, storing a reference pose on enable. The `end_effector_step_sizes` are the per-axis step sizes for the EE pose and can be modified to change the motion speed.
+
+ ```examples/phone_to_so100/teleoperate.py
+ EEReferenceAndDelta(
+ kinematics=kinematics_solver,
+ end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5},
+ motor_names=list(robot.bus.motors.keys()),
+ use_latched_reference=True,
+ ),
+ ```
+
+- The `EEBoundsAndSafety` step clamps EE motion to a workspace and checks for large EE step jumps to ensure safety. The `end_effector_bounds` define the workspace limits for the EE pose, and `max_ee_step_m` is the maximum allowed EE step (in meters) between frames; both can be modified to adjust the safety limits.
+
+ ```examples/phone_to_so100/teleoperate.py
+ EEBoundsAndSafety(
+ end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
+ max_ee_step_m=0.10,
+ )
+ ```
+
+- The `GripperVelocityToJoint` step turns a velocity-like gripper input into an absolute gripper position using the current measured state. The `speed_factor` is the factor by which the velocity input is scaled; see the conceptual sketch after this list.
+
+ ```examples/phone_to_so100/teleoperate.py
+ GripperVelocityToJoint(speed_factor=20.0)
+ ```
+
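+Conceptually, the gripper update behaves like the sketch below. This is an illustration of the idea only, not the actual `GripperVelocityToJoint` implementation: the real step reads the measured gripper position from the robot and handles limits and timing internally, and the 0-100 position range here is an assumption.
+
+```python
+# Conceptual illustration only (not the actual GripperVelocityToJoint code):
+# a velocity-like input is scaled by speed_factor, added to the current
+# measured position, and clamped to an assumed 0-100 gripper range.
+def integrate_gripper(current_pos: float, gripper_vel: float, speed_factor: float = 20.0) -> float:
+    goal = current_pos + speed_factor * gripper_vel
+    return max(0.0, min(100.0, goal))
+
+
+print(integrate_gripper(current_pos=50.0, gripper_vel=0.3))  # -> 56.0
+```
+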
+#### Different IK initial guesses
+
+We use different IK initial guesses in the kinematics steps: the initial guess is either the current measured joints or the previous IK solution.
+
+- Closed loop (used in record/eval): sets `initial_guess_current_joints=True` so IK starts from the measured joints each frame.
+
+ ```examples/phone_to_so100/record.py
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver,
+ motor_names=list(robot.bus.motors.keys()),
+ initial_guess_current_joints=True, # closed loop
+ )
+ ```
+
+- Open loop (used in replay): sets `initial_guess_current_joints=False` so IK continues from the previous IK solution rather than the measured state. This preserves action stability when we replay without feedback.
+
+ ```examples/phone_to_so100/replay.py
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver,
+ motor_names=list(robot.bus.motors.keys()),
+ initial_guess_current_joints=False, # open loop
+ )
+ ```
+
+### Pipeline steps explained
+
+- MapPhoneActionToRobotAction: converts calibrated phone pose and inputs into target deltas and a gripper command. Motion is gated by an enable signal (B1 on iOS, Move on Android).
+- EEReferenceAndDelta: latches a reference EE pose on enable and combines it with target deltas to produce an absolute desired EE pose each frame. When disabled, it keeps sending the last commanded pose.
+- EEBoundsAndSafety: clamps the EE pose to a workspace and rate‑limits jumps for safety. Also declares `action.ee.*` features.
+- InverseKinematicsEEToJoints: turns an EE pose into joint positions with IK. `initial_guess_current_joints=True` is recommended for closed‑loop control; set `False` for open‑loop replay for stability.
+- GripperVelocityToJoint: integrates a velocity‑like gripper input into an absolute gripper position using the current measured state.
+- ForwardKinematicsJointsToEE: computes `observation.state.ee.*` from observed joints for logging and training on EE state.
+
+### Troubleshooting
+
+- iOS not discovered: ensure HEBI Mobile I/O is open and that your laptop and phone are on the same network.
+- Android URL not reachable: check that you used `https` rather than `http`, use the exact IP printed by the script, and tell your browser to proceed past the self-signed certificate warning.
+- Motion feels inverted: adjust the sign flips in `MapPhoneActionToRobotAction` or swap axes to match your setup.
diff --git a/docs/source/pi0.mdx b/docs/source/pi0.mdx
new file mode 100644
index 00000000..d15f7e91
--- /dev/null
+++ b/docs/source/pi0.mdx
@@ -0,0 +1,84 @@
+# π₀ (Pi0)
+
+π₀ is a **Vision-Language-Action model for general robot control**, from Physical Intelligence. The LeRobot implementation is adapted from their open source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.
+
+## Model Overview
+
+π₀ represents a breakthrough in robotics as the first general-purpose robot foundation model developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi0). Unlike traditional robot programs that are narrow specialists programmed for repetitive motions, π₀ is designed to be a generalist policy that can understand visual inputs, interpret natural language instructions, and control a variety of different robots across diverse tasks.
+
+### The Vision for Physical Intelligence
+
+As described by Physical Intelligence, while AI has achieved remarkable success in digital domains, from chess-playing to drug discovery, human intelligence still dramatically outpaces AI in the physical world. To paraphrase Moravec's paradox, winning a game of chess represents an "easy" problem for AI, but folding a shirt or cleaning up a table requires solving some of the most difficult engineering problems ever conceived. π₀ represents a first step toward developing artificial physical intelligence that enables users to simply ask robots to perform any task they want, just like they can with large language models.
+
+### Architecture and Approach
+
+π₀ combines several key innovations:
+
+- **Flow Matching**: Uses a novel method to augment pre-trained VLMs with continuous action outputs via flow matching (a variant of diffusion models)
+- **Cross-Embodiment Training**: Trained on data from 8 distinct robot platforms including UR5e, Bimanual UR5e, Franka, Bimanual Trossen, Bimanual ARX, Mobile Trossen, and Mobile Fibocom
+- **Internet-Scale Pre-training**: Inherits semantic knowledge from a pre-trained 3B parameter Vision-Language Model
+- **High-Frequency Control**: Outputs motor commands at up to 50 Hz for real-time dexterous manipulation
+
+## Installation Requirements
+
+1. Install LeRobot by following our [Installation Guide](./installation).
+2. Install Pi0 dependencies by running:
+
+ ```bash
+ pip install -e ".[pi]"
+ ```
+
+   > [!NOTE]
+   > For lerobot 0.4.0, if you want to install the `pi` extra, you will have to run: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
+   >
+   > This will be resolved in the next patch release.
+
+## Training Data and Capabilities
+
+π₀ is trained on the largest robot interaction dataset to date, combining three key data sources:
+
+1. **Internet-Scale Pre-training**: Vision-language data from the web for semantic understanding
+2. **Open X-Embodiment Dataset**: Open-source robot manipulation datasets
+3. **Physical Intelligence Dataset**: Large and diverse dataset of dexterous tasks across 8 distinct robots
+
+## Usage
+
+To use π₀ in LeRobot, specify the policy type as:
+
+```python
+policy.type=pi0
+```
+
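+For quick experimentation outside the training script, you can also load the checkpoint directly in Python. The sketch below is illustrative only: the import path and class name (`PI0Policy` in `lerobot.policies.pi0.modeling_pi0`) and the dummy batch construction are assumptions to adapt to your installed LeRobot version and robot setup.
+
+```python
+import torch
+
+# Assumed import path and class name; adjust to your LeRobot version.
+from lerobot.policies.pi0.modeling_pi0 import PI0Policy
+
+policy = PI0Policy.from_pretrained("lerobot/pi0_base")
+policy.eval()
+policy.reset()  # clear any cached action chunk
+
+# Dummy batch built from the input features declared by the checkpoint, plus a task string.
+# Real inputs come from your robot or dataset and must be on the same device as the policy.
+batch = {name: torch.zeros(1, *feat.shape) for name, feat in policy.config.input_features.items()}
+batch["task"] = ["fold the shirt"]
+
+with torch.no_grad():
+    action = policy.select_action(batch)  # next action from the predicted chunk
+print(action.shape)
+```
+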
+## Training
+
+For training π₀, you can use the standard LeRobot training script with the appropriate configuration:
+
+```bash
+python src/lerobot/scripts/lerobot_train.py \
+ --dataset.repo_id=your_dataset \
+ --policy.type=pi0 \
+ --output_dir=./outputs/pi0_training \
+ --job_name=pi0_training \
+ --policy.pretrained_path=lerobot/pi0_base \
+ --policy.repo_id=your_repo_id \
+ --policy.compile_model=true \
+ --policy.gradient_checkpointing=true \
+ --policy.dtype=bfloat16 \
+ --steps=3000 \
+ --policy.device=cuda \
+ --batch_size=32
+```
+
+### Key Training Parameters
+
+- **`--policy.compile_model=true`**: Enables model compilation for faster training
+- **`--policy.gradient_checkpointing=true`**: Reduces memory usage significantly during training
+- **`--policy.dtype=bfloat16`**: Use mixed precision training for efficiency
+- **`--batch_size=32`**: Batch size for training, adapt this based on your GPU memory
+- **`--policy.pretrained_path=lerobot/pi0_base`**: The base π₀ model you want to finetune, options are:
+ - [lerobot/pi0_base](https://huggingface.co/lerobot/pi0_base)
+ - [lerobot/pi0_libero](https://huggingface.co/lerobot/pi0_libero) (specifically trained on the Libero dataset)
+
+## License
+
+This model follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).
diff --git a/docs/source/pi05.mdx b/docs/source/pi05.mdx
new file mode 100644
index 00000000..29b79793
--- /dev/null
+++ b/docs/source/pi05.mdx
@@ -0,0 +1,112 @@
+# π₀.₅ (Pi05) Policy
+
+π₀.₅ is a **Vision-Language-Action model with open-world generalization**, from Physical Intelligence. The LeRobot implementation is adapted from their open source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.
+
+## Model Overview
+
+π₀.₅ represents a significant evolution from π₀, developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi05) to address a big challenge in robotics: **open-world generalization**. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
+
+### The Generalization Challenge
+
+As Physical Intelligence explains, the fundamental challenge isn't performing tasks of agility or dexterity, but generalization, the ability to correctly perform tasks in new settings with new objects. Consider a robot cleaning different homes: each home has different objects in different places. Generalization must occur at multiple levels:
+
+- **Physical Level**: Understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments
+- **Semantic Level**: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills
+- **Environmental Level**: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals
+
+### Co-Training on Heterogeneous Data
+
+The breakthrough innovation in π₀.₅ is **co-training on heterogeneous data sources**. The model learns from:
+
+1. **Multimodal Web Data**: Image captioning, visual question answering, object detection
+2. **Verbal Instructions**: Humans coaching robots through complex tasks step-by-step
+3. **Subtask Commands**: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
+4. **Cross-Embodiment Robot Data**: Data from various robot platforms with different capabilities
+5. **Multi-Environment Data**: Static robots deployed across many different homes
+6. **Mobile Manipulation Data**: ~400 hours of mobile robot demonstrations
+
+This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.
+
+## Installation Requirements
+
+1. Install LeRobot by following our [Installation Guide](./installation).
+2. Install Pi0.5 dependencies by running:
+
+ ```bash
+ pip install -e ".[pi]"
+ ```
+
+   > [!NOTE]
+   > For lerobot 0.4.0, if you want to install the `pi` extra, you will have to run: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
+   >
+   > This will be resolved in the next patch release.
+
+## Usage
+
+To use π₀.₅ in your LeRobot configuration, specify the policy type as:
+
+```python
+policy.type=pi05
+```
+
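+As with π₀, you can load the checkpoint directly in Python to inspect what it expects before wiring it up. This is a sketch only: the class name and module path (`PI05Policy` in `lerobot.policies.pi05.modeling_pi05`) are assumptions, so verify them against your installed LeRobot version.
+
+```python
+# Assumed class name and import path; verify against your LeRobot install.
+from lerobot.policies.pi05.modeling_pi05 import PI05Policy
+
+policy = PI05Policy.from_pretrained("lerobot/pi05_base")
+policy.eval()
+
+# Inspect the observation/action features the checkpoint was trained with.
+print(policy.config.input_features)
+print(policy.config.output_features)
+```
+
+Inference then follows the standard LeRobot policy interface (`policy.reset()` followed by `policy.select_action(batch)`), as sketched on the π₀ page.
+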
+## Training
+
+### Training Command Example
+
+Here's a complete training command for finetuning the base π₀.₅ model on your own dataset:
+
+```bash
+python src/lerobot/scripts/lerobot_train.py \
+ --dataset.repo_id=your_dataset \
+ --policy.type=pi05 \
+ --output_dir=./outputs/pi05_training \
+ --job_name=pi05_training \
+ --policy.repo_id=your_repo_id \
+ --policy.pretrained_path=lerobot/pi05_base \
+ --policy.compile_model=true \
+ --policy.gradient_checkpointing=true \
+ --wandb.enable=true \
+ --policy.dtype=bfloat16 \
+ --steps=3000 \
+ --policy.device=cuda \
+ --batch_size=32
+```
+
+### Key Training Parameters
+
+- **`--policy.compile_model=true`**: Enables model compilation for faster training
+- **`--policy.gradient_checkpointing=true`**: Reduces memory usage significantly during training
+- **`--policy.dtype=bfloat16`**: Use mixed precision training for efficiency
+- **`--batch_size=32`**: Batch size for training, adapt this based on your GPU memory
+- **`--policy.pretrained_path=lerobot/pi05_base`**: The base π₀.₅ model you want to finetune, options are:
+ - [lerobot/pi05_base](https://huggingface.co/lerobot/pi05_base)
+ - [lerobot/pi05_libero](https://huggingface.co/lerobot/pi05_libero) (specifically trained on the Libero dataset)
+
+If your dataset does not include `quantiles` statistics, you can add them with the following command:
+
+```bash
+python src/lerobot/datasets/v30/augment_dataset_quantile_stats.py \
+ --repo-id=your_dataset \
+```
+
+Or train pi05 with this normalization mapping: `--policy.normalization_mapping='{"ACTION": "MEAN_STD", "STATE": "MEAN_STD", "VISUAL": "IDENTITY"}'`
+
+## Performance Results
+
+### Libero Benchmark Results
+
+π₀.₅ has demonstrated strong performance on the Libero benchmark suite. To compare and test its LeRobot implementation, we finetuned the libero base model for an additional 6k steps on the Libero dataset and compared the results to the OpenPI reference results.
+
+| Benchmark | LeRobot Implementation | OpenPI Reference |
+| ------------------ | ---------------------- | ---------------- |
+| **Libero Spatial** | 97.0% | 98.8% |
+| **Libero Object** | 99.0% | 98.2% |
+| **Libero Goal** | 98.0% | 98.0% |
+| **Libero 10** | 96.0% | 92.4% |
+| **Average** | 97.5% | 96.85% |
+
+These results demonstrate π₀.₅'s strong generalization capabilities across diverse robotic manipulation tasks. To reproduce these results, you can follow the instructions in the [Libero](https://huggingface.co/docs/lerobot/libero) section.
+
+## License
+
+This model follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).
diff --git a/docs/source/policy_act_README.md b/docs/source/policy_act_README.md
new file mode 100644
index 00000000..371a9136
--- /dev/null
+++ b/docs/source/policy_act_README.md
@@ -0,0 +1,14 @@
+## Paper
+
+https://tonyzhaozh.github.io/aloha
+
+## Citation
+
+```bibtex
+@article{zhao2023learning,
+ title={Learning fine-grained bimanual manipulation with low-cost hardware},
+ author={Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
+ journal={arXiv preprint arXiv:2304.13705},
+ year={2023}
+}
+```
diff --git a/docs/source/policy_diffusion_README.md b/docs/source/policy_diffusion_README.md
new file mode 100644
index 00000000..9ec934ad
--- /dev/null
+++ b/docs/source/policy_diffusion_README.md
@@ -0,0 +1,14 @@
+## Paper
+
+https://diffusion-policy.cs.columbia.edu
+
+## Citation
+
+```bibtex
+@article{chi2024diffusionpolicy,
+ author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
+ title ={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
+ journal = {The International Journal of Robotics Research},
+ year = {2024},
+}
+```
diff --git a/docs/source/policy_groot_README.md b/docs/source/policy_groot_README.md
new file mode 100644
index 00000000..efcd76eb
--- /dev/null
+++ b/docs/source/policy_groot_README.md
@@ -0,0 +1,27 @@
+## Research Paper
+
+Paper: https://research.nvidia.com/labs/gear/gr00t-n1_5/
+
+## Repository
+
+Code: https://github.com/NVIDIA/Isaac-GR00T
+
+## Citation
+
+```bibtex
+@inproceedings{gr00tn1_2025,
+ archivePrefix = {arxiv},
+ eprint = {2503.14734},
+ title = {{GR00T} {N1}: An Open Foundation Model for Generalist Humanoid Robots},
+  author = {NVIDIA and Johan Bjorck and Fernando Castañeda and Nikita Cherniadev and Xingye Da and Runyu Ding and Linxi "Jim" Fan and Yu Fang and Dieter Fox and Fengyuan Hu and Spencer Huang and Joel Jang and Zhenyu Jiang and Jan Kautz and Kaushil Kundalia and Lawrence Lao and Zhiqi Li and Zongyu Lin and Kevin Lin and Guilin Liu and Edith Llontop and Loic Magne and Ajay Mandlekar and Avnish Narayan and Soroush Nasiriany and Scott Reed and You Liang Tan and Guanzhi Wang and Zu Wang and Jing Wang and Qi Wang and Jiannan Xiang and Yuqi Xie and Yinzhen Xu and Zhenjia Xu and Seonghyeon Ye and Zhiding Yu and Ao Zhang and Hao Zhang and Yizhou Zhao and Ruijie Zheng and Yuke Zhu},
+ month = {March},
+ year = {2025},
+ booktitle = {ArXiv Preprint},
+}
+```
+
+## Additional Resources
+
+Blog: https://developer.nvidia.com/isaac/gr00t
+
+Hugging Face Model: https://huggingface.co/nvidia/GR00T-N1.5-3B
diff --git a/docs/source/policy_smolvla_README.md b/docs/source/policy_smolvla_README.md
new file mode 100644
index 00000000..ee567ee8
--- /dev/null
+++ b/docs/source/policy_smolvla_README.md
@@ -0,0 +1,14 @@
+## Paper
+
+https://arxiv.org/abs/2506.01844
+
+## Citation
+
+```bibtex
+@article{shukor2025smolvla,
+ title={SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics},
+ author={Shukor, Mustafa and Aubakirova, Dana and Capuano, Francesco and Kooijmans, Pepijn and Palma, Steven and Zouitine, Adil and Aractingi, Michel and Pascal, Caroline and Russi, Martino and Marafioti, Andres and Alibert, Simon and Cord, Matthieu and Wolf, Thomas and Cadene, Remi},
+ journal={arXiv preprint arXiv:2506.01844},
+ year={2025}
+}
+```
diff --git a/docs/source/policy_tdmpc_README.md b/docs/source/policy_tdmpc_README.md
new file mode 100644
index 00000000..804f166c
--- /dev/null
+++ b/docs/source/policy_tdmpc_README.md
@@ -0,0 +1,14 @@
+## Paper
+
+https://www.nicklashansen.com/td-mpc/
+
+## Citation
+
+```bibtex
+@inproceedings{Hansen2022tdmpc,
+ title={Temporal Difference Learning for Model Predictive Control},
+ author={Nicklas Hansen and Xiaolong Wang and Hao Su},
+ booktitle={ICML},
+ year={2022}
+}
+```
diff --git a/docs/source/policy_vqbet_README.md b/docs/source/policy_vqbet_README.md
new file mode 100644
index 00000000..02f95b7c
--- /dev/null
+++ b/docs/source/policy_vqbet_README.md
@@ -0,0 +1,14 @@
+## Paper
+
+https://sjlee.cc/vq-bet/
+
+## Citation
+
+```bibtex
+@article{lee2024behavior,
+ title={Behavior generation with latent actions},
+ author={Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
+ journal={arXiv preprint arXiv:2403.03181},
+ year={2024}
+}
+```
diff --git a/docs/source/porting_datasets_v3.mdx b/docs/source/porting_datasets_v3.mdx
new file mode 100644
index 00000000..46793265
--- /dev/null
+++ b/docs/source/porting_datasets_v3.mdx
@@ -0,0 +1,321 @@
+# Porting Large Datasets to LeRobot Dataset v3.0
+
+This tutorial explains how to port large-scale robotic datasets to the LeRobot Dataset v3.0 format. We'll use the **DROID 1.0.1** dataset as our primary example, which demonstrates handling multi-terabyte datasets with thousands of shards across SLURM clusters.
+
+## File Organization: v2.1 vs v3.0
+
+Dataset v3.0 fundamentally changes how data is organized and stored:
+
+**v2.1 Structure (Episode-based)**:
+
+```
+dataset/
+├── data/chunk-000/episode_000000.parquet
+├── data/chunk-000/episode_000001.parquet
+├── videos/chunk-000/camera/episode_000000.mp4
+└── meta/episodes.jsonl
+```
+
+**v3.0 Structure (File-based)**:
+
+```
+dataset/
+├── data/chunk-000/file-000.parquet # Multiple episodes per file
+├── videos/camera/chunk-000/file-000.mp4 # Consolidated video chunks
+└── meta/episodes/chunk-000/file-000.parquet # Structured metadata
+```
+
+This transition from individual episode files to file-based chunks dramatically improves performance and reduces storage overhead.
+
+## What's New in Dataset v3.0
+
+Dataset v3.0 introduces significant improvements for handling large datasets:
+
+### 🏗️ **Enhanced File Organization**
+
+- **File-based structure**: Episodes are now grouped into chunked files rather than individual episode files
+- **Configurable file sizes**: Tunable target file sizes for data and video files
+- **Improved storage efficiency**: Better compression and reduced overhead
+
+### 📊 **Modern Metadata Management**
+
+- **Parquet-based metadata**: Replaced JSON Lines with efficient parquet format
+- **Structured episode access**: Direct pandas DataFrame access via `dataset.meta.episodes` (see the sketch after this list)
+- **Per-episode statistics**: Enhanced statistics tracking at episode level
+
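+For example, the parquet-based episode metadata mentioned above can be inspected directly as a pandas DataFrame. A minimal sketch (the import path assumes the current `lerobot.datasets` layout; adjust it to your version and replace the repo id with your own dataset):
+
+```python
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+# Replace with your own ported dataset on the Hub or on disk.
+dataset = LeRobotDataset("your_id/droid_1.0.1")
+
+# In v3.0, episode-level metadata is exposed as a pandas DataFrame.
+print(dataset.meta.episodes.columns)
+print(dataset.meta.episodes.head())
+```
+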
+### 🚀 **Performance Enhancements**
+
+- **Memory-mapped access**: Improved RAM usage through PyArrow memory mapping
+- **Faster loading**: Significantly reduced dataset initialization time
+- **Better scalability**: Designed for datasets with millions of episodes
+
+## Prerequisites
+
+Before porting large datasets, ensure you have:
+
+- **LeRobot installed** with v3.0 support. Follow our [Installation Guide](./installation).
+- **Sufficient storage**: Raw datasets can be very large (e.g., DROID requires 2TB)
+- **Cluster access** (recommended for large datasets): SLURM or similar job scheduler
+- **Dataset-specific dependencies**: For DROID, you'll need TensorFlow Dataset utilities
+
+## Understanding the DROID Dataset
+
+[DROID 1.0.1](https://droid-dataset.github.io/droid/the-droid-dataset) is an excellent example of a large-scale robotic dataset:
+
+- **Size**: 1.7TB (RLDS format), 8.7TB (raw data)
+- **Structure**: 2048 pre-defined TensorFlow dataset shards
+- **Content**: 76,000+ robot manipulation trajectories from Franka Emika Panda robots
+- **Scope**: Real-world manipulation tasks across multiple environments and objects
+- **Format**: Originally in TensorFlow Records/RLDS format, requiring conversion to LeRobot format
+- **Hosting**: Google Cloud Storage with public access via `gsutil`
+
+The dataset contains diverse manipulation demonstrations with:
+
+- Multiple camera views (wrist camera, exterior cameras)
+- Natural language task descriptions
+- Robot proprioceptive state and actions
+- Success/failure annotations
+
+### DROID Features Schema
+
+```python
+DROID_FEATURES = {
+ # Episode markers
+ "is_first": {"dtype": "bool", "shape": (1,)},
+ "is_last": {"dtype": "bool", "shape": (1,)},
+ "is_terminal": {"dtype": "bool", "shape": (1,)},
+
+ # Language instructions
+ "language_instruction": {"dtype": "string", "shape": (1,)},
+ "language_instruction_2": {"dtype": "string", "shape": (1,)},
+ "language_instruction_3": {"dtype": "string", "shape": (1,)},
+
+ # Robot state
+ "observation.state.gripper_position": {"dtype": "float32", "shape": (1,)},
+ "observation.state.cartesian_position": {"dtype": "float32", "shape": (6,)},
+ "observation.state.joint_position": {"dtype": "float32", "shape": (7,)},
+
+ # Camera observations
+ "observation.images.wrist_left": {"dtype": "image"},
+ "observation.images.exterior_1_left": {"dtype": "image"},
+ "observation.images.exterior_2_left": {"dtype": "image"},
+
+ # Actions
+ "action.gripper_position": {"dtype": "float32", "shape": (1,)},
+ "action.cartesian_position": {"dtype": "float32", "shape": (6,)},
+ "action.joint_position": {"dtype": "float32", "shape": (7,)},
+
+ # Standard LeRobot format
+ "observation.state": {"dtype": "float32", "shape": (8,)}, # joints + gripper
+ "action": {"dtype": "float32", "shape": (8,)}, # joints + gripper
+}
+```
+
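+As noted in the schema comments, the standard `observation.state` and `action` vectors are simply the 7 joint values concatenated with the gripper value. A small illustrative numpy sketch of that packing (field names follow the schema above; the zero arrays stand in for real DROID records):
+
+```python
+import numpy as np
+
+# Stand-ins for the per-frame DROID fields described in the schema.
+joint_position = np.zeros(7, dtype=np.float32)    # observation.state.joint_position
+gripper_position = np.zeros(1, dtype=np.float32)  # observation.state.gripper_position
+
+# Standard LeRobot 8-dim state: joints + gripper.
+observation_state = np.concatenate([joint_position, gripper_position])
+assert observation_state.shape == (8,)
+```
+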
+## Approach 1: Single Computer Porting
+
+### Step 1: Install Dependencies
+
+For DROID specifically:
+
+```bash
+pip install tensorflow
+pip install tensorflow_datasets
+```
+
+For other datasets, install the appropriate readers for your source format.
+
+### Step 2: Download Raw Data
+
+Download DROID from Google Cloud Storage using `gsutil`:
+
+```bash
+# Install Google Cloud SDK if not already installed
+# https://cloud.google.com/sdk/docs/install
+
+# Download the full RLDS dataset (1.7TB)
+gsutil -m cp -r gs://gresearch/robotics/droid/1.0.1 /your/data/
+
+# Or download just the 100-episode sample (2GB) for testing
+gsutil -m cp -r gs://gresearch/robotics/droid_100 /your/data/
+```
+
+> [!WARNING]
+> Large datasets require substantial time and storage:
+>
+> - **Full DROID (1.7TB)**: Several days to download depending on bandwidth
+> - **Processing time**: 7+ days for local porting of full dataset
+> - **Upload time**: 3+ days to push to Hugging Face Hub
+> - **Local storage**: ~400GB for processed LeRobot format
+
+### Step 3: Port the Dataset
+
+```bash
+python examples/port_datasets/port_droid.py \
+ --raw-dir /your/data/droid/1.0.1 \
+ --repo-id your_id/droid_1.0.1 \
+ --push-to-hub
+```
+
+### Development and Testing
+
+For development, you can port a single shard:
+
+```bash
+python examples/port_datasets/port_droid.py \
+ --raw-dir /your/data/droid/1.0.1 \
+ --repo-id your_id/droid_1.0.1_test \
+ --num-shards 2048 \
+ --shard-index 0
+```
+
+This approach works for smaller datasets or testing, but large datasets require cluster computing.
+
+## Approach 2: SLURM Cluster Porting (Recommended)
+
+For large datasets like DROID, parallel processing across multiple nodes dramatically reduces processing time.
+
+### Step 1: Install Cluster Dependencies
+
+```bash
+pip install datatrove # Hugging Face's distributed processing library
+```
+
+### Step 2: Configure Your SLURM Environment
+
+Find your partition information:
+
+```bash
+sinfo --format="%R" # List available partitions
+sinfo -N -p your_partition -h -o "%N cpus=%c mem=%m" # Check resources
+```
+
+Choose a **CPU partition** - no GPU needed for dataset porting.
+
+### Step 3: Launch Parallel Porting Jobs
+
+```bash
+python examples/port_datasets/slurm_port_shards.py \
+ --raw-dir /your/data/droid/1.0.1 \
+ --repo-id your_id/droid_1.0.1 \
+ --logs-dir /your/logs \
+ --job-name port_droid \
+ --partition your_partition \
+ --workers 2048 \
+ --cpus-per-task 8 \
+ --mem-per-cpu 1950M
+```
+
+#### Parameter Guidelines
+
+- **`--workers`**: Number of parallel jobs (max 2048 for DROID's shard count)
+- **`--cpus-per-task`**: 8 CPUs recommended for frame encoding parallelization
+- **`--mem-per-cpu`**: ~16GB total RAM (8×1950M) for loading raw frames
+
+> [!TIP]
+> Start with fewer workers (e.g., 100) to test your cluster configuration before launching thousands of jobs.
+
+### Step 4: Monitor Progress
+
+Check running jobs:
+
+```bash
+squeue -u $USER
+```
+
+Monitor overall progress:
+
+```bash
+jobs_status /your/logs
+```
+
+Inspect individual job logs:
+
+```bash
+less /your/logs/port_droid/slurm_jobs/JOB_ID_WORKER_ID.out
+```
+
+Debug failed jobs:
+
+```bash
+failed_logs /your/logs/port_droid
+```
+
+### Step 5: Aggregate Shards
+
+Once all porting jobs complete:
+
+```bash
+python examples/port_datasets/slurm_aggregate_shards.py \
+ --repo-id your_id/droid_1.0.1 \
+ --logs-dir /your/logs \
+ --job-name aggr_droid \
+ --partition your_partition \
+ --workers 2048 \
+ --cpus-per-task 8 \
+ --mem-per-cpu 1950M
+```
+
+### Step 6: Upload to Hub
+
+```bash
+python examples/port_datasets/slurm_upload.py \
+ --repo-id your_id/droid_1.0.1 \
+ --logs-dir /your/logs \
+ --job-name upload_droid \
+ --partition your_partition \
+ --workers 50 \
+ --cpus-per-task 4 \
+ --mem-per-cpu 1950M
+```
+
+> [!NOTE]
+> Upload uses fewer workers (50) since it's network-bound rather than compute-bound.
+
+## Dataset v3.0 File Structure
+
+Your completed dataset will have this modern structure:
+
+```
+dataset/
+├── meta/
+│ ├── episodes/
+│ │ └── chunk-000/
+│ │ └── file-000.parquet # Episode metadata
+│ ├── tasks.parquet # Task definitions
+│ ├── stats.json # Aggregated statistics
+│ └── info.json # Dataset information
+├── data/
+│ └── chunk-000/
+│ └── file-000.parquet # Consolidated episode data
+└── videos/
+ └── camera_key/
+ └── chunk-000/
+ └── file-000.mp4 # Consolidated video files
+```
+
+This replaces the old episode-per-file structure with efficient, optimally-sized chunks.
+
+## Migrating from Dataset v2.1
+
+If you have existing datasets in v2.1 format, use the migration tool:
+
+```bash
+python src/lerobot/datasets/v30/convert_dataset_v21_to_v30.py \
+ --repo-id your_id/existing_dataset
+```
+
+This automatically:
+
+- Converts file structure to v3.0 format
+- Migrates metadata from JSON Lines to parquet
+- Aggregates statistics and creates per-episode stats
+- Updates version information
+
+## Performance Benefits
+
+Dataset v3.0 provides significant improvements for large datasets:
+
+- **Faster loading**: 3-5x reduction in initialization time
+- **Memory efficiency**: Better RAM usage through memory mapping
+- **Scalable processing**: Handles millions of episodes efficiently
+- **Storage optimization**: Reduced file count and improved compression
diff --git a/docs/source/processors_robots_teleop.mdx b/docs/source/processors_robots_teleop.mdx
new file mode 100644
index 00000000..3d8dcb40
--- /dev/null
+++ b/docs/source/processors_robots_teleop.mdx
@@ -0,0 +1,151 @@
+# Processors for Robots and Teleoperators
+
+This guide shows how to build and modify processing pipelines that connect teleoperators (e.g., phone) to robots and datasets. Pipelines standardize conversions between different action/observation spaces so you can swap teleops and robots without rewriting glue code.
+
+We use the Phone to SO‑100 follower examples for concreteness, but the same patterns apply to other robots.
+
+**What you'll learn**
+
+- Absolute vs. relative EE control: What each means, trade‑offs, and how to choose for your task.
+- Three-pipeline pattern: How to map teleop actions → dataset actions → robot commands, and robot observations → dataset observations.
+- Adapters (`to_transition` / `to_output`): How these convert raw dicts to `EnvTransition` and back to reduce boilerplate.
+- Dataset feature contracts: How steps declare features via `transform_features(...)`, and how to aggregate/merge them for recording.
+- Choosing a representation: When to store joints, absolute EE poses, or relative EE deltas—and how that affects training.
+- Pipeline customization guidance: How to swap robots/URDFs safely and tune bounds, step sizes, and options like IK initialization.
+
+### Absolute vs relative EE control
+
+The examples in this guide use absolute end effector (EE) poses because they are easy to reason about. In practice, relative EE deltas or joint positions are often preferred as learning features.
+
+With processors, you choose the learning features you want to use for your policy. These could be joint positions/velocities, absolute EE poses, or relative EE deltas. You can also choose to store other features, such as joint torques, motor currents, etc.
+
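+To make the distinction concrete, here is a small standalone sketch (plain numpy/scipy math, not LeRobot code) that turns two consecutive absolute EE poses into a relative delta, the kind of action you could store instead of absolute poses:
+
+```python
+import numpy as np
+from scipy.spatial.transform import Rotation as R
+
+# Two consecutive absolute EE poses: position (m) + orientation.
+pos_prev, rot_prev = np.array([0.20, 0.00, 0.15]), R.from_euler("xyz", [0.0, 0.0, 0.0])
+pos_curr, rot_curr = np.array([0.22, 0.01, 0.15]), R.from_euler("xyz", [0.0, 0.0, 0.1])
+
+# Relative EE delta between the two frames.
+delta_pos = pos_curr - pos_prev                       # relative translation
+delta_rot = (rot_prev.inv() * rot_curr).as_rotvec()   # relative rotation (axis-angle)
+print(delta_pos, delta_rot)
+```
+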
+## Three pipelines
+
+We often compose three pipelines. Depending on your setup, some can be empty if the action and observation spaces already match.
+Each of these pipelines handles a different conversion between action and observation spaces. Below is a quick explanation of each.
+
+1. Pipeline 1: Teleop action space → dataset action space (phone pose → EE targets)
+2. Pipeline 2: Dataset action space → robot command space (EE targets → joints)
+3. Pipeline 3: Robot observation space → dataset observation space (joints → EE pose)
+
+Below is an example of the three pipelines that we use in the phone to SO-100 follower examples:
+
+```69:90:examples/phone_so100_record.py
+phone_to_robot_ee_pose_processor = RobotProcessorPipeline[RobotAction, RobotAction]( # teleop -> dataset action
+ steps=[
+ MapPhoneActionToRobotAction(platform=teleop_config.phone_os),
+ EEReferenceAndDelta(
+ kinematics=kinematics_solver, end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5}, motor_names=list(robot.bus.motors.keys()),
+ ),
+ EEBoundsAndSafety(
+ end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]}, max_ee_step_m=0.20,
+ ),
+ GripperVelocityToJoint(),
+ ],
+ to_transition=robot_action_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+robot_ee_to_joints_processor = RobotProcessorPipeline[RobotAction, RobotAction]( # dataset action -> robot
+ steps=[
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver, motor_names=list(robot.bus.motors.keys()), initial_guess_current_joints=True,
+ ),
+ ],
+ to_transition=robot_action_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+robot_joints_to_ee_pose = RobotProcessorPipeline[RobotObservation, RobotObservation]( # robot obs -> dataset obs
+ steps=[
+ ForwardKinematicsJointsToEE(kinematics=kinematics_solver, motor_names=list(robot.bus.motors.keys()))
+ ],
+ to_transition=observation_to_transition,
+ to_output=transition_to_observation,
+)
+```
+
+## Why to_transition / to_output
+
+To convert from robot/teleoperator to pipeline and back, we use the `to_transition` and `to_output` pipeline adapters.
+They standardize conversions to reduce boilerplate code, and form the bridge between the robot and teleoperators raw dictionaries and the pipeline’s `EnvTransition` format.
+In the phone to SO-100 follower examples we use the following adapters:
+
+- `robot_action_to_transition`: transforms the teleop action dict to a pipeline transition.
+- `transition_to_robot_action`: transforms the pipeline transition to a robot action dict.
+- `observation_to_transition`: transforms the robot observation dict to a pipeline transition.
+- `transition_to_observation`: transforms the pipeline transition to an observation dict.
+
+Check out [src/lerobot/processor/converters.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/processor/converters.py) for more details.
+
+## Dataset feature contracts
+
+Dataset features are determined by the keys saved in the dataset. Each step declares which features it modifies in a contract called `transform_features(...)`. Once you build a processor, you can aggregate all of these features with `aggregate_pipeline_dataset_features()` and merge multiple feature dicts with `combine_feature_dicts(...)`.
+
+Below is an example of how we declare features with the `transform_features` method in the phone to SO-100 follower examples:
+
+```src/lerobot/robots/so100_follower/robot_kinematic_processor.py
+ def transform_features(
+ self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
+ ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
+ # We only use the ee pose in the dataset, so we don't need the joint positions
+ for n in self.motor_names:
+ features[PipelineFeatureType.ACTION].pop(f"{n}.pos", None)
+ # We specify the dataset features of this step that we want to be stored in the dataset
+ for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
+ features[PipelineFeatureType.ACTION][f"ee.{k}"] = PolicyFeature(
+ type=FeatureType.STATE, shape=(1,)
+ )
+ return features
+```
+
+Here we declare what PolicyFeatures we modify in this step, so we know what features we can expect when we run the processor. These features can then be aggregated and used to create the dataset features.
+
+Below is an example of how we aggregate and merge features in the phone to SO-100 record example:
+
+```121:145:examples/phone_so100_record.py
+features=combine_feature_dicts(
+ # Run the feature contract of the pipelines
+ # This tells you how the features would look like after the pipeline steps
+ aggregate_pipeline_dataset_features(
+ pipeline=phone_to_robot_ee_pose_processor,
+ initial_features=create_initial_features(action=phone.action_features), # <- Action features we can expect, these come from our teleop device (phone) and action processor
+ use_videos=True,
+ ),
+ aggregate_pipeline_dataset_features(
+ pipeline=robot_joints_to_ee_pose,
+ initial_features=create_initial_features(observation=robot.observation_features), # <- Observation features we can expect, these come from our robot and observation processor
+ use_videos=True,
+ patterns=["observation.state.ee"], # <- Here you could optionally filter the features we want to store in the dataset, with a specific pattern
+
+ ),
+ ),
+```
+
+How it works:
+
+- `aggregate_pipeline_dataset_features(...)`: applies `transform_features` across the pipeline and filters the result (camera features are included when `use_videos=True`, and `patterns` can optionally restrict which features are kept).
+- `combine_feature_dicts(...)`: merges multiple feature dicts into one.
+- Recording with `record_loop(...)` uses `build_dataset_frame(...)` to build frames consistent with `dataset.features` before calling `add_frame(...)` to add each frame to the dataset.
+
+## Guidance when customizing robot pipelines
+
+You can store any of the following features as your action/observation space:
+
+- Joint positions
+- Absolute EE poses
+- Relative EE deltas
+- Other features: joint velocity, torques, etc.
+
+Pick what you want to use for your policy action and observation space and configure/modify the pipelines and steps accordingly.
+
+### Different robots
+
+- Pipelines are easy to reuse. For example, to use another robot with phone teleop, modify the examples and swap in your robot's `RobotKinematics` (URDF) and `motor_names`. Additionally, ensure `target_frame_name` points to your gripper/wrist frame, as sketched below.
+
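+A minimal sketch of what typically changes when swapping robots is shown below. The URDF path, frame name, and joint names are placeholders, and the import path is an assumption; use the same `RobotKinematics` import and constructor arguments as in the phone teleop examples.
+
+```python
+# Placeholder values: point these at your own robot's URDF and frames.
+# Assumed import path; reuse the import from the phone teleop examples if it differs.
+from lerobot.model.kinematics import RobotKinematics
+
+kinematics_solver = RobotKinematics(
+    urdf_path="./my_robot/my_robot.urdf",         # your robot's URDF
+    target_frame_name="wrist_or_gripper_frame",   # EE frame used by the kinematic steps
+    joint_names=["joint_1", "joint_2", "joint_3", "joint_4", "joint_5", "gripper"],
+)
+```
+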
+### Safety first
+
+- When changing pipelines, start with tight bounds and implement safety steps when working with real robots.
+- It's advised to start in simulation first and then move to real robots.
+
+That's it! We hope this guide helps you get started with customizing your robot pipelines. If you run into any issues at any point, jump into our [Discord community](https://discord.com/invite/s3KuuzsPFb) for support.
diff --git a/docs/source/reachy2.mdx b/docs/source/reachy2.mdx
new file mode 100644
index 00000000..7d3dc1b6
--- /dev/null
+++ b/docs/source/reachy2.mdx
@@ -0,0 +1,288 @@
+# Reachy 2
+
+Reachy 2 is an open-source humanoid robot made by Pollen Robotics, specifically designed for the development of embodied AI and real-world applications.
+Check out [Pollen Robotics website](https://www.pollen-robotics.com/reachy/), or access [Reachy 2 documentation](https://docs.pollen-robotics.com/) for more information on the platform!
+
+## Teleoperate Reachy 2
+
+Currently, there are two ways to teleoperate Reachy 2:
+
+- Pollen Robotics’ VR teleoperation (not included in LeRobot).
+- Robot-to-robot teleoperation (use one Reachy 2 to control another).
+
+## Reachy 2 Simulation
+
+**(Linux only)** You can run Reachy 2 in simulation (Gazebo or MuJoCo) using the provided [Docker image](https://hub.docker.com/r/pollenrobotics/reachy2_core).
+
+1. Install [Docker Engine](https://docs.docker.com/engine/).
+2. Run (for MuJoCo):
+
+```
+docker run --rm -it \
+ --name reachy \
+ --privileged \
+ --network host \
+ --ipc host \
+ --device-cgroup-rule='c 189:* rwm' \
+ --group-add audio \
+ -e ROS_DOMAIN_ID="$ROS_DOMAIN_ID" \
+ -e DISPLAY="$DISPLAY" \
+ -e RCUTILS_CONSOLE_OUTPUT_FORMAT="[{severity}]: {message}" \
+ -e REACHY2_CORE_SERVICE_FAKE="${REACHY2_CORE_SERVICE_FAKE:-true}" \
+ -v /dev:/dev \
+ -v "$HOME/.reachy_config":/home/reachy/.reachy_config_override \
+ -v "$HOME/.reachy.log":/home/reachy/.ros/log \
+ -v /usr/lib/x86_64-linux-gnu:/opt/host-libs \
+ --entrypoint /package/launch.sh \
+ pollenrobotics/reachy2_core:1.7.5.9_deploy \
+ start_rviz:=true start_sdk_server:=true mujoco:=true
+```
+
+> If MuJoCo runs slowly (low simulation frequency), append `-e LD_LIBRARY_PATH="/opt/host-libs:$LD_LIBRARY_PATH" \` to the previous command to improve performance:
+>
+> ```
+> docker run --rm -it \
+> --name reachy \
+> --privileged \
+> --network host \
+> --ipc host \
+> --device-cgroup-rule='c 189:* rwm' \
+> --group-add audio \
+> -e ROS_DOMAIN_ID="$ROS_DOMAIN_ID" \
+> -e DISPLAY="$DISPLAY" \
+> -e RCUTILS_CONSOLE_OUTPUT_FORMAT="[{severity}]: {message}" \
+> -e REACHY2_CORE_SERVICE_FAKE="${REACHY2_CORE_SERVICE_FAKE:-true}" \
+> -e LD_LIBRARY_PATH="/opt/host-libs:$LD_LIBRARY_PATH" \
+> -v /dev:/dev \
+> -v "$HOME/.reachy_config":/home/reachy/.reachy_config_override \
+> -v "$HOME/.reachy.log":/home/reachy/.ros/log \
+> -v /usr/lib/x86_64-linux-gnu:/opt/host-libs \
+> --entrypoint /package/launch.sh \
+> pollenrobotics/reachy2_core:1.7.5.9_deploy \
+> start_rviz:=true start_sdk_server:=true mujoco:=true
+> ```
+
+## Setup
+
+### Prerequisites
+
+- On your robot, check the **service images** meet the minimum versions:
+ - **reachy2-core >= 1.7.5.2**
+ - **webrtc >= 2.0.1.1**
+
+Then, if you want to use VR teleoperation:
+
+- Install the [Reachy 2 teleoperation application](https://docs.pollen-robotics.com/teleoperation/teleoperation-introduction/discover-teleoperation/).
+ Use version **>=v1.2.0**
+
+We recommend using two computers: one for teleoperation (Windows required) and another for recording with LeRobot.
+
+### Install LeRobot
+
+Follow the [installation instructions](https://github.com/huggingface/lerobot#installation) to install LeRobot.
+
+Install LeRobot with Reachy 2 dependencies:
+
+```bash
+pip install -e ".[reachy2]"
+```
+
+### (Optional but recommended) Install pollen_data_acquisition_server
+
+How you manage Reachy 2 recording sessions is up to you, but the **easiest** way is to use this server so you can control sessions directly from the VR teleoperation app.
+
+> **Note:** Currently, only the VR teleoperation application works as a client for this server, so this step primarily targets teleoperation. You’re free to develop custom clients to manage sessions to suit your needs.
+
+In your LeRobot environment, install the server from source:
+
+```bash
+git clone https://github.com/pollen-robotics/pollen_data_acquisition_server.git
+cd pollen_data_acquisition_server
+pip install -e .
+```
+
+Find the [pollen_data_acquisition_server documentation here](https://github.com/pollen-robotics/pollen_data_acquisition_server).
+
+## Step 1: Recording
+
+### Get Reachy 2 IP address
+
+Before starting teleoperation and data recording, find the [robot's IP address](https://docs.pollen-robotics.com/getting-started/setup-reachy2/connect-reachy2/).
+We strongly recommend connecting all devices (PC and robot) via **Ethernet**.
+
+### Launch recording
+
+There are two ways to manage recording sessions when using the Reachy 2 VR teleoperation application:
+
+- **Using the data acquisition server (recommended for VR teleop)**: The VR app orchestrates sessions (via the server it tells LeRobot when to create datasets, start/stop episodes) while also controlling the robot’s motions.
+- **Using LeRobot’s record script**: LeRobot owns session control and decides when to start/stop episodes. If you also use the VR teleop app, it’s only for motion control.
+
+### Option 1: Using Pollen data acquisition server (recommended for VR teleop)
+
+Make sure you have installed pollen_data_acquisition_server, as explained in the Setup section.
+
+Launch the data acquisition server to be able to manage your session directly from the teleoperation application:
+
+```bash
+python -m pollen_data_acquisition_server.server
+```
+
+Then get into the teleoperation application and choose "Data acquisition session".
+You can finally setup your session by following the screens displayed.
+
+> Even without the VR app, you can use the `pollen_data_acquisition_server` with your own client implementation.
+
+### Option 2: Using lerobot.record
+
+Reachy 2 is fully supported by LeRobot’s recording features.
+If you choose this option but still want to use the VR teleoperation application, select "Standard session" in the app.
+
+**Example: start a recording without the mobile base:**
+First, add `reachy2` and `reachy2_teleoperator` to the imports of the record script. Then you can use the following command:
+
+```bash
+python -m lerobot.record \
+ --robot.type=reachy2 \
+ --robot.ip_address=192.168.0.200 \
+ --robot.id=r2-0000 \
+ --robot.use_external_commands=true \
+ --robot.with_mobile_base=false \
+ --teleop.type=reachy2_teleoperator \
+ --teleop.ip_address=192.168.0.200 \
+ --teleop.with_mobile_base=false \
+ --dataset.repo_id=pollen_robotics/record_test \
+ --dataset.single_task="Reachy 2 recording test" \
+ --dataset.num_episodes=1 \
+ --dataset.episode_time_s=5 \
+ --dataset.fps=15 \
+ --dataset.push_to_hub=true \
+ --dataset.private=true \
+ --display_data=true
+```
+
+#### Specific Options
+
+**Extended setup overview (all options included):**
+
+```bash
+python -m lerobot.record \
+ --robot.type=reachy2 \
+ --robot.ip_address=192.168.0.200 \
+ --robot.use_external_commands=true \
+ --robot.with_mobile_base=true \
+ --robot.with_l_arm=true \
+ --robot.with_r_arm=true \
+ --robot.with_neck=true \
+ --robot.with_antennas=true \
+ --robot.with_left_teleop_camera=true \
+ --robot.with_right_teleop_camera=true \
+ --robot.with_torso_camera=false \
+ --robot.disable_torque_on_disconnect=false \
+ --robot.max_relative_target=5.0 \
+ --teleop.type=reachy2_teleoperator \
+ --teleop.ip_address=192.168.0.200 \
+ --teleop.use_present_position=false \
+ --teleop.with_mobile_base=false \
+ --teleop.with_l_arm=true \
+ --teleop.with_r_arm=true \
+ --teleop.with_neck=true \
+ --teleop.with_antennas=true \
+ --dataset.repo_id=pollen_robotics/record_test \
+ --dataset.single_task="Reachy 2 recording test" \
+ --dataset.num_episodes=1 \
+ --dataset.episode_time_s=5 \
+ --dataset.fps=15 \
+ --dataset.push_to_hub=true \
+ --dataset.private=true \
+ --display_data=true
+```
+
+##### `--robot.use_external_commands`
+
+Determines whether LeRobot's `robot.send_action()` sends commands to the robot.
+**Must** be set to `false` while using the VR teleoperation application, as the app already sends commands.
+
+##### `--teleop.use_present_position`
+
+Determines whether the teleoperator reads the goal or present position of the robot.
+Must be set to true if a compliant Reachy 2 is used to control another one.
+
+##### Use the relevant parts
+
+From our initial tests, recording **all** joints when only some are moving can reduce model quality with certain policies.
+To avoid this, you can exclude specific parts from recording and replay using:
+
+```
+--robot.with_<part>=false
+```
+
+with `<part>` being one of: `mobile_base`, `l_arm`, `r_arm`, `neck`, `antennas`.
+It determines whether the corresponding part is recorded in the observations. True if not set.
+
+By default, **all parts are recorded**.
+
+The same per-part mechanism is available in `reachy2_teleoperator` as well:
+
+```
+--teleop.with_<part>=false
+```
+
+with `<part>` being one of: `mobile_base`, `l_arm`, `r_arm`, `neck`, `antennas`.
+It determines whether the corresponding part is recorded in the actions. True if not set.
+
+> **Important:** In a given session, the **enabled parts must match** on both the robot and the teleoperator.
+> For example, if the robot runs with `--robot.with_mobile_base=false`, the teleoperator must disable the same part with `--teleop.with_mobile_base=false`.
+
+##### Use the relevant cameras
+
+You can do the same for **cameras**. By default, only the **teleoperation cameras** are recorded (both `left_teleop_camera` and `right_teleop_camera`). Enable or disable each camera with:
+
+```
+--robot.with_left_teleop_camera=<true|false>
+--robot.with_right_teleop_camera=<true|false>
+--robot.with_torso_camera=<true|false>
+```
+
+
+## Step 2: Replay
+
+Make sure the robot is configured with the same parts as the dataset:
+
+```bash
+python -m lerobot.replay \
+ --robot.type=reachy2 \
+ --robot.ip_address=192.168.0.200 \
+ --robot.use_external_commands=false \
+ --robot.with_mobile_base=false \
+ --dataset.repo_id=pollen_robotics/record_test \
+  --dataset.episode=0 \
+  --display_data=true
+```
+
+## Step 3: Train
+
+```bash
+python -m lerobot.scripts.train \
+ --dataset.repo_id=pollen_robotics/record_test \
+ --policy.type=act \
+ --output_dir=outputs/train/reachy2_test \
+ --job_name=reachy2 \
+ --policy.device=mps \
+ --wandb.enable=true \
+ --policy.repo_id=pollen_robotics/record_test_policy
+```
+
+## Step 4: Evaluate
+
+```bash
+python -m lerobot.record \
+ --robot.type=reachy2 \
+ --robot.ip_address=192.168.0.200 \
+ --display_data=false \
+ --dataset.repo_id=pollen_robotics/eval_record_test \
+ --dataset.single_task="Evaluate reachy2 policy" \
+ --dataset.num_episodes=10 \
+ --policy.path=outputs/train/reachy2_test/checkpoints/last/pretrained_model
+```
diff --git a/docs/source/smolvla.mdx b/docs/source/smolvla.mdx
new file mode 100644
index 00000000..a56298b5
--- /dev/null
+++ b/docs/source/smolvla.mdx
@@ -0,0 +1,116 @@
+# SmolVLA
+
+SmolVLA is Hugging Face’s lightweight foundation model for robotics. Designed for easy fine-tuning on LeRobot datasets, it helps accelerate your development!
+
+
+
+
+
+ Figure 1. SmolVLA takes as input (i) multiple cameras views, (ii) the
+ robot’s current sensorimotor state, and (iii) a natural language
+ instruction, encoded into contextual features used to condition the action
+ expert when generating an action chunk.
+
+
+
+## Set Up Your Environment
+
+1. Install LeRobot by following our [Installation Guide](./installation).
+2. Install SmolVLA dependencies by running:
+
+ ```bash
+ pip install -e ".[smolvla]"
+ ```
+
+## Collect a dataset
+
+SmolVLA is a base model, so fine-tuning on your own data is required for optimal performance in your setup.
+We recommend recording ~50 episodes of your task as a starting point. Follow our guide to get started: [Recording a Dataset](./il_robots)
+
+
+
+In your dataset, make sure you have enough demonstrations for each variation you introduce (e.g., the cube position on the table for a cube pick-and-place task).
+
+We recommend checking out the dataset linked below for reference that was used in the [SmolVLA paper](https://huggingface.co/papers/2506.01844):
+
+🔗 [SVLA SO100 PickPlace](https://huggingface.co/spaces/lerobot/visualize_dataset?path=%2Flerobot%2Fsvla_so100_pickplace%2Fepisode_0)
+
+In this dataset, we recorded 50 episodes across 5 distinct cube positions. For each position, we collected 10 episodes of pick-and-place interactions. This structure, repeating each variation several times, helped the model generalize better. We tried a similar dataset with only 25 episodes, and it was not enough, leading to poor performance. So, data quality and quantity are definitely key.
+Once your dataset is available on the Hub, you can use our finetuning script to adapt SmolVLA to your application.
+
+
+
+## Finetune SmolVLA on your data
+
+Use [`smolvla_base`](https://hf.co/lerobot/smolvla_base), our pretrained 450M model, and fine-tune it on your data.
+Training the model for 20k steps takes roughly 4 hours on a single A100 GPU. You should tune the number of steps based on performance and your use case.
+
+If you don't have a GPU, you can train using our notebook on [Google Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb).
+
+Pass your dataset to the training script using `--dataset.repo_id`. If you want to test your installation, run the following command where we use one of the datasets we collected for the [SmolVLA Paper](https://huggingface.co/papers/2506.01844).
+
+```bash
+cd lerobot && lerobot-train \
+ --policy.path=lerobot/smolvla_base \
+ --dataset.repo_id=${HF_USER}/mydataset \
+ --batch_size=64 \
+ --steps=20000 \
+ --output_dir=outputs/train/my_smolvla \
+ --job_name=my_smolvla_training \
+ --policy.device=cuda \
+ --wandb.enable=true
+```
+
+
+ You can start with a small batch size and increase it incrementally, if the
+ GPU allows it, as long as loading times remain short.
+
+
+Fine-tuning is an art. For a complete overview of the options for finetuning, run
+
+```bash
+lerobot-train --help
+```
+
+
+
+
+
+ Figure 2: Comparison of SmolVLA across task variations. From left to right:
+ (1) pick-place cube counting, (2) pick-place cube counting, (3) pick-place
+ cube counting under perturbations, and (4) generalization on pick-and-place
+ of the lego block with real-world SO101.
+
+
+
+## Evaluate the finetuned model and run it in real-time
+
+As when recording an episode, it is recommended that you are logged in to the Hugging Face Hub; follow the corresponding steps in [Record a dataset](./il_robots).
+Once you are logged in, you can run inference in your setup by doing:
+
+```bash
+lerobot-record \
+ --robot.type=so101_follower \
+ --robot.port=/dev/ttyACM0 \ # <- Use your port
+ --robot.id=my_blue_follower_arm \ # <- Use your robot id
+ --robot.cameras="{ front: {type: opencv, index_or_path: 8, width: 640, height: 480, fps: 30}}" \ # <- Use your cameras
+ --dataset.single_task="Grasp a lego block and put it in the bin." \ # <- Use the same task description you used in your dataset recording
+ --dataset.repo_id=${HF_USER}/eval_DATASET_NAME_test \ # <- This will be the dataset name on HF Hub
+ --dataset.episode_time_s=50 \
+ --dataset.num_episodes=10 \
+ # <- Teleop optional if you want to teleoperate in between episodes \
+ # --teleop.type=so100_leader \
+ # --teleop.port=/dev/ttyACM0 \
+ # --teleop.id=my_red_leader_arm \
+ --policy.path=HF_USER/FINETUNE_MODEL_NAME # <- Use your fine-tuned model
+```
+
+Depending on your evaluation setup, you can configure the duration and the number of episodes to record for your evaluation suite.
diff --git a/docs/source/so100.mdx b/docs/source/so100.mdx
new file mode 100644
index 00000000..3c73ae80
--- /dev/null
+++ b/docs/source/so100.mdx
@@ -0,0 +1,640 @@
+# SO-100
+
+In the steps below, we explain how to assemble the SO-100 robot.
+
+## Source the parts
+
+Follow this [README](https://github.com/TheRobotStudio/SO-ARM100/blob/main/SO100.md). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts, and advice if it's your first time printing or if you don't own a 3D printer.
+
+## Install LeRobot 🤗
+
+To install LeRobot, follow our [Installation Guide](./installation)
+
+In addition to these instructions, you need to install the Feetech SDK:
+
+```bash
+pip install -e ".[feetech]"
+```
+
+## Configure the motors
+
+**Note:**
+Unlike the SO-101, the motor connectors are not easily accessible once the arm is assembled, so the configuration step must be done beforehand.
+
+### 1. Find the USB ports associated with each arm
+
+To find the port for each bus servo adapter, run this script:
+
+```bash
+lerobot-find-port
+```
+
+
+
+
+Example output:
+
+```
+Finding all available ports for the MotorBus.
+['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
+Remove the USB cable from your MotorsBus and press Enter when done.
+
+[...Disconnect corresponding leader or follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/tty.usbmodem575E0032081
+Reconnect the USB cable.
+```
+
+Where the found port is: `/dev/tty.usbmodem575E0032081` corresponding to your leader or follower arm.
+
+
+
+
+On Linux, you might need to give access to the USB ports by running:
+
+```bash
+sudo chmod 666 /dev/ttyACM0
+sudo chmod 666 /dev/ttyACM1
+```
+
+Example output:
+
+```
+Finding all available ports for the MotorBus.
+['/dev/ttyACM0', '/dev/ttyACM1']
+Remove the usb cable from your MotorsBus and press Enter when done.
+
+[...Disconnect corresponding leader or follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/ttyACM1
+Reconnect the USB cable.
+```
+
+Where the found port is: `/dev/ttyACM1` corresponding to your leader or follower arm.
+
+
+
+
+### 2. Set the motors ids and baudrates
+
+Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For the communication to work properly between the motors and the controller, we first need to set a unique, different id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate. In order to talk to each other, the controller and all the motors need to be configured with the same baudrate.
+
+To that end, we first need to connect to each motor individually with the controller in order to set these. Since we will write these parameters in the non-volatile section of the motors' internal memory (EEPROM), we'll only need to do this once.
+
+If you are repurposing motors from another robot, you will probably also need to perform this step as the ids and baudrate likely won't match.
+
+#### Follower
+
+Connect the USB cable from your computer and the power supply to the follower arm's controller board. Then, run the following command or the API example with the port you found in the previous step. You'll also need to give your follower arm a name with the `id` parameter.
+
+For a visual reference on how to set the motor ids please refer to [this video](https://huggingface.co/docs/lerobot/en/so101#setup-motors-video) where we follow the process for the SO101 arm.
+
+
+
+
+```bash
+lerobot-setup-motors \
+ --robot.type=so100_follower \
+ --robot.port=/dev/tty.usbmodem585A0076841 # <- paste here the port found at previous step
+```
+
+
+
+
+
+```python
+from lerobot.robots.so100_follower import SO100Follower, SO100FollowerConfig
+
+config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem585A0076841",
+ id="my_awesome_follower_arm",
+)
+follower = SO100Follower(config)
+follower.setup_motors()
+```
+
+
+
+
+
+You should see the following instruction
+
+```
+Connect the controller board to the 'gripper' motor only and press enter.
+```
+
+As instructed, plug in the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. When you press `[Enter]`, the script will automatically set the id and baudrate for that motor.
+
+
+Troubleshooting
+
+If you get an error at that point, check your cables and make sure they are plugged in properly:
+
+
+- Power supply
+- USB cable between your computer and the controller board
+- The 3-pin cable from the controller board to the motor
+
+If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).
+
+
+
+You should then see the following message:
+
+```
+'gripper' motor id set to 6
+```
+
+Followed by the next instruction:
+
+```
+Connect the controller board to the 'wrist_roll' motor only and press enter.
+```
+
+You can disconnect the 3-pin cable from the controller board, but you can leave it connected to the gripper motor on the other end, as it will already be in the right place. Now, plug in another 3-pin cable to the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that the motor itself isn't connected to any other one.
+
+Repeat the operation for each motor as instructed.
+
+> [!TIP]
+> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.
+
+When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.
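+
+Optionally, before attaching the controller board to the base, you can check that every motor responds on the daisy chain. The sketch below only reuses the `SO100Follower` API shown on this page; the assumption (not stated in this guide) is that connecting scans the bus and fails loudly if a motor is unreachable.
+
+```python
+from lerobot.robots.so100_follower import SO100Follower, SO100FollowerConfig
+
+config = SO100FollowerConfig(
+    port="/dev/tty.usbmodem585A0076841",  # <- the follower port found earlier
+    id="my_awesome_follower_arm",
+)
+follower = SO100Follower(config)
+# Assumption: connect() raises if a motor id/baudrate was not set correctly
+# or a 3-pin cable is loose, so reaching the print means the chain is OK.
+follower.connect(calibrate=False)
+print("All motors responded on the bus.")
+follower.disconnect()
+```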
+
+#### Leader
+
+Do the same steps for the leader arm.
+
+
+
+```bash
+lerobot-setup-motors \
+ --teleop.type=so100_leader \
+ --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
+```
+
+
+
+
+```python
+from lerobot.teleoperators.so100_leader import SO100Leader, SO100LeaderConfig
+
+config = SO100LeaderConfig(
+ port="/dev/tty.usbmodem585A0076841",
+ id="my_awesome_leader_arm",
+)
+leader = SO100Leader(config)
+leader.setup_motors()
+```
+
+
+
+
+
+## Step-by-Step Assembly Instructions
+
+### Remove the gears of the 6 leader motors
+
+
+Video removing gears
+
+
+
+
+
+
+
+Follow the video to remove the gears from the leader arm's motors. As a result, only the motors' position encoders are used, and the reduced friction makes the leader arm easier to operate.
+
+### Clean Parts
+
+Remove all support material from the 3D-printed parts. The easiest way to do this is using a small screwdriver to get underneath the support material.
+
+### Additional Guidance
+
+
+Video assembling arms
+
+
+
+
+
+
+
+**Note:**
+This video provides visual guidance for assembling the arms, but it doesn't specify when or how to do the wiring. Inserting the cables beforehand is much easier than doing it afterward. The first arm may take a bit more than 1 hour to assemble, but once you get used to it, you can assemble the second arm in under 1 hour.
+
+---
+
+### First Motor
+
+**Step 2: Insert Wires**
+
+- Insert two wires into the first motor.
+
+
+
+**Step 3: Install in Base**
+
+- Place the first motor into the base.
+
+
+
+**Step 4: Secure Motor**
+
+- Fasten the motor with 4 screws: two from the bottom and two from the top.
+
+**Step 5: Attach Motor Holder**
+
+- Slide over the first motor holder and fasten it using two screws (one on each side).
+
+
+
+**Step 6: Attach Motor Horns**
+
+- Install both motor horns, securing the top horn with a screw. Try not to move the motor position when attaching the motor horn, especially for the leader arms, where we removed the gears.
+
+
+
+
+
+ Video adding motor horn
+
+
+
+
+**Step 7: Attach Shoulder Part**
+
+- Route one wire to the back of the robot and the other to the left or towards you (see photo).
+- Attach the shoulder part.
+
+
+
+**Step 8: Secure Shoulder**
+
+- Tighten the shoulder part with 4 screws on top and 4 on the bottom
+ _(access bottom holes by turning the shoulder)._
+
+---
+
+### Second Motor Assembly
+
+**Step 9: Install Motor 2**
+
+- Slide the second motor in from the top and link the wire from motor 1 to motor 2.
+
+
+
+**Step 10: Attach Shoulder Holder**
+
+- Add the shoulder motor holder.
+- Ensure the wire from motor 1 to motor 2 goes behind the holder while the other wire is routed upward (see photo).
+- This part can be a tight fit; you can use a workbench (as shown in the image) or a similar setup to press the part around the motor.
+
+
+
+
+
+
+
+**Step 11: Secure Motor 2**
+
+- Fasten the second motor with 4 screws.
+
+**Step 12: Attach Motor Horn**
+
+- Attach both motor horns to motor 2, again using the horn screw.
+
+**Step 13: Attach Base**
+
+- Install the base attachment using 2 screws.
+
+
+
+**Step 14: Attach Upper Arm**
+
+- Attach the upper arm with 4 screws on each side.
+
+
+
+---
+
+### Third Motor Assembly
+
+**Step 15: Install Motor 3**
+
+- Route the motor cable from motor 2 through the cable holder to motor 3, then secure motor 3 with 4 screws.
+
+**Step 16: Attach Motor Horn**
+
+- Attach both motor horns to motor 3 and secure one again with a horn screw.
+
+
+
+**Step 17: Attach Forearm**
+
+- Connect the forearm to motor 3 using 4 screws on each side.
+
+
+
+---
+
+### Fourth Motor Assembly
+
+**Step 18: Install Motor 4**
+
+- Slide in motor 4, attach the cable from motor 3, and secure the cable in its holder with a screw.
+
+
+
+
+
+
+**Step 19: Attach Motor Holder 4**
+
+- Install the fourth motor holder (a tight fit). Ensure one wire is routed upward and the wire from motor 3 is routed downward (see photo).
+
+
+
+**Step 20: Secure Motor 4 & Attach Horn**
+
+- Fasten motor 4 with 4 screws and attach its motor horns, using a horn screw for one of them.
+
+
+
+---
+
+### Wrist Assembly
+
+**Step 21: Install Motor 5**
+
+- Insert motor 5 into the wrist holder and secure it with 2 front screws.
+
+
+
+**Step 22: Attach Wrist**
+
+- Connect the wire from motor 4 to motor 5, and already insert the other wire for the gripper.
+- Secure the wrist to motor 4 using 4 screws on both sides.
+
+
+
+**Step 23: Attach Wrist Horn**
+
+- Install only one motor horn on the wrist motor and secure it with a horn screw.
+
+
+
+---
+
+### Follower Configuration
+
+**Step 24: Attach Gripper**
+
+- Attach the gripper to motor 5.
+
+
+
+**Step 25: Install Gripper Motor**
+
+- Insert the gripper motor, connect the motor wire from motor 5 to motor 6, and secure it with 3 screws on each side.
+
+
+
+**Step 26: Attach Gripper Horn & Claw**
+
+- Attach the motor horns and again use a horn screw.
+- Install the gripper claw and secure it with 4 screws on both sides.
+
+
+
+**Step 27: Mount Controller**
+
+- Attach the motor controller to the back of the robot.
+
+
+
+
+
+
+_Assembly complete – proceed to Leader arm assembly._
+
+---
+
+### Leader Configuration
+
+For the leader configuration, perform **Steps 1–23**. Make sure that you have removed the gears from the leader motors.
+
+**Step 24: Attach Leader Holder**
+
+- Mount the leader holder onto the wrist and secure it with a screw.
+
+
+
+**Step 25: Attach Handle**
+
+- Attach the handle to motor 5 using 4 screws.
+
+
+
+**Step 26: Install Gripper Motor**
+
+- Insert the gripper motor, secure it with 3 screws on each side, attach a motor horn using a horn screw, and connect the motor wire.
+
+
+
+**Step 27: Attach Trigger**
+
+- Attach the follower trigger with 4 screws.
+
+
+
+**Step 28: Mount Controller**
+
+- Attach the motor controller to the back of the robot.
+
+
+
+
+
+
+## Calibrate
+
+Next, you'll need to calibrate your robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
+The calibration process is very important because it allows a neural network trained on one robot to work on another.
+
+#### Follower
+
+Run the following command or API example to calibrate the follower arm:
+
+
+
+
+```bash
+lerobot-calibrate \
+ --robot.type=so100_follower \
+ --robot.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
+ --robot.id=my_awesome_follower_arm # <- Give the robot a unique name
+```
+
+
+
+
+
+```python
+from lerobot.robots.so100_follower import SO100FollowerConfig, SO100Follower
+
+config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem585A0076891",
+ id="my_awesome_follower_arm",
+)
+
+follower = SO100Follower(config)
+follower.connect(calibrate=False)
+follower.calibrate()
+follower.disconnect()
+```
+
+
+
+
+
+The calibration method is unified across most robots, so the calibration steps for the SO100 arm are the same as for the Koch and SO101. First, move the robot to the position where each joint is in the middle of its range, then press `Enter`. Second, move all joints through their full range of motion. For reference, a video of this process for the SO101 can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).
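+
+After calibrating the follower, a quick way to sanity-check the result is to read back the joint positions while the arm sits in its middle position. This is only a sketch: it assumes a `get_observation()` method returning the current joint values, which is not shown in this guide.
+
+```python
+from lerobot.robots.so100_follower import SO100Follower, SO100FollowerConfig
+
+config = SO100FollowerConfig(
+    port="/dev/tty.usbmodem585A0076891",
+    id="my_awesome_follower_arm",
+)
+follower = SO100Follower(config)
+# Assumption: with the arm already calibrated, connect(calibrate=False) loads
+# the saved calibration instead of prompting for a new one.
+follower.connect(calibrate=False)
+# Assumption: get_observation() returns a dict of current joint positions.
+print(follower.get_observation())
+follower.disconnect()
+```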
+
+#### Leader
+
+Do the same steps to calibrate the leader arm, run the following command or API example:
+
+
+
+
+```bash
+lerobot-calibrate \
+ --teleop.type=so100_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
+ --teleop.id=my_awesome_leader_arm # <- Give the robot a unique name
+```
+
+
+
+
+
+```python
+from lerobot.teleoperators.so100_leader import SO100LeaderConfig, SO100Leader
+
+config = SO100LeaderConfig(
+ port="/dev/tty.usbmodem58760431551",
+ id="my_awesome_leader_arm",
+)
+
+leader = SO100Leader(config)
+leader.connect(calibrate=False)
+leader.calibrate()
+leader.disconnect()
+```
+
+
+
+
+
+Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)
+
+> [!TIP]
+> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/docs/source/so101.mdx b/docs/source/so101.mdx
new file mode 100644
index 00000000..00ec3eb7
--- /dev/null
+++ b/docs/source/so101.mdx
@@ -0,0 +1,436 @@
+# SO-101
+
+In the steps below, we explain how to assemble our flagship robot, the SO-101.
+
+## Source the parts
+
+Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, the instructions to 3D print the parts, and advice if it's your first time printing or if you don't own a 3D printer.
+
+## Install LeRobot 🤗
+
+To install LeRobot, follow our [Installation Guide](./installation)
+
+In addition to these instructions, you need to install the Feetech SDK:
+
+```bash
+pip install -e ".[feetech]"
+```
+
+## Step-by-Step Assembly Instructions
+
+The follower arm uses 6x STS3215 motors with 1/345 gearing. The leader, however, uses motors with three different gear ratios so that it can both sustain its own weight and be moved without requiring much force. The table below shows which gear ratio is needed for each joint.
+
+| Leader-Arm Axis | Motor | Gear Ratio |
+| ------------------- | :---: | :--------: |
+| Base / Shoulder Pan | 1 | 1 / 191 |
+| Shoulder Lift | 2 | 1 / 345 |
+| Elbow Flex | 3 | 1 / 191 |
+| Wrist Flex | 4 | 1 / 147 |
+| Wrist Roll | 5 | 1 / 147 |
+| Gripper | 6 | 1 / 147 |
+
+### Clean Parts
+
+Remove all support material from the 3D-printed parts. The easiest way to do this is using a small screwdriver to get underneath the support material.
+
+It is advisable to plug one 3-pin cable into each motor right after placing it, before continuing the assembly.
+
+### Joint 1
+
+- Place the first motor into the base.
+- Fasten the motor with 4 M2x6mm screws (smallest screws). Two from the top and two from the bottom.
+- Slide over the first motor holder and fasten it using two M2x6mm screws (one on each side).
+- Install both motor horns, securing the top horn with a M3x6mm screw.
+- Attach the shoulder part.
+- Tighten the shoulder part with 4 M3x6mm screws on top and 4 M3x6mm screws on the bottom
+- Add the shoulder motor holder.
+
+
+
+
+
+### Joint 2
+
+- Slide the second motor in from the top.
+- Fasten the second motor with 4 M2x6mm screws.
+- Attach both motor horns to motor 2, again use the M3x6mm horn screw.
+- Attach the upper arm with 4 M3x6mm screws on each side.
+
+
+
+
+
+### Joint 3
+
+- Insert motor 3 and fasten using 4 M2x6mm screws
+- Attach both motor horns to motor 3 and secure one again with a M3x6mm horn screw.
+- Connect the forearm to motor 3 using 4 M3x6mm screws on each side.
+
+
+
+
+
+### Joint 4
+
+- Slide over motor holder 4.
+- Slide in motor 4.
+- Fasten motor 4 with 4 M2x6mm screws and attach its motor horns, use a M3x6mm horn screw.
+
+
+
+
+
+### Joint 5
+
+- Insert motor 5 into the wrist holder and secure it with 2 M2x6mm front screws.
+- Install only one motor horn on the wrist motor and secure it with a M3x6mm horn screw.
+- Secure the wrist to motor 4 using 4 M3x6mm screws on both sides.
+
+
+
+
+
+### Gripper / Handle
+
+
+
+
+- Attach the gripper to motor 5 by fastening it to the motor horn on the wrist using 4 M3x6mm screws.
+- Insert the gripper motor and secure it with 2 M2x6mm screws on each side.
+- Attach the motor horns and again use a M3x6mm horn screw.
+- Install the gripper claw and secure it with 4 M3x6mm screws on both sides.
+
+
+
+
+
+
+
+
+- Mount the leader holder onto the wrist and secure it with 4 M3x6mm screws.
+- Attach the handle to motor 5 using 1 M2x6mm screw.
+- Insert the gripper motor, secure it with 2 M2x6mm screws on each side, attach a motor horn using a M3x6mm horn screw.
+- Attach the follower trigger with 4 M3x6mm screws.
+
+
+
+
+
+
+
+
+## Configure the motors
+
+### 1. Find the USB ports associated with each arm
+
+To find the port for each bus servo adapter, connect the MotorBus to your computer via USB and power it on. Then run the following script and disconnect the MotorBus when prompted:
+
+```bash
+lerobot-find-port
+```
+
+
+
+
+Example output:
+
+```
+Finding all available ports for the MotorBus.
+['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
+Remove the USB cable from your MotorsBus and press Enter when done.
+
+[...Disconnect corresponding leader or follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/tty.usbmodem575E0032081
+Reconnect the USB cable.
+```
+
+Here, the detected port is `/dev/tty.usbmodem575E0032081`, corresponding to the leader or follower arm you disconnected.
+
+
+
+
+On Linux, you might need to give access to the USB ports by running:
+
+```bash
+sudo chmod 666 /dev/ttyACM0
+sudo chmod 666 /dev/ttyACM1
+```
+
+Example output:
+
+```
+Finding all available ports for the MotorBus.
+['/dev/ttyACM0', '/dev/ttyACM1']
+Remove the usb cable from your MotorsBus and press Enter when done.
+
+[...Disconnect corresponding leader or follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/ttyACM1
+Reconnect the USB cable.
+```
+
+Here, the detected port is `/dev/ttyACM1`, corresponding to the leader or follower arm you disconnected.
+
+
+
+
+### 2. Set the motor ids and baudrates
+
+Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For the communication to work properly between the motors and the controller, we first need to set a unique, different id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate. In order to talk to each other, the controller and all the motors need to be configured with the same baudrate.
+
+To that end, we first need to connect to each motor individually with the controller in order to set these. Since we will write these parameters in the non-volatile section of the motors' internal memory (EEPROM), we'll only need to do this once.
+
+If you are repurposing motors from another robot, you will probably also need to perform this step as the ids and baudrate likely won't match.
+
+The video below shows the sequence of steps for setting the motor ids.
+
+##### Setup motors video
+
+
+
+
+
+#### Follower
+
+Connect the USB cable from your computer and the power supply to the follower arm's controller board. Then, run the following command or the API example with the port you found in the previous step. You'll also need to give your follower arm a name with the `id` parameter.
+
+
+
+
+```bash
+lerobot-setup-motors \
+ --robot.type=so101_follower \
+ --robot.port=/dev/tty.usbmodem585A0076841 # <- paste here the port found at previous step
+```
+
+
+
+
+
+```python
+from lerobot.robots.so101_follower import SO101Follower, SO101FollowerConfig
+
+config = SO101FollowerConfig(
+ port="/dev/tty.usbmodem585A0076841",
+ id="my_awesome_follower_arm",
+)
+follower = SO101Follower(config)
+follower.setup_motors()
+```
+
+
+
+
+
+You should see the following instruction
+
+```bash
+Connect the controller board to the 'gripper' motor only and press enter.
+```
+
+As instructed, plug in the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. When you press `[Enter]`, the script will automatically set the id and baudrate for that motor.
+
+
+Troubleshooting
+
+If you get an error at that point, check your cables and make sure they are plugged in properly:
+
+
+
+- Power supply
+- USB cable between your computer and the controller board
+- The 3-pin cable from the controller board to the motor
+
+If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).
+
+
+
+You should then see the following message:
+
+```bash
+'gripper' motor id set to 6
+```
+
+Followed by the next instruction:
+
+```bash
+Connect the controller board to the 'wrist_roll' motor only and press enter.
+```
+
+You can disconnect the 3-pin cable from the controller board, but you can leave it connected to the gripper motor on the other end, as it will already be in the right place. Now, plug in another 3-pin cable to the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that the motor itself isn't connected to any other one.
+
+Repeat the operation for each motor as instructed.
+
+> [!TIP]
+> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.
+
+When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.
+
+#### Leader
+
+Do the same steps for the leader arm.
+
+
+
+
+```bash
+lerobot-setup-motors \
+ --teleop.type=so101_leader \
+ --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
+```
+
+
+
+
+
+```python
+from lerobot.teleoperators.so101_leader import SO101Leader, SO101LeaderConfig
+
+config = SO101LeaderConfig(
+ port="/dev/tty.usbmodem585A0076841",
+ id="my_awesome_leader_arm",
+)
+leader = SO101Leader(config)
+leader.setup_motors()
+```
+
+
+
+
+
+## Calibrate
+
+Next, you'll need to calibrate your robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
+The calibration process is very important because it allows a neural network trained on one robot to work on another.
+
+#### Follower
+
+Run the following command or API example to calibrate the follower arm:
+
+
+
+
+```bash
+lerobot-calibrate \
+ --robot.type=so101_follower \
+ --robot.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
+ --robot.id=my_awesome_follower_arm # <- Give the robot a unique name
+```
+
+
+
+
+
+```python
+from lerobot.robots.so101_follower import SO101FollowerConfig, SO101Follower
+
+config = SO101FollowerConfig(
+ port="/dev/tty.usbmodem585A0076891",
+ id="my_awesome_follower_arm",
+)
+
+follower = SO101Follower(config)
+follower.connect(calibrate=False)
+follower.calibrate()
+follower.disconnect()
+```
+
+
+
+
+
+The video below shows how to perform the calibration. First, move the robot to the position where all joints are in the middle of their ranges. Then, after pressing `Enter`, move each joint through its full range of motion.
+
+##### Calibration video
+
+
+
+
+
+#### Leader
+
+Do the same steps to calibrate the leader arm, run the following command or API example:
+
+
+
+
+```bash
+lerobot-calibrate \
+ --teleop.type=so101_leader \
+ --teleop.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
+ --teleop.id=my_awesome_leader_arm # <- Give the robot a unique name
+```
+
+
+
+
+
+```python
+from lerobot.teleoperators.so101_leader import SO101LeaderConfig, SO101Leader
+
+config = SO101LeaderConfig(
+ port="/dev/tty.usbmodem58760431551",
+ id="my_awesome_leader_arm",
+)
+
+leader = SO101Leader(config)
+leader.connect(calibrate=False)
+leader.calibrate()
+leader.disconnect()
+```
+
+
+
+
+
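+With both arms calibrated, you can optionally run a quick teleoperation sanity check before moving on to the tutorial below. The sketch reuses the classes from this page; reading an action from the leader with `get_action()` and forwarding it to the follower with `send_action()` mirrors LeRobot's teleoperation loop, but those method names are an assumption, not something shown in this guide.
+
+```python
+import time
+
+from lerobot.robots.so101_follower import SO101Follower, SO101FollowerConfig
+from lerobot.teleoperators.so101_leader import SO101Leader, SO101LeaderConfig
+
+follower = SO101Follower(SO101FollowerConfig(
+    port="/dev/tty.usbmodem585A0076841",  # <- your follower port
+    id="my_awesome_follower_arm",
+))
+leader = SO101Leader(SO101LeaderConfig(
+    port="/dev/tty.usbmodem575E0031751",  # <- your leader port
+    id="my_awesome_leader_arm",
+))
+
+follower.connect()
+leader.connect()
+
+try:
+    while True:
+        # Assumption: the leader's action dict matches what the follower expects.
+        action = leader.get_action()
+        follower.send_action(action)
+        time.sleep(1 / 60)  # pace the loop at roughly 60 Hz
+except KeyboardInterrupt:
+    pass
+finally:
+    follower.disconnect()
+    leader.disconnect()
+```
+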
+Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)
+
+> [!TIP]
+> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/docs/source/using_dataset_tools.mdx b/docs/source/using_dataset_tools.mdx
new file mode 100644
index 00000000..affca0ee
--- /dev/null
+++ b/docs/source/using_dataset_tools.mdx
@@ -0,0 +1,102 @@
+# Using Dataset Tools
+
+This guide covers the dataset tools utilities available in LeRobot for modifying and editing existing datasets.
+
+## Overview
+
+LeRobot provides several utilities for manipulating datasets:
+
+1. **Delete Episodes** - Remove specific episodes from a dataset
+2. **Split Dataset** - Divide a dataset into multiple smaller datasets
+3. **Merge Datasets** - Combine multiple datasets into one. The datasets must have identical features, and episodes are concatenated in the order specified in `repo_ids`
+4. **Add Features** - Add new features to a dataset
+5. **Remove Features** - Remove features from a dataset
+
+The core implementation is in `lerobot.datasets.dataset_tools`.
+An example script detailing how to use the tools API is available in `examples/dataset/use_dataset_tools.py`.
+
+## Command-Line Tool: lerobot-edit-dataset
+
+`lerobot-edit-dataset` is a command-line script for editing datasets. It can be used to delete episodes, split datasets, merge datasets, add features, and remove features.
+
+Run `lerobot-edit-dataset --help` for more information on the configuration of each operation.
+
+### Usage Examples
+
+#### Delete Episodes
+
+Remove specific episodes from a dataset. This is useful for filtering out undesired data.
+
+```bash
+# Delete episodes 0, 2, and 5 (modifies original dataset)
+lerobot-edit-dataset \
+ --repo_id lerobot/pusht \
+ --operation.type delete_episodes \
+ --operation.episode_indices "[0, 2, 5]"
+
+# Delete episodes and save to a new dataset (preserves original dataset)
+lerobot-edit-dataset \
+ --repo_id lerobot/pusht \
+ --new_repo_id lerobot/pusht_after_deletion \
+ --operation.type delete_episodes \
+ --operation.episode_indices "[0, 2, 5]"
+```
+
+#### Split Dataset
+
+Divide a dataset into multiple subsets.
+
+```bash
+# Split by fractions (e.g. 80% train, 10% test, 10% val)
+lerobot-edit-dataset \
+    --repo_id lerobot/pusht \
+    --operation.type split \
+    --operation.splits '{"train": 0.8, "test": 0.1, "val": 0.1}'
+
+# Split by specific episode indices
+lerobot-edit-dataset \
+ --repo_id lerobot/pusht \
+ --operation.type split \
+ --operation.splits '{"task1": [0, 1, 2, 3], "task2": [4, 5]}'
+```
+
+There are no constraints on the split names; they can be chosen freely by the user. The resulting datasets are saved under the original repo id with the split name appended, e.g. `lerobot/pusht_train`, `lerobot/pusht_task1`, `lerobot/pusht_task2`.
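+
+Each resulting split can then be loaded like any other dataset. A minimal sketch, assuming the `LeRobotDataset` class lives under `lerobot.datasets.lerobot_dataset` (mirroring the `lerobot.datasets.dataset_tools` path mentioned above) and exposes `num_episodes` / `num_frames`:
+
+```python
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+# Load the "train" split produced by the fraction-based split above.
+train_split = LeRobotDataset("lerobot/pusht_train")
+print(train_split.num_episodes, train_split.num_frames)
+```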
+
+#### Merge Datasets
+
+Combine multiple datasets into a single dataset.
+
+```bash
+# Merge train and validation splits back into one dataset
+lerobot-edit-dataset \
+ --repo_id lerobot/pusht_merged \
+ --operation.type merge \
+ --operation.repo_ids "['lerobot/pusht_train', 'lerobot/pusht_val']"
+```
+
+#### Remove Features
+
+Remove features from a dataset.
+
+```bash
+# Remove a camera feature
+lerobot-edit-dataset \
+ --repo_id lerobot/pusht \
+ --operation.type remove_feature \
+ --operation.feature_names "['observation.images.top']"
+```
+
+### Push to Hub
+
+Add the `--push_to_hub` flag to any command to automatically upload the resulting dataset to the Hugging Face Hub:
+
+```bash
+lerobot-edit-dataset \
+ --repo_id lerobot/pusht \
+ --new_repo_id lerobot/pusht_after_deletion \
+ --operation.type delete_episodes \
+ --operation.episode_indices "[0, 2, 5]" \
+ --push_to_hub
+```
+
+There is also a tool for adding features to a dataset, which is not yet covered by `lerobot-edit-dataset`.
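+
+For completeness, here is a hypothetical sketch of what adding a feature through the Python API could look like. The function name `add_feature` and its arguments are illustrative assumptions only; refer to `lerobot.datasets.dataset_tools` and `examples/dataset/use_dataset_tools.py` for the actual API.
+
+```python
+# Hypothetical usage, for illustration only: the real function name and
+# signature may differ, see examples/dataset/use_dataset_tools.py.
+from lerobot.datasets import dataset_tools
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+dataset = LeRobotDataset("lerobot/pusht")
+new_dataset = dataset_tools.add_feature(
+    dataset,
+    feature_name="success",                     # hypothetical per-frame label
+    feature_values=[0.0] * dataset.num_frames,  # one value per frame
+)
+```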
diff --git a/examples/10_use_so100.md b/examples/10_use_so100.md
deleted file mode 100644
index 62164cf5..00000000
--- a/examples/10_use_so100.md
+++ /dev/null
@@ -1,624 +0,0 @@
-# Using the [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot
-
-## Table of Contents
-
- - [A. Source the parts](#a-source-the-parts)
- - [B. Install LeRobot](#b-install-lerobot)
- - [C. Configure the Motors](#c-configure-the-motors)
- - [D. Step-by-Step Assembly Instructions](#d-step-by-step-assembly-instructions)
- - [E. Calibrate](#e-calibrate)
- - [F. Teleoperate](#f-teleoperate)
- - [G. Record a dataset](#g-record-a-dataset)
- - [H. Visualize a dataset](#h-visualize-a-dataset)
- - [I. Replay an episode](#i-replay-an-episode)
- - [J. Train a policy](#j-train-a-policy)
- - [K. Evaluate your policy](#k-evaluate-your-policy)
- - [L. More Information](#l-more-information)
-
-## A. Source the parts
-
-Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts,
-and advice if it's your first time printing or if you don't own a 3D printer.
-
-Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
-
-## B. Install LeRobot
-
-> [!TIP]
-> We use the Command Prompt (cmd) quite a lot. If you are not comfortable using the cmd or want to brush up using the command line you can have a look here: [Command line crash course](https://developer.mozilla.org/en-US/docs/Learn_web_development/Getting_started/Environment_setup/Command_line)
-
-On your computer:
-
-#### 1. [Install Miniconda](https://docs.anaconda.com/miniconda/install/#quick-command-line-install):
-
-#### 2. Restart shell
-Copy paste in your shell: `source ~/.bashrc` or for Mac: `source ~/.bash_profile` or `source ~/.zshrc` if you're using zshell
-
-#### 3. Create and activate a fresh conda environment for lerobot
-
-
-Video install instructions
-
-
-
-
-
-```bash
-conda create -y -n lerobot python=3.10
-```
-
-Then activate your conda environment (do this each time you open a shell to use lerobot!):
-```bash
-conda activate lerobot
-```
-
-#### 4. Clone LeRobot:
-```bash
-git clone https://github.com/huggingface/lerobot.git ~/lerobot
-```
-
-#### 5. Install ffmpeg in your environment:
-When using `miniconda`, install `ffmpeg` in your environment:
-```bash
-conda install ffmpeg -c conda-forge
-```
-
-#### 6. Install LeRobot with dependencies for the feetech motors:
-```bash
-cd ~/lerobot && pip install -e ".[feetech]"
-```
-
-Great :hugs:! You are now done installing LeRobot and we can begin assembling the SO100 arms :robot:.
-Every time you now want to use LeRobot you can go to the `~/lerobot` folder where we installed LeRobot and run one of the commands.
-
-## C. Configure the motors
-
-> [!NOTE]
-> Throughout this tutorial you will find videos on how to do the steps, the full video tutorial can be found here: [assembly video](https://www.youtube.com/watch?v=FioA2oeFZ5I).
-
-### 1. Find the USB ports associated to each arm
-
-Designate one bus servo adapter and 6 motors for your leader arm, and similarly the other bus servo adapter and 6 motors for the follower arm. It's convenient to label them and write on each motor if it's for the follower `F` or for the leader `L` and it's ID from 1 to 6 (F1...F6 and L1...L6).
-
-#### a. Run the script to find port
-
-
-Video finding port
-
-
-
-
-To find the port for each bus servo adapter, run the utility script:
-```bash
-python lerobot/scripts/find_motors_bus_port.py
-```
-
-#### b. Example outputs
-
-Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect leader arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0031751
-Reconnect the usb cable.
-```
-Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the usb cable.
-```
-
-#### c. Troubleshooting
-On Linux, you might need to give access to the USB ports by running:
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-#### d. Update config file
-
-IMPORTANTLY: Now that you have your ports, update the **port** default values of [`SO100RobotConfig`](../lerobot/common/robot_devices/robots/configs.py). You will find something like:
-```diff
-@RobotConfig.register_subclass("so100")
-@dataclass
-class So100RobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/so100"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
-- port="/dev/tty.usbmodem58760431091",
-+ port="{ADD YOUR LEADER PORT}",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
-- port="/dev/tty.usbmodem585A0076891",
-+ port="{ADD YOUR FOLLOWER PORT}",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-```
-
-### 2. Assembling the Base
-Let's begin with assembling the follower arm base
-
-#### a. Set IDs for all 12 motors
-
-
-Video configuring motor
-
-
-
-
-Plug your first motor F1 and run this script to set its ID to 1. It will also set its present position to 2048, so expect your motor to rotate. Replace the text after --port to the corresponding follower control board port and run this command in cmd:
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 1
-```
-
-> [!NOTE]
-> These motors are currently limited. They can take values between 0 and 4096 only, which corresponds to a full turn. They can't turn more than that. 2048 is at the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) and reach the maximum range, or take +2048 steps (180 degrees clockwise) and reach the maximum range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift up to ± 2048 steps (± 180 degrees).
-
-Then unplug your motor and plug the second motor and set its ID to 2.
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 2
-```
-
-Redo the process for all your motors until ID 6. Do the same for the 6 motors of the leader arm.
-
-
-#### b. Remove the gears of the 6 leader motors
-
-
-Video removing gears
-
-
-
-
-
-
-Follow the video for removing gears. You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
-
-## D. Step-by-Step Assembly Instructions
-
-**Step 1: Clean Parts**
-- Remove all support material from the 3D-printed parts.
----
-
-### Additional Guidance
-
-
-Video assembling arms
-
-
-
-
-
-**Note:**
-This video provides visual guidance for assembling the arms, but it doesn't specify when or how to do the wiring. Inserting the cables beforehand is much easier than doing it afterward. The first arm may take a bit more than 1 hour to assemble, but once you get used to it, you can assemble the second arm in under 1 hour.
-
----
-
-### First Motor
-
-**Step 2: Insert Wires**
-- Insert two wires into the first motor.
-
-
-
-**Step 3: Install in Base**
-- Place the first motor into the base.
-
-
-
-**Step 4: Secure Motor**
-- Fasten the motor with 4 screws. Two from the bottom and two from top.
-
-**Step 5: Attach Motor Holder**
-- Slide over the first motor holder and fasten it using two screws (one on each side).
-
-
-
-**Step 6: Attach Motor Horns**
-- Install both motor horns, securing the top horn with a screw. Try not to move the motor position when attaching the motor horn, especially for the leader arms, where we removed the gears.
-
-
-
- Video adding motor horn
-
-
-
-**Step 7: Attach Shoulder Part**
-- Route one wire to the back of the robot and the other to the left or in photo towards you (see photo).
-- Attach the shoulder part.
-
-
-
-**Step 8: Secure Shoulder**
-- Tighten the shoulder part with 4 screws on top and 4 on the bottom
-*(access bottom holes by turning the shoulder).*
-
----
-
-### Second Motor Assembly
-
-**Step 9: Install Motor 2**
-- Slide the second motor in from the top and link the wire from motor 1 to motor 2.
-
-
-
-**Step 10: Attach Shoulder Holder**
-- Add the shoulder motor holder.
-- Ensure the wire from motor 1 to motor 2 goes behind the holder while the other wire is routed upward (see photo).
-- This part can be tight to assemble, you can use a workbench like the image or a similar setup to push the part around the motor.
-
-
-
-
-
-
-
-**Step 11: Secure Motor 2**
-- Fasten the second motor with 4 screws.
-
-**Step 12: Attach Motor Horn**
-- Attach both motor horns to motor 2, again use the horn screw.
-
-**Step 13: Attach Base**
-- Install the base attachment using 2 screws.
-
-
-
-**Step 14: Attach Upper Arm**
-- Attach the upper arm with 4 screws on each side.
-
-
-
----
-
-### Third Motor Assembly
-
-**Step 15: Install Motor 3**
-- Route the motor cable from motor 2 through the cable holder to motor 3, then secure motor 3 with 4 screws.
-
-**Step 16: Attach Motor Horn**
-- Attach both motor horns to motor 3 and secure one again with a horn screw.
-
-
-
-**Step 17: Attach Forearm**
-- Connect the forearm to motor 3 using 4 screws on each side.
-
-
-
----
-
-### Fourth Motor Assembly
-
-**Step 18: Install Motor 4**
-- Slide in motor 4, attach the cable from motor 3, and secure the cable in its holder with a screw.
-
-
-
-
-
-
-**Step 19: Attach Motor Holder 4**
-- Install the fourth motor holder (a tight fit). Ensure one wire is routed upward and the wire from motor 3 is routed downward (see photo).
-
-
-
-**Step 20: Secure Motor 4 & Attach Horn**
-- Fasten motor 4 with 4 screws and attach its motor horns, use for one a horn screw.
-
-
-
----
-
-### Wrist Assembly
-
-**Step 21: Install Motor 5**
-- Insert motor 5 into the wrist holder and secure it with 2 front screws.
-
-
-
-**Step 22: Attach Wrist**
-- Connect the wire from motor 4 to motor 5. And already insert the other wire for the gripper.
-- Secure the wrist to motor 4 using 4 screws on both sides.
-
-
-
-**Step 23: Attach Wrist Horn**
-- Install only one motor horn on the wrist motor and secure it with a horn screw.
-
-
-
----
-
-### Follower Configuration
-
-**Step 24: Attach Gripper**
-- Attach the gripper to motor 5.
-
-
-
-**Step 25: Install Gripper Motor**
-- Insert the gripper motor, connect the motor wire from motor 5 to motor 6, and secure it with 3 screws on each side.
-
-
-
-**Step 26: Attach Gripper Horn & Claw**
-- Attach the motor horns and again use a horn screw.
-- Install the gripper claw and secure it with 4 screws on both sides.
-
-
-
-**Step 27: Mount Controller**
-- Attach the motor controller on the back.
-
-
-
-
-
-
-*Assembly complete – proceed to Leader arm assembly.*
-
----
-
-### Leader Configuration
-
-For the leader configuration, perform **Steps 1–23**. Make sure that you removed the motor gears from the motors.
-
-**Step 24: Attach Leader Holder**
-- Mount the leader holder onto the wrist and secure it with a screw.
-
-
-
-**Step 25: Attach Handle**
-- Attach the handle to motor 5 using 4 screws.
-
-
-
-**Step 26: Install Gripper Motor**
-- Insert the gripper motor, secure it with 3 screws on each side, attach a motor horn using a horn screw, and connect the motor wire.
-
-
-
-**Step 27: Attach Trigger**
-- Attach the follower trigger with 4 screws.
-
-
-
-**Step 28: Mount Controller**
-- Attach the motor controller on the back.
-
-
-
-
-
-
-*Assembly complete – proceed to calibration.*
-
-
-## E. Calibrate
-
-Next, you'll need to calibrate your SO-100 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
-The calibration process is very important because it allows a neural network trained on one SO-100 robot to work on another.
-
-#### Manual calibration of follower arm
-
-You will need to move the follower arm to these positions sequentially, note that the rotated position is on the right side of the robot and you have to open the gripper fully.
-
-| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
-| ------------ |------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| | | | |
-
-Make sure both arms are connected and run this script to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_follower"]'
-```
-
-#### Manual calibration of leader arm
-You will also need to move the leader arm to these positions sequentially:
-
-| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
-| ------------ |------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| | | | |
-
-Run this script to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_leader"]'
-```
-
-## F. Teleoperate
-
-**Simple teleop**
-Then you are ready to teleoperate your robot! Run this simple script (it won't connect and display the cameras):
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --robot.cameras='{}' \
- --control.type=teleoperate
-```
-
-
-#### a. Teleop with displaying cameras
-Follow [this guide to setup your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). Then you will be able to display the cameras on your computer while you are teleoperating by running the following code. This is useful to prepare your setup before recording your first dataset.
-
-> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`.
-
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=teleoperate
-```
-
-## G. Record a dataset
-
-Once you're familiar with teleoperation, you can record your first dataset with SO-100.
-
-If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Store your Hugging Face repository name in a variable to run these commands:
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-
-Record 2 episodes and upload your dataset to the hub:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/so100_test \
- --control.tags='["so100","tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=2 \
- --control.push_to_hub=true
-```
-
-Note: You can resume recording by adding `--control.resume=true`.
-
-## H. Visualize a dataset
-
-If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
-```bash
-echo ${HF_USER}/so100_test
-```
-
-If you didn't upload with `--control.push_to_hub=false`, you can also visualize it locally with (a window can be opened in the browser `http://127.0.0.1:9090` with the visualization tool):
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id ${HF_USER}/so100_test \
- --local-files-only 1
-```
-
-## I. Replay an episode
-
-Now try to replay the first episode on your robot:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/so100_test \
- --control.episode=0
-```
-
-## J. Train a policy
-
-To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=${HF_USER}/so100_test \
- --policy.type=act \
- --output_dir=outputs/train/act_so100_test \
- --job_name=act_so100_test \
- --policy.device=cuda \
- --wandb.enable=true
-```
-
-Let's explain it:
-1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so100_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
-4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
-5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
-
-Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.
-
-To resume training from a checkpoint, below is an example command to resume from `last` checkpoint of the `act_so100_test` policy:
-```bash
-python lerobot/scripts/train.py \
- --config_path=outputs/train/act_so100_test/checkpoints/last/pretrained_model/train_config.json \
- --resume=true
-```
-
-## K. Evaluate your policy
-
-You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/eval_act_so100_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=10 \
- --control.push_to_hub=true \
- --control.policy.path=outputs/train/act_so100_test/checkpoints/last/pretrained_model
-```
-
-As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
-1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint with (e.g. `outputs/train/eval_act_so100_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so100_test`).
-2. The name of dataset begins by `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so100_test`).
-
-## L. More Information
-
-Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth tutorial on controlling real robots with LeRobot.
-
-> [!TIP]
-> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb) in the channel [`#so100-arm`](https://discord.com/channels/1216765309076115607/1237741463832363039).
diff --git a/examples/11_use_lekiwi.md b/examples/11_use_lekiwi.md
deleted file mode 100644
index 4c15dcd1..00000000
--- a/examples/11_use_lekiwi.md
+++ /dev/null
@@ -1,597 +0,0 @@
-# Using the [LeKiwi](https://github.com/SIGRobotics-UIUC/LeKiwi) Robot with LeRobot
-
-## Table of Contents
-
- - [A. Source the parts](#a-source-the-parts)
- - [B. Install software Pi](#b-install-software-on-pi)
- - [C. Setup LeRobot laptop/pc](#c-install-lerobot-on-laptop)
- - [D. Assemble the arms](#d-assembly)
- - [E. Calibrate](#e-calibration)
- - [F. Teleoperate](#f-teleoperate)
- - [G. Record a dataset](#g-record-a-dataset)
- - [H. Visualize a dataset](#h-visualize-a-dataset)
- - [I. Replay an episode](#i-replay-an-episode)
- - [J. Train a policy](#j-train-a-policy)
- - [K. Evaluate your policy](#k-evaluate-your-policy)
-
-> [!TIP]
-> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb) in the channel [`#mobile-so-100-arm`](https://discord.com/channels/1216765309076115607/1318390825528332371).
-
-## A. Source the parts
-
-Follow this [README](https://github.com/SIGRobotics-UIUC/LeKiwi). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts, and advice if it's your first time printing or if you don't own a 3D printer.
-
-Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
-
-### Wired version
-If you have the **wired** LeKiwi version you can skip the installation of the Raspberry Pi and setting up SSH. You can also run all commands directly on your PC for both the LeKiwi scripts and the leader arm scripts for teleoperating.
-
-## B. Install software on Pi
-Now we have to setup the remote PC that will run on the LeKiwi Robot. This is normally a Raspberry Pi, but can be any PC that can run on 5V and has enough usb ports (2 or more) for the cameras and motor control board.
-
-### Install OS
-For setting up the Raspberry Pi and its SD-card see: [Setup PI](https://www.raspberrypi.com/documentation/computers/getting-started.html). Here is explained how to download the [Imager](https://www.raspberrypi.com/software/) to install Raspberry Pi OS or Ubuntu.
-
-### Setup SSH
-After setting up your Pi, you should enable and setup [SSH](https://www.raspberrypi.com/news/coding-on-raspberry-pi-remotely-with-visual-studio-code/) (Secure Shell Protocol) so you can login into the Pi from your laptop without requiring a screen, keyboard and mouse in the Pi. A great tutorial on how to do this can be found [here](https://www.raspberrypi.com/documentation/computers/remote-access.html#ssh). Logging into your Pi can be done in your Command Prompt (cmd) or if you use VSCode you can use [this](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh) extension.
-
-### Install LeRobot
-
-On your Raspberry Pi:
-
-#### 1. [Install Miniconda](https://docs.anaconda.com/miniconda/install/#quick-command-line-install):
-
-#### 2. Restart shell
-Copy paste in your shell: `source ~/.bashrc` or for Mac: `source ~/.bash_profile` or `source ~/.zshrc` if you're using zshell
-
-#### 3. Create and activate a fresh conda environment for lerobot
-
-
-Video install instructions
-
-
-
-
-
-```bash
-conda create -y -n lerobot python=3.10
-```
-
-Then activate your conda environment (do this each time you open a shell to use lerobot!):
-```bash
-conda activate lerobot
-```
-
-#### 4. Clone LeRobot:
-```bash
-git clone https://github.com/huggingface/lerobot.git ~/lerobot
-```
-
-#### 5. Install ffmpeg in your environment:
-When using `miniconda`, install `ffmpeg` in your environment:
-```bash
-conda install ffmpeg -c conda-forge
-```
-
-#### 6. Install LeRobot with dependencies for the feetech motors:
-```bash
-cd ~/lerobot && pip install -e ".[feetech]"
-```
-
-## C. Install LeRobot on laptop
-If you already have install LeRobot on your laptop you can skip this step, otherwise please follow along as we do the same steps we did on the Pi.
-
-> [!TIP]
-> We use the Command Prompt (cmd) quite a lot. If you are not comfortable using the cmd or want to brush up using the command line you can have a look here: [Command line crash course](https://developer.mozilla.org/en-US/docs/Learn_web_development/Getting_started/Environment_setup/Command_line)
-
-On your computer:
-
-#### 1. [Install Miniconda](https://docs.anaconda.com/miniconda/install/#quick-command-line-install):
-
-#### 2. Restart shell
-Copy paste in your shell: `source ~/.bashrc` or for Mac: `source ~/.bash_profile` or `source ~/.zshrc` if you're using zshell
-
-#### 3. Create and activate a fresh conda environment for lerobot
-
-
-Video install instructions
-
-
-
-
-
-```bash
-conda create -y -n lerobot python=3.10
-```
-
-Then activate your conda environment (do this each time you open a shell to use lerobot!):
-```bash
-conda activate lerobot
-```
-
-#### 4. Clone LeRobot:
-```bash
-git clone https://github.com/huggingface/lerobot.git ~/lerobot
-```
-
-#### 5. Install ffmpeg in your environment:
-When using `miniconda`, install `ffmpeg` in your environment:
-```bash
-conda install ffmpeg -c conda-forge
-```
-
-#### 6. Install LeRobot with dependencies for the feetech motors:
-```bash
-cd ~/lerobot && pip install -e ".[feetech]"
-```
-
-Great :hugs:! You are now done installing LeRobot and we can begin assembling the SO100 arms and Mobile base :robot:.
-Every time you now want to use LeRobot you can go to the `~/lerobot` folder where we installed LeRobot and run one of the commands.
-
-# D. Assembly
-
-First we will assemble the two SO100 arms. One to attach to the mobile base and one for teleoperation. Then we will assemble the mobile base.
-
-## SO100 Arms
-### Configure motors
-The instructions for configuring the motors can be found [Here](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#c-configure-the-motors) in step C of the SO100 tutorial. Besides the ID's for the arm motors we also need to set the motor ID's for the mobile base. These need to be in a specific order to work. Below an image of the motor ID's and motor mounting positions for the mobile base. Note that we only use one Motor Control board on LeKiwi. This means the motor ID's for the wheels are 7, 8 and 9.
-
-
-
-### Assemble arms
-[Assemble arms instruction](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#d-assemble-the-arms)
-
-## Mobile base (LeKiwi)
-[Assemble LeKiwi](https://github.com/SIGRobotics-UIUC/LeKiwi)
-
-### Update config
-Both config files on the LeKiwi LeRobot and on the laptop should be the same. First we should find the Ip address of the Raspberry Pi of the mobile manipulator. This is the same Ip address used in SSH. We also need the usb port of the control board of the leader arm on the laptop and the port of the control board on LeKiwi. We can find these ports with the following script.
-
-#### a. Run the script to find port
-
-
-Video finding port
-
-
-
-
-To find the port for each bus servo adapter, run the utility script:
-```bash
-python lerobot/scripts/find_motors_bus_port.py
-```
-
-#### b. Example outputs
-
-Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
-
-[...Disconnect leader arm and press Enter...]
-
-The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751
-Reconnect the usb cable.
-```
-Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
-
-[...Disconnect follower arm and press Enter...]
-
-The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the usb cable.
-```
-
-#### c. Troubleshooting
-On Linux, you might need to give access to the USB ports by running:
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-#### d. Update config file
-
-IMPORTANTLY: Now that you have your ports of leader and follower arm and ip address of the mobile-so100, update the **ip** in Network configuration, **port** in leader_arms and **port** in lekiwi. In the [`LeKiwiRobotConfig`](../lerobot/common/robot_devices/robots/configs.py) file. Where you will find something like:
-```python
-@RobotConfig.register_subclass("lekiwi")
-@dataclass
-class LeKiwiRobotConfig(RobotConfig):
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- # Network Configuration
- ip: str = "172.17.133.91"
- port: int = 5555
- video_port: int = 5556
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "mobile": OpenCVCameraConfig(camera_index="/dev/video0", fps=30, width=640, height=480),
- "mobile2": OpenCVCameraConfig(camera_index="/dev/video2", fps=30, width=640, height=480),
- }
- )
-
- calibration_dir: str = ".cache/calibration/lekiwi"
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem585A0077581",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/ttyACM0",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- "left_wheel": (7, "sts3215"),
- "back_wheel": (8, "sts3215"),
- "right_wheel": (9, "sts3215"),
- },
- ),
- }
- )
-
- teleop_keys: dict[str, str] = field(
- default_factory=lambda: {
- # Movement
- "forward": "w",
- "backward": "s",
- "left": "a",
- "right": "d",
- "rotate_left": "z",
- "rotate_right": "x",
- # Speed control
- "speed_up": "r",
- "speed_down": "f",
- # quit teleop
- "quit": "q",
- }
- )
-
- mock: bool = False
-```
-
-## Wired version
-
-For the wired LeKiwi version, the configured IP address should refer to your own laptop (127.0.0.1), because in this case both the leader arm and LeKiwi are connected directly to your laptop. Below is an example configuration for this wired setup:
-```python
-@RobotConfig.register_subclass("lekiwi")
-@dataclass
-class LeKiwiRobotConfig(RobotConfig):
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- # Network Configuration
- ip: str = "127.0.0.1"
- port: int = 5555
- video_port: int = 5556
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "front": OpenCVCameraConfig(
- camera_index=0, fps=30, width=640, height=480, rotation=90
- ),
- "wrist": OpenCVCameraConfig(
- camera_index=1, fps=30, width=640, height=480, rotation=180
- ),
- }
- )
-
- calibration_dir: str = ".cache/calibration/lekiwi"
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem585A0077581",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem58760431061",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- "left_wheel": (7, "sts3215"),
- "back_wheel": (8, "sts3215"),
- "right_wheel": (9, "sts3215"),
- },
- ),
- }
- )
-
- teleop_keys: dict[str, str] = field(
- default_factory=lambda: {
- # Movement
- "forward": "w",
- "backward": "s",
- "left": "a",
- "right": "d",
- "rotate_left": "z",
- "rotate_right": "x",
- # Speed control
- "speed_up": "r",
- "speed_down": "f",
- # quit teleop
- "quit": "q",
- }
- )
-
- mock: bool = False
-```
-
-# E. Calibration
-Now we have to calibrate the leader arm and the follower arm. The wheel motors don't have to be calibrated.
-
-
-### Calibrate follower arm (on mobile base)
-> [!IMPORTANT]
-> Contrary to step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724), which illustrates auto calibration, we will do manual calibration of the follower for now.
-
-You will need to move the follower arm to these positions sequentially:
-
-| 1. Zero position | 2. Rotated position | 3. Rest position |
-| ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| | | |
-
-Make sure the arm is connected to the Raspberry Pi and run this script (on the Raspberry Pi) to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_follower"]'
-```
-
-### Wired version
-If you have the **wired** LeKiwi version please run all commands including this calibration command on your laptop.
-
-### Calibrate leader arm
-Next, calibrate the leader arm (which is attached to the laptop/pc). You will need to move the leader arm to these positions sequentially:
-
-| 1. Zero position | 2. Rotated position | 3. Rest position |
-| ------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| | | |
-
-Run this script (on your laptop/pc) to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_leader"]'
-```
-
-# F. Teleoperate
-
-> [!TIP]
-> If you're using a Mac, you might need to give Terminal permission to access your keyboard. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.
-
-To teleoperate, SSH into your Raspberry Pi, run `conda activate lerobot`, and then run this script:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --control.type=remote_robot
-```
-
-Then on your laptop, also run `conda activate lerobot` and this script:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --control.type=teleoperate \
- --control.fps=30
-```
-
-> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`. For `--control.type=remote_robot` you will also need to set `--control.viewer_ip` and `--control.viewer_port`.
-
-On your laptop you should see something like `[INFO] Connected to remote robot at tcp://172.17.133.91:5555 and video stream at tcp://172.17.133.91:5556.` Now you can move the leader arm and use the keyboard (w, a, s, d) to drive forward, left, backward and right, (z, x) to turn left or right, and (r, f) to increase or decrease the speed of the mobile robot. There are three speed modes, see the table below:
-| Speed Mode | Linear Speed (m/s) | Rotation Speed (deg/s) |
-| ---------- | ------------------ | ---------------------- |
-| Fast | 0.4 | 90 |
-| Medium | 0.25 | 60 |
-| Slow | 0.1 | 30 |
-
-
-| Key | Action |
-| --- | -------------- |
-| W | Move forward |
-| A | Move left |
-| S | Move backward |
-| D | Move right |
-| Z | Turn left |
-| X | Turn right |
-| R | Increase speed |
-| F | Decrease speed |
-
-> [!TIP]
-> If you use a different keyboard you can change the keys for each command in the [`LeKiwiRobotConfig`](../lerobot/common/robot_devices/robots/configs.py).
-
-### Wired version
-If you have the **wired** LeKiwi version please run all commands including both these teleoperation commands on your laptop.
-
-## Troubleshoot communication
-
-If you are having trouble connecting to the Mobile SO100, follow these steps to diagnose and resolve the issue.
-
-### 1. Verify IP Address Configuration
-Make sure that the correct IP address for the Pi is set in the configuration file. To check the Raspberry Pi's IP address, run (on the Pi command line):
-```bash
-hostname -I
-```
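-
-The first address in the output is the one to use as the `ip` value in `LeKiwiRobotConfig`. For illustration only (the addresses below are made up):
-```bash
-hostname -I
-# Example output (yours will differ); use the first address:
-# 192.168.1.42 172.17.0.1
-```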
-
-### 2. Check if Pi is reachable from laptop/pc
-Try pinging the Raspberry Pi from your laptop:
-```bash
-ping <pi_ip_address>
-```
-
-If the ping fails:
-- Ensure the Pi is powered on and connected to the same network.
-- Check if SSH is enabled on the Pi.
-
-### 3. Try SSH connection
-If you can't SSH into the Pi, it might not be properly connected. Use:
-```bash
-ssh <username>@<pi_ip_address>
-```
-If you get a connection error:
-- Ensure SSH is enabled on the Pi by running:
- ```bash
- sudo raspi-config
- ```
- Then navigate to: **Interfacing Options -> SSH** and enable it.
-
-### 4. Same config file
-Make sure the configuration file on both your laptop/pc and the Raspberry Pi is the same.
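-
-One way to keep the two copies in sync is to copy the config file to the Pi over SSH after editing it on your laptop. This is only a sketch: it assumes LeRobot is cloned at `~/lerobot` on the Pi and that your Pi username is `pi`; adjust the username and IP address to your setup.
-```bash
-# Copy the edited config from the laptop to the Raspberry Pi.
-scp lerobot/common/robot_devices/robots/configs.py \
-    pi@<pi_ip_address>:~/lerobot/lerobot/common/robot_devices/robots/configs.py
-```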
-
-# G. Record a dataset
-Once you're familiar with teleoperation, you can record your first dataset with LeKiwi.
-
-To start the program on LeKiwi, SSH into your Raspberry Pi, and run `conda activate lerobot` and this script:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --control.type=remote_robot
-```
-
-If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Store your Hugging Face repository name in a variable to run these commands:
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-Then, on your laptop, run this command to record 2 episodes and upload your dataset to the hub:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/lekiwi_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=2 \
- --control.push_to_hub=true
-```
-
-Note: You can resume recording by adding `--control.resume=true`.
-
-### Wired version
-If you have the **wired** LeKiwi version please run all commands including both these record dataset commands on your laptop.
-
-# H. Visualize a dataset
-
-If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
-```bash
-echo ${HF_USER}/lekiwi_test
-```
-
-If you didn't push it to the hub (`--control.push_to_hub=false`), you can also visualize it locally (a window will open in your browser at `http://127.0.0.1:9090` with the visualization tool):
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id ${HF_USER}/lekiwi_test \
- --local-files-only 1
-```
-
-# I. Replay an episode
-Now try to replay the first episode on your robot:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/lekiwi_test \
- --control.episode=0
-```
-
-## J. Train a policy
-
-To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=${HF_USER}/lekiwi_test \
- --policy.type=act \
- --output_dir=outputs/train/act_lekiwi_test \
- --job_name=act_lekiwi_test \
- --policy.device=cuda \
- --wandb.enable=true
-```
-
-Let's explain it:
-1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/lekiwi_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy automatically adapts to the number of motor states, motor actions and cameras of your robot (e.g. `mobile` and `mobile2` for LeKiwi) which have been saved in your dataset.
-3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
-4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.
-
-Training should take several hours. You will find checkpoints in `outputs/train/act_lekiwi_test/checkpoints`.
-
-## K. Evaluate your policy
-
-You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Drive to the red block and pick it up" \
- --control.repo_id=${HF_USER}/eval_act_lekiwi_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=10 \
- --control.push_to_hub=true \
- --control.policy.path=outputs/train/act_lekiwi_test/checkpoints/last/pretrained_model
-```
-
-As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
-1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_lekiwi_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_lekiwi_test`); see the upload sketch below.
-2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_lekiwi_test`).
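-
-If you want to reference your policy from the hub instead of a local path (e.g. `${HF_USER}/act_lekiwi_test` as mentioned above), one way to upload the checkpoint is with `huggingface-cli upload`. This is a sketch, not the only workflow, and it assumes you are still logged in with a write-access token:
-```bash
-# Upload the last training checkpoint to a model repository on the hub.
-huggingface-cli upload ${HF_USER}/act_lekiwi_test \
-  outputs/train/act_lekiwi_test/checkpoints/last/pretrained_model
-```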
diff --git a/examples/11_use_moss.md b/examples/11_use_moss.md
deleted file mode 100644
index 7c9297ac..00000000
--- a/examples/11_use_moss.md
+++ /dev/null
@@ -1,337 +0,0 @@
-This tutorial explains how to use [Moss v1](https://github.com/jess-moss/moss-robot-arms) with LeRobot.
-
-## Source the parts
-
-Follow this [README](https://github.com/jess-moss/moss-robot-arms). It contains the bill of materials with links to source the parts, as well as instructions to 3D print the parts and advice if it's your first time printing or you don't own a 3D printer.
-
-**Important**: Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
-
-## Install LeRobot
-
-On your computer:
-
-1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
-```bash
-mkdir -p ~/miniconda3
-wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
-bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
-rm ~/miniconda3/miniconda.sh
-~/miniconda3/bin/conda init bash
-```
-
-2. Restart shell or `source ~/.bashrc`
-
-3. Create and activate a fresh conda environment for lerobot
-```bash
-conda create -y -n lerobot python=3.10 && conda activate lerobot
-```
-
-4. Clone LeRobot:
-```bash
-git clone https://github.com/huggingface/lerobot.git ~/lerobot
-```
-
-5. When using `miniconda`, install `ffmpeg` in your environment:
-```bash
-conda install ffmpeg -c conda-forge
-```
-
-6. Install LeRobot with dependencies for the feetech motors:
-```bash
-cd ~/lerobot && pip install -e ".[feetech]"
-```
-
-## Configure the motors
-
-Follow step 1 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the use of our scripts below.
-
-**Find USB ports associated to your arms**
-To find the correct ports for each arm, run the utility script twice:
-```bash
-python lerobot/scripts/find_motors_bus_port.py
-```
-
-Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
-
-[...Disconnect leader arm and press Enter...]
-
-The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751
-Reconnect the usb cable.
-```
-
-Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
-
-[...Disconnect follower arm and press Enter...]
-
-The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the usb cable.
-```
-
-Troubleshooting: On Linux, you might need to give access to the USB ports by running:
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-#### Update config file
-
-IMPORTANT: Now that you have your ports, update the **port** default values of [`MossRobotConfig`](../lerobot/common/robot_devices/robots/configs.py). You will find something like:
-```python
-@RobotConfig.register_subclass("moss")
-@dataclass
-class MossRobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/moss"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem58760431091", <-- UPDATE HERE
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem585A0076891", <-- UPDATE HERE
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-```
-
-**Configure your motors**
-Plug your first motor and run this script to set its ID to 1. It will also set its present position to 2048, so expect your motor to rotate:
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 1
-```
-
-Note: These motors are currently limited. They can take values between 0 and 4096 only, which corresponds to a full turn. They can't turn more than that. 2048 is at the middle of this range, so we can take -2048 steps (180 degrees anticlockwise) and reach the maximum range, or take +2048 steps (180 degrees clockwise) and reach the maximum range. The configuration step also sets the homing offset to 0, so that if you misassembled the arm, you can always update the homing offset to account for a shift up to ± 2048 steps (± 180 degrees).
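-
-As a quick, purely illustrative sanity check of the arithmetic above (a full turn = 4096 steps = 360 degrees):
-```bash
-# Convert an offset from the 2048 centre position into degrees.
-steps=1024                       # example offset of 1024 steps
-echo $(( steps * 360 / 4096 ))   # prints 90 (degrees)
-```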
-
-Then unplug your motor and plug the second motor and set its ID to 2.
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 2
-```
-
-Redo the process for all your motors until ID 6. Do the same for the 6 motors of the leader arm.
-
-**Remove the gears of the 6 leader motors**
-Follow step 2 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). You need to remove the gear for the motors of the leader arm. As a result, you will only use the position encoding of the motor and reduce friction to more easily operate the leader arm.
-
-**Add motor horn to the motors**
-Follow step 3 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). For Moss v1, you need to align the holes on the motor horn to the motor spline to be approximately 3, 6, 9 and 12 o'clock.
-Try to avoid rotating the motor while doing so, to keep the position 2048 set during configuration. This is especially tricky for the leader motors, as they are more sensitive without the gears, but it's fine if the horn ends up slightly rotated.
-
-## Assemble the arms
-
-Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). The first arm should take a bit more than 1 hour to assemble, but once you get used to it, you can do the second arm in under 1 hour.
-
-## Calibrate
-
-Next, you'll need to calibrate your Moss v1 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position. This calibration is essential because it allows a neural network trained on one Moss v1 robot to work on another.
-
-**Manual calibration of follower arm**
-/!\ Contrary to step 6 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic), which illustrates auto calibration, we will do manual calibration of the follower for now.
-
-You will need to move the follower arm to these positions sequentially:
-
-| 1. Zero position | 2. Rotated position | 3. Rest position |
-| ------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| | | |
-
-Make sure both arms are connected and run this script to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=moss \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_follower"]'
-```
-
-**Manual calibration of leader arm**
-Follow step 6 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
-
-| 1. Zero position | 2. Rotated position | 3. Rest position |
-| ------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| | | |
-
-Run this script to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=moss \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_leader"]'
-```
-
-## Teleoperate
-
-**Simple teleop**
-Then you are ready to teleoperate your robot! Run this simple script (it won't connect to or display the cameras):
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=moss \
- --robot.cameras='{}' \
- --control.type=teleoperate
-```
-
-
-**Teleop with displaying cameras**
-Follow [this guide to set up your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). Then you will be able to display the cameras on your computer while you are teleoperating by running the following code. This is useful to prepare your setup before recording your first dataset.
-
-> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`.
-
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=moss \
- --control.type=teleoperate
-```
-
-## Record a dataset
-
-Once you're familiar with teleoperation, you can record your first dataset with Moss v1.
-
-If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Store your Hugging Face repository name in a variable to run these commands:
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-
-Record 2 episodes and upload your dataset to the hub:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=moss \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/moss_test \
- --control.tags='["moss","tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=2 \
- --control.push_to_hub=true
-```
-
-Note: You can resume recording by adding `--control.resume=true`.
-
-## Visualize a dataset
-
-If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
-```bash
-echo ${HF_USER}/moss_test
-```
-
-If you didn't push it to the hub (`--control.push_to_hub=false`), you can also visualize it locally with:
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id ${HF_USER}/moss_test \
- --local-files-only 1
-```
-
-## Replay an episode
-
-Now try to replay the first episode on your robot:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=moss \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/moss_test \
- --control.episode=0
-```
-
-## Train a policy
-
-To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=${HF_USER}/moss_test \
- --policy.type=act \
- --output_dir=outputs/train/act_moss_test \
- --job_name=act_moss_test \
- --policy.device=cuda \
- --wandb.enable=true
-```
-
-Let's explain it:
-1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/moss_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy automatically adapts to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
-3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
-4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.
-
-Training should take several hours. You will find checkpoints in `outputs/train/act_moss_test/checkpoints`.
-
-## Evaluate your policy
-
-You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=moss \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/eval_act_moss_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=10 \
- --control.push_to_hub=true \
- --control.policy.path=outputs/train/act_moss_test/checkpoints/last/pretrained_model
-```
-
-As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
-1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_moss_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_moss_test`).
-2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_moss_test`).
-
-## More
-
-Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth tutorial on controlling real robots with LeRobot.
-
-If you have any question or need help, please reach out on Discord in the channel [`#moss-arm`](https://discord.com/channels/1216765309076115607/1275374638985252925).
diff --git a/examples/12_use_so101.md b/examples/12_use_so101.md
deleted file mode 100644
index 2b43022b..00000000
--- a/examples/12_use_so101.md
+++ /dev/null
@@ -1,711 +0,0 @@
-# Assemble and use SO-101
-
-In the steps below we explain how to assemble and use our flagship robot, the SO-101 with LeRobot 🤗.
-
-## Source the parts
-
-Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts,
-and advice if it's your first time printing or if you don't own a 3D printer.
-
-Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
-
-## Install LeRobot
-
-> [!TIP]
-> We use the Command Prompt (cmd) quite a lot. If you are not comfortable using the cmd or want to brush up using the command line you can have a look here: [Command line crash course](https://developer.mozilla.org/en-US/docs/Learn_web_development/Getting_started/Environment_setup/Command_line)
-
-Download our source code:
-```bash
-git clone https://github.com/huggingface/lerobot.git
-cd lerobot
-```
-
-Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/miniconda/install/#quick-command-line-install):
-```bash
-conda create -y -n lerobot python=3.10
-```
-Now restart the shell by running:
-
-##### Windows:
-```bash
-source ~/.bashrc
-```
-
-##### Mac:
-```bash
-source ~/.bash_profile
-```
-
-##### zshell:
-```bash
-source ~/.zshrc
-```
-
-Then activate your conda environment, you have to do this each time you open a shell to use lerobot:
-```bash
-conda activate lerobot
-```
-
-When using `miniconda`, install `ffmpeg` in your environment:
-```bash
-conda install ffmpeg -c conda-forge
-```
-
-> [!NOTE]
-> This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
-> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
-> ```bash
-> conda install ffmpeg=7.1.1 -c conda-forge
-> ```
-> - _[On Linux only]_ Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
-
-Install 🤗 LeRobot:
-```bash
-cd lerobot && pip install -e ".[feetech]"
-```
-
-> [!NOTE]
-> If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and `ffmpeg libs`). On Linux, run: `sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
-
-
-## Configure motors
-
-To configure the motors, designate one bus servo adapter and 6 motors for your leader arm, and the other bus servo adapter and 6 motors for the follower arm. It's convenient to label them: write on each motor whether it's for the follower (`F`) or the leader (`L`) and its ID from 1 to 6.
-
-You should now plug the 5V or 12V power supply into the motor bus: 5V for the STS3215 7.4V motors and 12V for the STS3215 12V motors. Note that the leader arm always uses the 7.4V motors, so watch out that you plug in the right power supply if you have both 12V and 7.4V motors, otherwise you might burn your motors! Now connect the motor bus to your computer via USB. Note that the USB cable doesn't provide any power; both the power supply and the USB cable have to be plugged in.
-
-### Find the USB ports associated to each arm
-
-To find the port for each bus servo adapter, run this script:
-```bash
-python lerobot/scripts/find_motors_bus_port.py
-```
-#### Example outputs of script
-
-##### Mac:
-Example output leader arm's port: `/dev/tty.usbmodem575E0031751`
-
-```bash
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect leader arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0031751
-Reconnect the usb cable.
-```
-
-Example output follower arm port: `/dev/tty.usbmodem575E0032081`
-
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the usb cable.
-```
-
-##### Linux:
-On Linux, you might need to give access to the USB ports by running:
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-Example output leader arm port: `/dev/ttyACM0`
-
-```bash
-Finding all available ports for the MotorBus.
-['/dev/ttyACM0', '/dev/ttyACM1']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect leader arm and press Enter...]
-
-The port of this MotorsBus is /dev/ttyACM0
-Reconnect the usb cable.
-```
-
-Example output follower arm port: `/dev/ttyACM1`
-
-```
-Finding all available ports for the MotorBus.
-['/dev/ttyACM0', '/dev/ttyACM1']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/ttyACM1
-Reconnect the usb cable.
-```
-
-#### Update config file
-
-Now that you have your ports, update the **port** default values of [`SO101RobotConfig`](https://github.com/huggingface/lerobot/blob/main/lerobot/common/robot_devices/robots/configs.py).
-You will find a class called `so101` where you can update the `port` values with your actual motor ports:
-```diff
-@RobotConfig.register_subclass("so101")
-@dataclass
-class So101RobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/so101"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
-- port="/dev/tty.usbmodem58760431091",
-+ port="{ADD YOUR LEADER PORT}",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
-- port="/dev/tty.usbmodem585A0076891",
-+ port="{ADD YOUR FOLLOWER PORT}",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-```
-
-Here is a video of the process:
-
-
-
-### Set motor IDs
-
-Now we need to set the motor ID for each motor. Plug your motor into only one of the two ports of the motor bus and run this script to set its ID to 1. Replace the text after `--port` with the corresponding control board port.
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 1
-```
-
-Then unplug your motor and plug the second motor and set its ID to 2.
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 2
-```
-
-Redo this process for all your motors until ID 6. Do the same for the 6 motors of the leader arm, but make sure to change the power supply if you use motors with different voltage.
-
-Here is a video of the process:
-
-
-
-## Step-by-Step Assembly Instructions
-
-The follower arm uses 6x STS3215 motors with 1/345 gearing. The leader arm, however, uses three differently geared motor types so that it can both sustain its own weight and be moved without requiring much force. The table below shows which gear ratio is needed for which joint.
-
-| Leader-Arm Axis | Motor | Gear Ratio |
-|-----------------|:-------:|:----------:|
-| Base / Shoulder Yaw | 1 | 1 / 191 |
-| Shoulder Pitch | 2 | 1 / 345 |
-| Elbow | 3 | 1 / 191 |
-| Wrist Roll | 4 | 1 / 147 |
-| Wrist Pitch | 5 | 1 / 147 |
-| Gripper | 6 | 1 / 147 |
-
-
-### Clean Parts
-Remove all support material from the 3D-printed parts.
-
-### Joint 1
-
-- Place the first motor into the base.
-- Fasten the motor with 4 M2x6mm screws (smallest screws). Two from the top and two from bottom.
-- Slide over the first motor holder and fasten it using two M2x6mm screws (one on each side).
-- Install both motor horns, securing the top horn with a M3x6mm screw.
-- Attach the shoulder part.
-- Tighten the shoulder part with 4 M3x6mm screws on top and 4 M3x6mm screws on the bottom
-- Add the shoulder motor holder.
-
-
-
-### Joint 2
-
-- Slide the second motor in from the top.
-- Fasten the second motor with 4 M2x6mm screws.
-- Attach both motor horns to motor 2, again use the M3x6mm horn screw.
-- Attach the upper arm with 4 M3x6mm screws on each side.
-
-
-
-### Joint 3
-
-- Insert motor 3 and fasten using 4 M2x6mm screws
-- Attach both motor horns to motor 3 and secure one again with a M3x6mm horn screw.
-- Connect the forearm to motor 3 using 4 M3x6mm screws on each side.
-
-
-
-### Joint 4
-
-- Slide over motor holder 4.
-- Slide in motor 4.
-- Fasten motor 4 with 4 M2x6mm screws and attach its motor horns, use a M3x6mm horn screw.
-
-
-
-### Joint 5
-
-- Insert motor 5 into the wrist holder and secure it with 2 M2x6mm front screws.
-- Install only one motor horn on the wrist motor and secure it with a M3x6mm horn screw.
-- Secure the wrist to motor 4 using 4 M3x6mm screws on both sides.
-
-
-
-### Gripper / Handle
-
-#### Follower:
-
-- Attach the gripper to motor 5, attach it to the motor horn on the wrist using 4 M3x6mm screws.
-- Insert the gripper motor and secure it with 2 M2x6mm screws on each side.
-- Attach the motor horns and again use a M3x6mm horn screw.
-- Install the gripper claw and secure it with 4 M3x6mm screws on both sides.
-
-
-
-#### Leader:
-
-- Mount the leader holder onto the wrist and secure it with 4 M3x6mm screws.
-- Attach the handle to motor 5 using 1 M2x6mm screw.
-- Insert the gripper motor, secure it with 2 M2x6mm screws on each side, attach a motor horn using a M3x6mm horn screw.
-- Attach the follower trigger with 4 M3x6mm screws.
-
-
-
-##### Wiring
-
-- Attach the motor controller on the back.
-- Then insert all wires, use the wire guides everywhere to make sure the wires don't unplug themselves and stay in place.
-
-
-
-## Calibrate
-
-Next, you'll need to calibrate your SO-101 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
-The calibration process is very important because it allows a neural network trained on one SO-101 robot to work on another.
-
-#### Manual calibration of follower arm
-
-You will need to move the follower arm to these positions sequentially. Note that the rotated position is on the right side of the robot and that you have to open the gripper fully.
-
-| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
-| ------------ |------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| | | | |
-
-Make sure both arms are connected and run this script to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_follower"]'
-```
-
-#### Manual calibration of leader arm
-You will also need to move the leader arm to these positions sequentially:
-
-| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
-| ------------ |------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| | | | |
-
-Run this script to launch manual calibration:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --robot.cameras='{}' \
- --control.type=calibrate \
- --control.arms='["main_leader"]'
-```
-## Control your robot
-
-Congrats 🎉, your robot is all set to learn a task on its own. Next we will explain to you how to train a neural network to autonomously control a real robot.
-
-**You'll learn to:**
-1. How to record and visualize your dataset.
-2. How to train a policy using your data and prepare it for evaluation.
-3. How to evaluate your policy and visualize the results.
-
-By following these steps, you'll be able to replicate tasks like picking up a Lego block and placing it in a bin with a high success rate, as demonstrated in [this video](https://x.com/RemiCadene/status/1814680760592572934).
-
-This tutorial is specifically made for the affordable [SO-101](https://github.com/TheRobotStudio/SO-ARM100) robot, but it contains additional information to be easily adapted to various types of robots like [Aloha bimanual robot](https://aloha-2.github.io) by changing some configurations. The SO-101 consists of a leader arm and a follower arm, each with 6 motors. It can work with one or several cameras to record the scene, which serve as visual sensors for the robot.
-
-During the data collection phase, you will control the follower arm by moving the leader arm. This process is known as "teleoperation." This technique is used to collect robot trajectories. Afterward, you'll train a neural network to imitate these trajectories and deploy the network to enable your robot to operate autonomously.
-
-If you encounter any issues at any step of the tutorial, feel free to seek help on [Discord](https://discord.com/invite/s3KuuzsPFb) or don't hesitate to iterate with us on the tutorial by creating issues or pull requests.
-
-## Teleoperate
-
-Run this simple script to teleoperate your robot (it won't connect to or display the cameras):
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --robot.cameras='{}' \
- --control.type=teleoperate
-```
-
-The teleoperate command will automatically:
-1. Identify any missing calibrations and initiate the calibration procedure.
-2. Connect the robot and start teleoperation.
-
-## Setup Cameras
-
-To connect a camera you have three options:
-1. `OpenCVCamera`, which allows us to use most cameras: USB, RealSense, laptop webcam
-2. iPhone camera with macOS
-3. Phone camera on Linux
-
-### Use OpenCVCamera
-
-The [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py) class allows you to efficiently record frames from most cameras using the [`opencv2`](https://docs.opencv.org) library. For more details on compatibility, see [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
-
-To instantiate an [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py), you need a camera index (e.g. `OpenCVCamera(camera_index=0)`). When you only have one camera, like a laptop webcam, the camera index is usually `0`, but it might differ, and it can change when you reboot your computer or re-plug the camera; this behavior depends on your operating system.
-
-To find the camera indices, run the following utility script, which will save a few frames from each detected camera:
-```bash
-python lerobot/common/robot_devices/cameras/opencv.py \
- --images-dir outputs/images_from_opencv_cameras
-```
-
-The output will look something like this if you have two cameras connected:
-```
-Mac or Windows detected. Finding available camera indices through scanning all indices from 0 to 60
-[...]
-Camera found at index 0
-Camera found at index 1
-[...]
-Connecting cameras
-OpenCVCamera(0, fps=30.0, width=1920.0, height=1080.0, color_mode=rgb)
-OpenCVCamera(1, fps=24.0, width=1920.0, height=1080.0, color_mode=rgb)
-Saving images to outputs/images_from_opencv_cameras
-Frame: 0000 Latency (ms): 39.52
-[...]
-Frame: 0046 Latency (ms): 40.07
-Images have been saved to outputs/images_from_opencv_cameras
-```
-
-Check the saved images in `outputs/images_from_opencv_cameras` to identify which camera index corresponds to which physical camera (e.g. `0` for `camera_00` or `1` for `camera_01`):
-```
-camera_00_frame_000000.png
-[...]
-camera_00_frame_000047.png
-camera_01_frame_000000.png
-[...]
-camera_01_frame_000047.png
-```
-
-Note: Some cameras may take a few seconds to warm up, and the first frame might be black or green.
-
-Now that you have the camera indexes, you should change them in the config. You can also change the fps, width or height of the camera.
-
-The camera config is defined per robot; it can be found in [`RobotConfig`](https://github.com/huggingface/lerobot/blob/main/lerobot/common/robot_devices/robots/configs.py) and looks like this:
-```python
-cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "wrist": OpenCVCameraConfig(
- camera_index=0, <-- UPDATE HERE
- fps=30,
- width=640,
- height=480,
- ),
- "base": OpenCVCameraConfig(
- camera_index=1, <-- UPDATE HERE
- fps=30,
- width=640,
- height=480,
- ),
- }
- )
-```
-
-### Use your phone
-#### Mac:
-
-To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
-- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
-- Sign in both devices with the same Apple ID.
-- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
-
-For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
-
-Your iPhone should be detected automatically when running the camera setup script in the next section.
-
-#### Linux:
-
-If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera
-
-1. *Install `v4l2loopback-dkms` and `v4l-utils`*. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
-```bash
-sudo apt install v4l2loopback-dkms v4l-utils
-```
-2. *Install [DroidCam](https://droidcam.app) on your phone*. This app is available for both iOS and Android.
-3. *Install [OBS Studio](https://obsproject.com)*. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
-```bash
-flatpak install flathub com.obsproject.Studio
-```
-4. *Install the DroidCam OBS plugin*. This plugin integrates DroidCam with OBS Studio. Install it with:
-```bash
-flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
-```
-5. *Start OBS Studio*. Launch with:
-```bash
-flatpak run com.obsproject.Studio
-```
-6. *Add your phone as a source*. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
-7. *Adjust resolution settings*. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
-8. *Start virtual camera*. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
-9. *Verify the virtual camera setup*. Use `v4l2-ctl` to list the devices:
-```bash
-v4l2-ctl --list-devices
-```
-You should see an entry like:
-```
-VirtualCam (platform:v4l2loopback-000):
-/dev/video1
-```
-10. *Check the camera resolution*. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
-```bash
-v4l2-ctl -d /dev/video1 --get-fmt-video
-```
-You should see an entry like:
-```
->>> Format Video Capture:
->>> Width/Height : 640/480
->>> Pixel Format : 'YUYV' (YUYV 4:2:2)
-```
-
-Troubleshooting: If the resolution is not correct, you will have to delete the virtual camera port and try again, as it cannot be changed once created; one way to do this is sketched below.
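-
-One way to recreate the virtual camera device (assuming it was created by `v4l2loopback` as in the steps above) is to unload and reload the kernel module, then restart the virtual camera in OBS:
-```bash
-# Remove and recreate the v4l2loopback virtual camera device.
-sudo modprobe -r v4l2loopback
-sudo modprobe v4l2loopback
-```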
-
-If everything is set up correctly, you can proceed with the rest of the tutorial.
-
-### Add wrist camera
-If you have an additional camera, you can add a wrist camera to the SO101. There are already many premade wrist camera holders in the SO101 repo: [Wrist cameras](https://github.com/TheRobotStudio/SO-ARM100#wrist-cameras)
-
-## Teleoperate with cameras
-
-We can now teleoperate again while at the same time visualizing the cameras and joint positions with `rerun`.
-
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --control.type=teleoperate \
- --control.display_data=true
-```
-
-## Record a dataset
-
-Once you're familiar with teleoperation, you can record your first dataset with SO-101.
-
-We use the Hugging Face hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
-
-Add your token to the cli by running this command:
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Then store your Hugging Face repository name in a variable:
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-
-Now you can record a dataset. To record 2 episodes and upload your dataset to the hub, execute this command:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/so101_test \
- --control.tags='["so101","tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=2 \
- --control.display_data=true \
- --control.push_to_hub=true
-```
-
-You will see a lot of lines appearing like this one:
-```
-INFO 2024-08-10 15:02:58 ol_robot.py:219 dt:33.34 (30.0hz) dtRlead: 5.06 (197.5hz) dtWfoll: 0.25 (3963.7hz) dtRfoll: 6.22 (160.7hz) dtRlaptop: 32.57 (30.7hz) dtRphone: 33.84 (29.5hz)
-```
-It contains:
-- `2024-08-10 15:02:58` which is the date and time of the call to the print function,
-- `ol_robot.py:219` which is the end of the file name and the line number where the print function is called (`lerobot/scripts/control_robot.py` line `219`).
-- `dt:33.34 (30.0hz)` which is the "delta time" or the number of milliseconds spent between the previous call to `robot.teleop_step(record_data=True)` and the current one, associated with the frequency (33.34 ms equals 30.0 Hz; see the quick check after this list); note that we use `--fps 30`, so we expect 30.0 Hz; when a step takes more time, the line appears in yellow.
-- `dtRlead: 5.06 (197.5hz)` which is the delta time of reading the present position of the leader arm.
-- `dtWfoll: 0.25 (3963.7hz)` which is the delta time of writing the goal position on the follower arm; writing is asynchronous so it takes less time than reading.
-- `dtRfoll: 6.22 (160.7hz)` which is the delta time of reading the present position on the follower arm.
-- `dtRlaptop: 32.57 (30.7hz)` which is the delta time of capturing an image from the laptop camera in the thread running asynchronously.
-- `dtRphone: 33.84 (29.5hz)` which is the delta time of capturing an image from the phone camera in the thread running asynchronously.
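-
-If you want to double-check one of these numbers, the frequency is simply `1000 / delta time in ms`. A purely illustrative one-liner (requires `bc`):
-```bash
-# 33.34 ms per step corresponds to roughly 30 Hz.
-echo "scale=1; 1000 / 33.34" | bc   # prints 29.9
-```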
-
-#### Dataset upload
-Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}` (e.g. `~/.cache/huggingface/lerobot/cadene/so101_test`). At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test), which you can obtain by running:
-```bash
-echo https://huggingface.co/datasets/${HF_USER}/so101_test
-```
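-
-To have a quick look at what was written to disk, you can list the local dataset folder mentioned above (a sketch; the exact sub-folder layout may differ between LeRobot versions):
-```bash
-# Inspect the locally cached dataset (typically contains meta/, data/ and videos/).
-ls ~/.cache/huggingface/lerobot/${HF_USER}/so101_test
-```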
-Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
-
-You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
-
-#### Record function
-
-The `record` function provides a suite of tools for capturing and managing data during robot operation:
-1. Set the flow of data recording using command line arguments:
- - `--control.warmup_time_s=10` defines the number of seconds before starting data collection. It allows the robot devices to warmup and synchronize (10 seconds by default).
- - `--control.episode_time_s=60` defines the number of seconds for data recording for each episode (60 seconds by default).
- - `--control.reset_time_s=60` defines the number of seconds for resetting the environment after each episode (60 seconds by default).
- - `--control.num_episodes=50` defines the number of episodes to record (50 by default).
-2. Control the flow during data recording using keyboard keys:
-  - Press the right arrow `->` at any time during episode recording to stop early and go to resetting. The same applies during resetting: press it to stop early and move on to recording the next episode.
-  - Press the left arrow `<-` at any time during episode recording or resetting to stop early, cancel the current episode, and re-record it.
-  - Press escape `ESC` at any time during episode recording to end the session early and go straight to video encoding and dataset uploading.
-3. Checkpoints are automatically created during recording, so if any issue occurs, you can resume by re-running the same command with `--control.resume=true`. You will need to manually delete the dataset directory if you want to start recording from scratch.
-
-#### Tips for gathering data
-
-Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible on the cameras. A good rule of thumb is that you should be able to do the task yourself by looking only at the camera images.
-
-In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
-
-Avoid adding too much variation too quickly, as it may hinder your results.
-
-#### Troubleshooting:
-- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
-
-## Visualize a dataset
-
-If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
-```bash
-echo ${HF_USER}/so101_test
-```
-
-If you didn't upload your dataset (i.e. you used `--control.push_to_hub=false`), you can visualize it locally with the command below (it serves the visualization tool in your browser at `http://127.0.0.1:9090`):
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id ${HF_USER}/so101_test \
- --local-files-only 1
-```
-
-This will launch a local web server that looks like this:
-
-
-## Replay an episode
-
-A useful feature is the `replay` function, which allows to replay on your robot any episode that you've recorded or episodes from any dataset out there. This function helps you test the repeatability of your robot's actions and assess transferability across robots of the same model.
-
-You can replay the first episode on your robot with:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/so101_test \
- --control.episode=0
-```
-
-Your robot should replicate movements similar to those you recorded. For example, check out [this video](https://x.com/RemiCadene/status/1793654950905680090) where we use `replay` on an Aloha robot from [Trossen Robotics](https://www.trossenrobotics.com).
-
-## Train a policy
-
-To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=${HF_USER}/so101_test \
- --policy.type=act \
- --output_dir=outputs/train/act_so101_test \
- --job_name=act_so101_test \
- --policy.device=cuda \
- --wandb.enable=true
-```
-
-Let's explain the command:
-1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
-3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
-4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.
-
-Training should take several hours. You will find checkpoints in `outputs/train/act_so101_test/checkpoints`.
-
-To resume training from a checkpoint, below is an example command to resume from the `last` checkpoint of the `act_so101_test` run:
-```bash
-python lerobot/scripts/train.py \
- --config_path=outputs/train/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
- --resume=true
-```
-
-#### Upload policy checkpoints
-
-Once training is done, upload the latest checkpoint with:
-```bash
-huggingface-cli upload ${HF_USER}/act_so101_test \
- outputs/train/act_so101_test/checkpoints/last/pretrained_model
-```
-
-You can also upload intermediate checkpoints with:
-```bash
-CKPT=010000
-huggingface-cli upload ${HF_USER}/act_so101_test${CKPT} \
- outputs/train/act_so101_test/checkpoints/${CKPT}/pretrained_model
-```
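-
-If you prefer to stay in Python, the `huggingface_hub` library exposes the same functionality as the CLI. A minimal sketch mirroring the commands above:
-```python
-from huggingface_hub import HfApi
-
-api = HfApi()
-repo_id = "cadene/act_so101_test"  # replace with f"{HF_USER}/act_so101_test"
-
-# Create the model repo if it doesn't exist yet, then upload the checkpoint folder.
-api.create_repo(repo_id=repo_id, repo_type="model", exist_ok=True)
-api.upload_folder(
-    folder_path="outputs/train/act_so101_test/checkpoints/last/pretrained_model",
-    repo_id=repo_id,
-    repo_type="model",
-)
-```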
-
-## Evaluate your policy
-
-You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so101 \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/eval_act_so101_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=10 \
- --control.push_to_hub=true \
- --control.policy.path=outputs/train/act_so101_test/checkpoints/last/pretrained_model
-```
-
-As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
-1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_so101_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so101_test`).
-2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so101_test`).
diff --git a/examples/2_evaluate_pretrained_policy.py b/examples/2_evaluate_pretrained_policy.py
deleted file mode 100644
index 4e6154c2..00000000
--- a/examples/2_evaluate_pretrained_policy.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script demonstrates how to evaluate a pretrained policy from the HuggingFace Hub or from your local
-training outputs directory. In the latter case, you might want to run examples/3_train_policy.py first.
-
-It requires the installation of the 'gym_pusht' simulation environment. Install it by running:
-```bash
-pip install -e ".[pusht]"
-```
-"""
-
-from pathlib import Path
-
-import gym_pusht # noqa: F401
-import gymnasium as gym
-import imageio
-import numpy
-import torch
-
-from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy
-
-# Create a directory to store the video of the evaluation
-output_directory = Path("outputs/eval/example_pusht_diffusion")
-output_directory.mkdir(parents=True, exist_ok=True)
-
-# Select your device
-device = "cuda"
-
-# Provide the [hugging face repo id](https://huggingface.co/lerobot/diffusion_pusht):
-pretrained_policy_path = "lerobot/diffusion_pusht"
-# OR a path to a local outputs/train folder.
-# pretrained_policy_path = Path("outputs/train/example_pusht_diffusion")
-
-policy = DiffusionPolicy.from_pretrained(pretrained_policy_path)
-
-# Initialize evaluation environment to render two observation types:
-# an image of the scene and state/position of the agent. The environment
-# also automatically stops running after 300 interactions/steps.
-env = gym.make(
- "gym_pusht/PushT-v0",
- obs_type="pixels_agent_pos",
- max_episode_steps=300,
-)
-
-# We can verify that the shapes of the features expected by the policy match the ones from the observations
-# produced by the environment
-print(policy.config.input_features)
-print(env.observation_space)
-
-# Similarly, we can check that the actions produced by the policy will match the actions expected by the
-# environment
-print(policy.config.output_features)
-print(env.action_space)
-
-# Reset the policy and environments to prepare for rollout
-policy.reset()
-numpy_observation, info = env.reset(seed=42)
-
-# Prepare to collect every rewards and all the frames of the episode,
-# from initial state to final state.
-rewards = []
-frames = []
-
-# Render frame of the initial state
-frames.append(env.render())
-
-step = 0
-done = False
-while not done:
- # Prepare observation for the policy running in Pytorch
- state = torch.from_numpy(numpy_observation["agent_pos"])
- image = torch.from_numpy(numpy_observation["pixels"])
-
- # Convert to float32 with image from channel first in [0,255]
- # to channel last in [0,1]
- state = state.to(torch.float32)
- image = image.to(torch.float32) / 255
- image = image.permute(2, 0, 1)
-
- # Send data tensors from CPU to GPU
- state = state.to(device, non_blocking=True)
- image = image.to(device, non_blocking=True)
-
- # Add extra (empty) batch dimension, required to forward the policy
- state = state.unsqueeze(0)
- image = image.unsqueeze(0)
-
- # Create the policy input dictionary
- observation = {
- "observation.state": state,
- "observation.image": image,
- }
-
- # Predict the next action with respect to the current observation
- with torch.inference_mode():
- action = policy.select_action(observation)
-
- # Prepare the action for the environment
- numpy_action = action.squeeze(0).to("cpu").numpy()
-
- # Step through the environment and receive a new observation
- numpy_observation, reward, terminated, truncated, info = env.step(numpy_action)
- print(f"{step=} {reward=} {terminated=}")
-
- # Keep track of all the rewards and frames
- rewards.append(reward)
- frames.append(env.render())
-
- # The rollout is considered done when the success state is reached (i.e. terminated is True),
- # or the maximum number of iterations is reached (i.e. truncated is True)
- done = terminated | truncated | done
- step += 1
-
-if terminated:
- print("Success!")
-else:
- print("Failure!")
-
-# Get the speed of environment (i.e. its number of frames per second).
-fps = env.metadata["render_fps"]
-
-# Encode all frames into a mp4 video.
-video_path = output_directory / "rollout.mp4"
-imageio.mimsave(str(video_path), numpy.stack(frames), fps=fps)
-
-print(f"Video of the evaluation is available in '{video_path}'.")
diff --git a/examples/4_train_policy_with_script.md b/examples/4_train_policy_with_script.md
deleted file mode 100644
index cb4cc626..00000000
--- a/examples/4_train_policy_with_script.md
+++ /dev/null
@@ -1,274 +0,0 @@
-This tutorial will explain the training script, how to use it, and particularly how to configure everything needed for the training run.
-> **Note:** The following assumes you're running these commands on a machine equipped with a CUDA GPU. If you don't have one (or if you're using a Mac), you can add `--policy.device=cpu` (`--policy.device=mps`, respectively). However, be advised that the code executes much slower on CPU.
-
-
-## The training script
-
-LeRobot offers a training script at [`lerobot/scripts/train.py`](../lerobot/scripts/train.py). At a high level it does the following:
-
-- Initializes/loads a configuration for the following steps.
-- Instantiates a dataset.
-- (Optional) Instantiates a simulation environment corresponding to that dataset.
-- Instantiates a policy.
-- Runs a standard training loop with forward pass, backward pass, optimization step, and occasional logging, evaluation (of the policy on the environment), and checkpointing.
-
-## Overview of the configuration system
-
-In the training script, the main function `train` expects a `TrainPipelineConfig` object:
-```python
-# train.py
-@parser.wrap()
-def train(cfg: TrainPipelineConfig):
-```
-
-You can inspect the `TrainPipelineConfig` defined in [`lerobot/configs/train.py`](../lerobot/configs/train.py) (which is heavily commented and meant to be a reference for understanding every option).
-
-When running the script, command-line inputs are parsed thanks to the `@parser.wrap()` decorator and an instance of this class is automatically generated. Under the hood, this is done with [Draccus](https://github.com/dlwh/draccus), a tool dedicated to this purpose. If you're familiar with Hydra, Draccus can similarly load configurations from config files (.json, .yaml) and override their values through command-line inputs. Unlike Hydra, these configurations are pre-defined in the code through dataclasses rather than being defined entirely in config files. This allows for more rigorous serialization/deserialization and typing, and lets you manipulate configurations as objects directly in the code rather than as dictionaries or namespaces (which enables nice IDE features such as autocomplete, jump-to-def, etc.).
-
-Let's have a look at a simplified example. Amongst other attributes, the training config has the following attributes:
-```python
-@dataclass
-class TrainPipelineConfig:
- dataset: DatasetConfig
- env: envs.EnvConfig | None = None
- policy: PreTrainedConfig | None = None
-```
-in which `DatasetConfig` for example is defined as such:
-```python
-@dataclass
-class DatasetConfig:
- repo_id: str
- episodes: list[int] | None = None
- video_backend: str = "pyav"
-```
-
-This creates a hierarchical relationship where, for example assuming we have a `cfg` instance of `TrainPipelineConfig`, we can access the `repo_id` value with `cfg.dataset.repo_id`.
-From the command line, we can specify this value by using a very similar syntax `--dataset.repo_id=repo/id`.
-
-By default, every field takes its default value specified in the dataclass. If a field doesn't have a default value, it needs to be specified either from the command line or from a config file – which path is also given in the command line (more in this below). In the example above, the `dataset` field doesn't have a default value which means it must be specified.
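-
-To make the mapping between nested dataclasses and dotted CLI arguments concrete, here is a small standalone illustration in plain Python (it mimics the idea, not the actual Draccus machinery):
-```python
-from dataclasses import dataclass, field
-
-@dataclass
-class DatasetConfig:
-    repo_id: str
-    episodes: list[int] | None = None
-    video_backend: str = "pyav"
-
-@dataclass
-class TrainPipelineConfig:
-    # A default is added here only so the example runs without any CLI parsing.
-    dataset: DatasetConfig = field(default_factory=lambda: DatasetConfig(repo_id="lerobot/pusht"))
-
-def apply_override(cfg, dotted_key: str, value: str) -> None:
-    """Mimic '--dataset.repo_id=repo/id': walk the attribute path and set the leaf value."""
-    *path, leaf = dotted_key.split(".")
-    target = cfg
-    for attr in path:
-        target = getattr(target, attr)
-    setattr(target, leaf, value)
-
-cfg = TrainPipelineConfig()
-apply_override(cfg, "dataset.repo_id", "lerobot/aloha_sim_insertion_human")
-print(cfg.dataset.repo_id)  # -> lerobot/aloha_sim_insertion_human
-```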
-
-
-## Specifying values from the CLI
-
-Let's say that we want to train [Diffusion Policy](../lerobot/common/policies/diffusion) on the [pusht](https://huggingface.co/datasets/lerobot/pusht) dataset, using the [gym_pusht](https://github.com/huggingface/gym-pusht) environment for evaluation. The command to do so would look like this:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=lerobot/pusht \
- --policy.type=diffusion \
- --env.type=pusht
-```
-
-Let's break this down:
-- To specify the dataset, we just need to specify its `repo_id` on the hub which is the only required argument in the `DatasetConfig`. The rest of the fields have default values and in this case we are fine with those so we can just add the option `--dataset.repo_id=lerobot/pusht`.
-- To specify the policy, we can just select diffusion policy using `--policy` appended with `.type`. Here, `.type` is a special argument which allows us to select config classes inheriting from `draccus.ChoiceRegistry` and that have been decorated with the `register_subclass()` method. To have a better explanation of this feature, have a look at this [Draccus demo](https://github.com/dlwh/draccus?tab=readme-ov-file#more-flexible-configuration-with-choice-types). In our code, we use this mechanism mainly to select policies, environments, robots, and some other components like optimizers. The policies available to select are located in [lerobot/common/policies](../lerobot/common/policies)
-- Similarly, we select the environment with `--env.type=pusht`. The different environment configs are available in [`lerobot/common/envs/configs.py`](../lerobot/common/envs/configs.py)
-
-Let's see another example. Let's say you've been training [ACT](../lerobot/common/policies/act) on [lerobot/aloha_sim_insertion_human](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_human) using the [gym-aloha](https://github.com/huggingface/gym-aloha) environment for evaluation with:
-```bash
-python lerobot/scripts/train.py \
- --policy.type=act \
- --dataset.repo_id=lerobot/aloha_sim_insertion_human \
- --env.type=aloha \
- --output_dir=outputs/train/act_aloha_insertion
-```
-> Notice we added `--output_dir` to explicitly tell where to write outputs from this run (checkpoints, training state, configs etc.). This is not mandatory and if you don't specify it, a default directory will be created from the current date and time, env.type and policy.type. This will typically look like `outputs/train/2025-01-24/16-10-05_aloha_act`.
-
-We now want to train a different policy for aloha on another task. We'll change the dataset and use [lerobot/aloha_sim_transfer_cube_human](https://huggingface.co/datasets/lerobot/aloha_sim_transfer_cube_human) instead. Of course, we also need to change the task of the environment as well to match this other task.
-Looking at the [`AlohaEnv`](../lerobot/common/envs/configs.py) config, the task is `"AlohaInsertion-v0"` by default, which corresponds to the task we trained on in the command above. The [gym-aloha](https://github.com/huggingface/gym-aloha?tab=readme-ov-file#description) environment also has the `AlohaTransferCube-v0` task which corresponds to this other task we want to train on. Putting this together, we can train this new policy on this different task using:
-```bash
-python lerobot/scripts/train.py \
- --policy.type=act \
- --dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
- --env.type=aloha \
- --env.task=AlohaTransferCube-v0 \
- --output_dir=outputs/train/act_aloha_transfer
-```
-
-## Loading from a config file
-
-Now, let's assume that we want to reproduce the run just above. That run has produced a `train_config.json` file in its checkpoints, which serializes the `TrainPipelineConfig` instance it used:
-```json
-{
- "dataset": {
- "repo_id": "lerobot/aloha_sim_transfer_cube_human",
- "episodes": null,
- ...
- },
- "env": {
- "type": "aloha",
- "task": "AlohaTransferCube-v0",
- "fps": 50,
- ...
- },
- "policy": {
- "type": "act",
- "n_obs_steps": 1,
- ...
- },
- ...
-}
-```
-
-We can then simply load the config values from this file using:
-```bash
-python lerobot/scripts/train.py \
- --config_path=outputs/train/act_aloha_transfer/checkpoints/last/pretrained_model/ \
- --output_dir=outputs/train/act_aloha_transfer_2
-```
-`--config_path` is also a special argument which allows you to initialize the config from a local config file. It can point to a directory that contains `train_config.json` or to the config file itself directly.
-
-Similarly to Hydra, we can still override some parameters in the CLI if we want to, e.g.:
-```bash
-python lerobot/scripts/train.py \
- --config_path=outputs/train/act_aloha_transfer/checkpoints/last/pretrained_model/ \
-    --output_dir=outputs/train/act_aloha_transfer_2 \
- --policy.n_action_steps=80
-```
-> Note: While `--output_dir` is not required in general, in this case we need to specify it since it will otherwise take the value from the `train_config.json` (which is `outputs/train/act_aloha_transfer`). In order to prevent accidental deletion of previous run checkpoints, we raise an error if you're trying to write in an existing directory. This is not the case when resuming a run, which is what you'll learn next.
-
-`--config_path` can also accept the repo_id of a repo on the hub that contains a `train_config.json` file, e.g. running:
-```bash
-python lerobot/scripts/train.py --config_path=lerobot/diffusion_pusht
-```
-will start a training run with the same configuration used for training [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht).
-
-
-## Resume training
-
-Being able to resume a training run is important in case it crashed or aborted for any reason. We'll demonstrate how to do that here.
-
-Let's reuse the command from the previous run and add a few more options:
-```bash
-python lerobot/scripts/train.py \
- --policy.type=act \
- --dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
- --env.type=aloha \
- --env.task=AlohaTransferCube-v0 \
- --log_freq=25 \
- --save_freq=100 \
- --output_dir=outputs/train/run_resumption
-```
-
-Here we've taken care to set the logging frequency and checkpointing frequency to low values so we can showcase resumption. You should be able to see some logging and have a first checkpoint within 1 minute (depending on hardware). Wait for the first checkpoint to happen; you should see a line that looks like this in your terminal:
-```
-INFO 2025-01-24 16:10:56 ts/train.py:263 Checkpoint policy after step 100
-```
-Now let's simulate a crash by killing the process (hit `ctrl`+`c`). We can then simply resume this run from the last checkpoint available with:
-```bash
-python lerobot/scripts/train.py \
- --config_path=outputs/train/run_resumption/checkpoints/last/pretrained_model/ \
- --resume=true
-```
-You should see from the logging that your training picks up from where it left off.
-
-Another reason you might want to resume a run is simply to extend training and add more training steps. The number of training steps is set by the option `--steps`, which is 100,000 by default.
-You could double the number of steps of the previous run with:
-```bash
-python lerobot/scripts/train.py \
- --config_path=outputs/train/run_resumption/checkpoints/last/pretrained_model/ \
- --resume=true \
- --steps=200000
-```
-
-## Outputs of a run
-In the output directory, there will be a folder called `checkpoints` with the following structure:
-```bash
-outputs/train/run_resumption/checkpoints
-├── 000100 # checkpoint_dir for training step 100
-│ ├── pretrained_model/
-│ │ ├── config.json # policy config
-│ │ ├── model.safetensors # policy weights
-│ │ └── train_config.json # train config
-│ └── training_state/
-│ ├── optimizer_param_groups.json # optimizer param groups
-│ ├── optimizer_state.safetensors # optimizer state
-│ ├── rng_state.safetensors # rng states
-│ ├── scheduler_state.json # scheduler state
-│ └── training_step.json # training step
-├── 000200
-└── last -> 000200 # symlink to the last available checkpoint
-```
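-
-Once a checkpoint exists, the `pretrained_model/` folder can be loaded back like any pretrained policy. A minimal sketch for the ACT run above (the exact policy class depends on which `policy.type` you trained):
-```python
-from lerobot.common.policies.act.modeling_act import ACTPolicy
-
-# Load the weights and config saved in the last checkpoint (or point to a specific step, e.g. 000200).
-checkpoint_dir = "outputs/train/run_resumption/checkpoints/last/pretrained_model"
-policy = ACTPolicy.from_pretrained(checkpoint_dir)
-policy.eval()
-
-print(policy.config.n_obs_steps)  # the policy config is restored from config.json
-```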
-
-## Fine-tuning a pre-trained policy
-
-In addition to the features currently in Draccus, we've added a special `.path` argument for the policy, which allows you to load a policy as you would with `PreTrainedPolicy.from_pretrained()`. In that case, `path` can be a local directory that contains a checkpoint or a repo_id pointing to a pretrained policy on the hub.
-
-For example, we could fine-tune a [policy pre-trained on the aloha transfer task](https://huggingface.co/lerobot/act_aloha_sim_transfer_cube_human) on the aloha insertion task. We can achieve this with:
-```bash
-python lerobot/scripts/train.py \
- --policy.path=lerobot/act_aloha_sim_transfer_cube_human \
- --dataset.repo_id=lerobot/aloha_sim_insertion_human \
- --env.type=aloha \
- --env.task=AlohaInsertion-v0
-```
-
-When doing so, keep in mind that the features of the fine-tuning dataset must match the input/output features of the pretrained policy.
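-
-A quick way to sanity-check this before launching a fine-tuning run is to compare the features the policy expects with what your dataset provides. A rough sketch (attribute names such as `dataset.features` may differ slightly across LeRobot versions):
-```python
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.common.policies.act.modeling_act import ACTPolicy
-
-policy = ACTPolicy.from_pretrained("lerobot/act_aloha_sim_transfer_cube_human")
-dataset = LeRobotDataset("lerobot/aloha_sim_insertion_human")
-
-# The policy config lists the features it expects as inputs/outputs;
-# compare them with the feature metadata stored in the dataset.
-print(policy.config.input_features)
-print(policy.config.output_features)
-print(dataset.features)
-```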
-
-## Typical logs and metrics
-
-When you start the training process, you will first see your full configuration being printed in the terminal. You can check it to make sure that you configured your run correctly. The final configuration will also be saved with the checkpoint.
-
-After that, you will see training log like this one:
-```
-INFO 2024-08-14 13:35:12 ts/train.py:192 step:0 smpl:64 ep:1 epch:0.00 loss:1.112 grdn:15.387 lr:2.0e-07 updt_s:1.738 data_s:4.774
-```
-or evaluation log:
-```
-INFO 2024-08-14 13:38:45 ts/train.py:226 step:100 smpl:6K ep:52 epch:0.25 ∑rwrd:20.693 success:0.0% eval_s:120.266
-```
-
-These logs will also be saved in wandb if `wandb.enable` is set to `true`. Here is the meaning of some of the abbreviations (a small parsing sketch follows the list):
-- `smpl`: number of samples seen during training.
-- `ep`: number of episodes seen during training. An episode contains multiple samples in a complete manipulation task.
-- `epch`: number of times all unique samples have been seen (epochs).
-- `grdn`: gradient norm.
-- `∑rwrd`: sum of rewards within each evaluation episode, averaged over all evaluation episodes.
-- `success`: average success rate of eval episodes. Reward and success are usually different, except in the sparse reward setting, where reward=1 only when the task is completed successfully.
-- `eval_s`: time to evaluate the policy in the environment, in seconds.
-- `updt_s`: time to update the network parameters, in seconds.
-- `data_s`: time to load a batch of data, in seconds.
-
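-If you ever want to post-process these logs outside of wandb, the metrics are easy to extract with a small amount of Python. A rough sketch (the regex and key list are ours, purely illustrative):
-```python
-import re
-
-line = (
-    "INFO 2024-08-14 13:35:12 ts/train.py:192 step:0 smpl:64 ep:1 epch:0.00 "
-    "loss:1.112 grdn:15.387 lr:2.0e-07 updt_s:1.738 data_s:4.774"
-)
-
-# Keep only the metric names we care about (the timestamp and file:line also contain ':').
-keys = {"step", "smpl", "ep", "epch", "loss", "grdn", "lr", "updt_s", "data_s"}
-metrics = {k: v for k, v in re.findall(r"(\w+):(\S+)", line) if k in keys}
-print(metrics)  # {'step': '0', 'smpl': '64', ..., 'data_s': '4.774'}
-```
-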
-Some metrics are useful for initial performance profiling. For example, if you find via the `nvidia-smi` command that GPU utilization is low and `data_s` is sometimes too high, you may need to adjust the batch size or the number of dataloading workers to accelerate dataloading. We also recommend the [pytorch profiler](https://github.com/huggingface/lerobot?tab=readme-ov-file#improve-your-code-with-profiling) for detailed performance probing.
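-
-As a concrete starting point for such profiling, here is a minimal, self-contained sketch using the PyTorch profiler on a toy model; swap the toy model and random batches for your policy and dataloader to see where your own run spends its time:
-```python
-import torch
-from torch import nn
-from torch.profiler import ProfilerActivity, profile
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 6)).to(device)
-optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
-
-activities = [ProfilerActivity.CPU] + ([ProfilerActivity.CUDA] if device == "cuda" else [])
-with profile(activities=activities) as prof:
-    for _ in range(10):
-        batch = torch.randn(64, 64, device=device)  # stand-in for a real data batch
-        loss = model(batch).pow(2).mean()            # stand-in for the policy loss
-        loss.backward()
-        optimizer.step()
-        optimizer.zero_grad()
-
-# The table shows where time is spent (ops, data movement, optimizer, ...).
-print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
-```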
-
-## In short
-
-We'll summarize here the main use cases to remember from this tutorial.
-
-#### Train a policy from scratch – CLI
-```bash
-python lerobot/scripts/train.py \
- --policy.type=act \ # <- select 'act' policy
- --env.type=pusht \ # <- select 'pusht' environment
- --dataset.repo_id=lerobot/pusht # <- train on this dataset
-```
-
-#### Train a policy from scratch - config file + CLI
-```bash
-python lerobot/scripts/train.py \
- --config_path=path/to/pretrained_model \ # <- can also be a repo_id
- --policy.n_action_steps=80 # <- you may still override values
-```
-
-#### Resume/continue a training run
-```bash
-python lerobot/scripts/train.py \
- --config_path=checkpoint/pretrained_model/ \
- --resume=true \
- --steps=200000 # <- you can change some training parameters
-```
-
-#### Fine-tuning
-```bash
-python lerobot/scripts/train.py \
- --policy.path=lerobot/act_aloha_sim_transfer_cube_human \ # <- can also be a local path to a checkpoint
- --dataset.repo_id=lerobot/aloha_sim_insertion_human \
- --env.type=aloha \
- --env.task=AlohaInsertion-v0
-```
-
----
-
-Now that you know the basics of how to train a policy, you might be wondering how to apply this knowledge to actual robots, or how to record your own datasets and train policies for your specific task.
-If that's the case, head over to the next tutorial [`7_get_started_with_real_robot.md`](./7_get_started_with_real_robot.md).
-
-Or in the meantime, happy training! 🤗
diff --git a/examples/7_get_started_with_real_robot.md b/examples/7_get_started_with_real_robot.md
deleted file mode 100644
index 9a4db525..00000000
--- a/examples/7_get_started_with_real_robot.md
+++ /dev/null
@@ -1,998 +0,0 @@
-# Getting Started with Real-World Robots
-
-This tutorial will guide you through the process of setting up and training a neural network to autonomously control a real robot.
-
-**What You'll Learn:**
-1. How to order and assemble your robot.
-2. How to connect, configure, and calibrate your robot.
-3. How to record and visualize your dataset.
-4. How to train a policy using your data and prepare it for evaluation.
-5. How to evaluate your policy and visualize the results.
-
-By following these steps, you'll be able to replicate tasks like picking up a Lego block and placing it in a bin with a high success rate, as demonstrated in [this video](https://x.com/RemiCadene/status/1814680760592572934).
-
-This tutorial is specifically made for the affordable [Koch v1.1](https://github.com/jess-moss/koch-v1-1) robot, but it contains additional information so it can easily be adapted to various types of robots, like the [Aloha bimanual robot](https://aloha-2.github.io), by changing some configurations. The Koch v1.1 consists of a leader arm and a follower arm, each with 6 motors. It can work with one or several cameras to record the scene, which serve as visual sensors for the robot.
-
-During the data collection phase, you will control the follower arm by moving the leader arm. This process is known as "teleoperation." This technique is used to collect robot trajectories. Afterward, you'll train a neural network to imitate these trajectories and deploy the network to enable your robot to operate autonomously.
-
-If you encounter any issues at any step of the tutorial, feel free to seek help on [Discord](https://discord.com/invite/s3KuuzsPFb) or don't hesitate to iterate with us on the tutorial by creating issues or pull requests. Thanks!
-
-## 1. Order and Assemble your Koch v1.1
-
-Follow the sourcing and assembling instructions provided on the [Koch v1.1 Github page](https://github.com/jess-moss/koch-v1-1). This will guide you through setting up both the follower and leader arms, as shown in the image below.
-
-
-For a visual walkthrough of the assembly process, you can refer to [this video tutorial](https://youtu.be/8nQIg9BwwTk).
-
-## 2. Configure motors, calibrate arms, teleoperate your Koch v1.1
-
-First, install the additional dependencies required for robots built with dynamixel motors like Koch v1.1 by running one of the following commands (make sure gcc is installed).
-
-Using `pip`:
-```bash
-pip install -e ".[dynamixel]"
-```
-
-Using `poetry`:
-```bash
-poetry sync --extras "dynamixel"
-```
-
-Using `uv`:
-```bash
-uv sync --extra "dynamixel"
-```
-
-You are now ready to plug the 5V power supply to the motor bus of the leader arm (the smaller one) since all its motors only require 5V.
-
-Then plug the 12V power supply to the motor bus of the follower arm. It has two motors that need 12V, and the rest will be powered with 5V through the voltage convertor.
-
-Finally, connect both arms to your computer via USB. Note that the USB doesn't provide any power, and both arms need to be plugged in with their associated power supply to be detected by your computer.
-
-Now you are ready to configure your motors for the first time, as detailed in the sections below. In the upcoming sections, you'll learn about our classes and functions by running some python code in an interactive session, or by copy-pasting it in a python file.
-
-If you have already configured your motors the first time, you can streamline the process by directly running the teleoperate script (which is detailed further in the tutorial):
-
-> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`.
-
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=koch \
- --control.type=teleoperate
-```
-
-It will automatically:
-1. Identify any missing calibrations and initiate the calibration procedure.
-2. Connect the robot and start teleoperation.
-
-### a. Control your motors with DynamixelMotorsBus
-
-You can use the [`DynamixelMotorsBus`](../lerobot/common/robot_devices/motors/dynamixel.py) to communicate with the motors connected as a chain to the corresponding USB bus. This class leverages the Python [Dynamixel SDK](https://emanual.robotis.com/docs/en/software/dynamixel/dynamixel_sdk/sample_code/python_read_write_protocol_2_0/#python-read-write-protocol-20) to facilitate reading from and writing to the motors.
-
-**First Configuration of your motors**
-
-You will need to unplug each motor in turn and run a command to identify the motor. The motor will save its own identification, so you only need to do this once. Start by unplugging all of the motors.
-
-Do the Leader arm first, as all of its motors are of the same type. Plug in your first motor on your leader arm and run this script to set its ID to 1.
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand dynamixel \
- --model xl330-m288 \
- --baudrate 1000000 \
- --ID 1
-```
-
-Then unplug your first motor and plug the second motor and set its ID to 2.
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem58760432961 \
- --brand dynamixel \
- --model xl330-m288 \
- --baudrate 1000000 \
- --ID 2
-```
-
-Redo the process for all your motors until ID 6.
-
-The process for the follower arm is almost the same, but the follower arm has two types of motors. For the first two motors, make sure you set the model to `xl430-w250`. _Important: configuring follower motors requires plugging and unplugging power. Make sure you use the 5V power for the XL330s and the 12V power for the XL430s!_
-
-After all of your motors are configured properly, you're ready to plug them all together in a daisy-chain as shown in the original video.
-
-**Instantiate the DynamixelMotorsBus**
-
-To begin, create two instances of the [`DynamixelMotorsBus`](../lerobot/common/robot_devices/motors/dynamixel.py), one for each arm, using their corresponding USB ports (e.g. `DynamixelMotorsBus(port="/dev/tty.usbmodem575E0031751")`).
-
-To find the correct ports for each arm, run the utility script twice:
-```bash
-python lerobot/scripts/find_motors_bus_port.py
-```
-
-Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
-
-[...Disconnect leader arm and press Enter...]
-
-The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751
-Reconnect the usb cable.
-```
-
-Example output when identifying the follower arm's port (e.g., `/dev/tty.usbmodem575E0032081`, or possibly `/dev/ttyACM1` on Linux):
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
-
-[...Disconnect follower arm and press Enter...]
-
-The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the usb cable.
-```
-
-Troubleshooting: On Linux, you might need to give access to the USB ports by running this command with your ports:
-```bash
-sudo chmod 666 /dev/tty.usbmodem575E0032081
-sudo chmod 666 /dev/tty.usbmodem575E0031751
-```
-
-*Listing and Configuring Motors*
-
-Next, you'll need to list the motors for each arm, including their name, index, and model. Initially, each motor is assigned the factory default index `1`. Since each motor requires a unique index to function correctly when connected in a chain on a common bus, you'll need to assign different indices. It's recommended to use an ascending index order, starting from `1` (e.g., `1, 2, 3, 4, 5, 6`). These indices will be saved in the persistent memory of each motor during the first connection.
-
-To assign indices to the motors, run this code in an interactive Python session. Replace the `port` values with the ones you identified earlier:
-```python
-from lerobot.common.robot_devices.motors.configs import DynamixelMotorsBusConfig
-from lerobot.common.robot_devices.motors.dynamixel import DynamixelMotorsBus
-
-leader_config = DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem575E0031751",
- motors={
- # name: (index, model)
- "shoulder_pan": (1, "xl330-m077"),
- "shoulder_lift": (2, "xl330-m077"),
- "elbow_flex": (3, "xl330-m077"),
- "wrist_flex": (4, "xl330-m077"),
- "wrist_roll": (5, "xl330-m077"),
- "gripper": (6, "xl330-m077"),
- },
-)
-
-follower_config = DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem575E0032081",
- motors={
- # name: (index, model)
- "shoulder_pan": (1, "xl430-w250"),
- "shoulder_lift": (2, "xl430-w250"),
- "elbow_flex": (3, "xl330-m288"),
- "wrist_flex": (4, "xl330-m288"),
- "wrist_roll": (5, "xl330-m288"),
- "gripper": (6, "xl330-m288"),
- },
-)
-
-leader_arm = DynamixelMotorsBus(leader_config)
-follower_arm = DynamixelMotorsBus(follower_config)
-```
-
-IMPORTANTLY: Now that you have your ports, update [`KochRobotConfig`](../lerobot/common/robot_devices/robots/configs.py). You will find something like:
-```python
-@RobotConfig.register_subclass("koch")
-@dataclass
-class KochRobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/koch"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem585A0085511", <-- UPDATE HERE
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "xl330-m077"],
- "shoulder_lift": [2, "xl330-m077"],
- "elbow_flex": [3, "xl330-m077"],
- "wrist_flex": [4, "xl330-m077"],
- "wrist_roll": [5, "xl330-m077"],
- "gripper": [6, "xl330-m077"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem585A0076891", <-- UPDATE HERE
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "xl430-w250"],
- "shoulder_lift": [2, "xl430-w250"],
- "elbow_flex": [3, "xl330-m288"],
- "wrist_flex": [4, "xl330-m288"],
- "wrist_roll": [5, "xl330-m288"],
- "gripper": [6, "xl330-m288"],
- },
- ),
- }
- )
-```
-
-**Connect and Configure your Motors**
-
-Before you can start using your motors, you'll need to configure them to ensure proper communication. When you first connect the motors, the [`DynamixelMotorsBus`](../lerobot/common/robot_devices/motors/dynamixel.py) automatically detects any mismatch between the current motor indices (factory set to `1`) and the specified indices (e.g., `1, 2, 3, 4, 5, 6`). This triggers a configuration procedure that requires you to unplug the power cord and motors, then reconnect each motor sequentially, starting from the one closest to the bus.
-
-For a visual guide, refer to the [video tutorial of the configuration procedure](https://youtu.be/U78QQ9wCdpY).
-
-To connect and configure the leader arm, run the following code in the same Python interactive session as earlier in the tutorial:
-```python
-leader_arm.connect()
-```
-
-When you connect the leader arm for the first time, you might see an output similar to this:
-```
-Read failed due to communication error on port /dev/tty.usbmodem575E0032081 for group_key ID_shoulder_pan_shoulder_lift_elbow_flex_wrist_flex_wrist_roll_gripper: [TxRxResult] There is no status packet!
-
-/!\ A configuration issue has been detected with your motors:
-If this is the first time you are using these motors, press enter to configure your motors... but before verify that all the cables are connected the proper way. If you find an issue, before making a modification, kill the python process, unplug the power cord to not damage the motors, rewire correctly, then plug the power again and relaunch the script.
-
-Motor indices detected: {9600: [1]}
-
-1. Unplug the power cord
-2. Plug/unplug minimal number of cables to only have the first 1 motor(s) (['shoulder_pan']) connected.
-3. Re-plug the power cord
-Press Enter to continue...
-
-*Follow the procedure*
-
-Setting expected motor indices: [1, 2, 3, 4, 5, 6]
-```
-
-Once the leader arm is configured, repeat the process for the follower arm by running:
-```python
-follower_arm.connect()
-```
-
-Congratulations! Both arms are now properly configured and connected. You won't need to go through the configuration procedure again in the future.
-
-**Troubleshooting**:
-
-If the configuration process fails, you may need to perform the configuration via the Dynamixel Wizard software instead.
-
-Known failure modes:
-- Calling `arm.connect()` raises `OSError: No motor found, but one new motor expected. Verify power cord is plugged in and retry` on Ubuntu 22.
-
-Steps:
-1. Visit https://emanual.robotis.com/docs/en/software/dynamixel/dynamixel_wizard2/#connect-dynamixel.
-2. Follow the software installation instructions in section 3 of the web page.
-3. Launch the software.
-4. Configure the device scanning options in the menu under `Tools` > `Options` > `Scan`. Check only Protocol 2.0, select only the USB port identifier of interest, select all baudrates, set the ID range to `[0, 10]`. _While this step was not strictly necessary, it greatly speeds up scanning_.
-5. For each motor in turn:
- - Disconnect the power to the driver board.
- - Connect **only** the motor of interest to the driver board, making sure to disconnect it from any other motors.
- - Reconnect the power to the driver board.
- - From the software menu select `Device` > `Scan` and let the scan run. A device should appear.
- - If the device has an asterisk (*) near it, it means the firmware is indeed outdated. From the software menu, select `Tools` > `Firmware Update`. Follow the prompts.
-  - The main panel should have a table with various parameters of the device (refer to the web page, section 5). Select the row with `ID`, and then set the desired ID in the bottom right panel by selecting it and clicking `Save`.
- - Just like you did with the ID, also set the `Baud Rate` to 1 Mbps.
-6. Check everything has been done right:
- - Rewire the arms in their final configuration and power both of them.
- - Scan for devices. All 12 motors should appear.
- - Select the motors one by one and move the arm. Check that the graphical indicator near the top right shows the movement.
-
-**Note:** There is a common issue with the Dynamixel XL430-W250 motors where the motors become undiscoverable after upgrading their firmware from the Mac and Windows Dynamixel Wizard2 applications. When this occurs, a firmware recovery is required (select `DYNAMIXEL Firmware Recovery` and follow the prompts). There are two known workarounds to conduct this firmware reset:
- 1) Install the Dynamixel Wizard on a linux machine and complete the firmware recovery
- 2) Use the Dynamixel U2D2 in order to perform the reset with Windows or Mac. This U2D2 can be purchased [here](https://www.robotis.us/u2d2/).
- For either solution, open DYNAMIXEL Wizard 2.0 and select the appropriate port. You will likely be unable to see the motor in the GUI at this time. Select `Firmware Recovery`, carefully choose the correct model, and wait for the process to complete. Finally, re-scan to confirm the firmware recovery was successful.
-
-**Read and Write with DynamixelMotorsBus**
-
-To get familiar with how `DynamixelMotorsBus` communicates with the motors, you can start by reading data from them. Copy-paste this code into the same interactive Python session:
-```python
-leader_pos = leader_arm.read("Present_Position")
-follower_pos = follower_arm.read("Present_Position")
-print(leader_pos)
-print(follower_pos)
-```
-
-Expected output might look like:
-```
-array([2054, 523, 3071, 1831, 3049, 2441], dtype=int32)
-array([2003, 1601, 56, 2152, 3101, 2283], dtype=int32)
-```
-
-Try moving the arms to various positions and observe how the values change.
-
-Now let's try to enable torque in the follower arm by copy pasting this code:
-```python
-from lerobot.common.robot_devices.motors.dynamixel import TorqueMode
-
-follower_arm.write("Torque_Enable", TorqueMode.ENABLED.value)
-```
-
-With torque enabled, the follower arm will be locked in its current position. Do not attempt to manually move the arm while torque is enabled, as this could damage the motors.
-
-Now, to get more familiar with reading and writing, let's move the arm programmatically by copy-pasting the following example code:
-```python
-# Get the current position
-position = follower_arm.read("Present_Position")
-
-# Update first motor (shoulder_pan) position by +10 steps
-position[0] += 10
-follower_arm.write("Goal_Position", position)
-
-# Update all motors position by -30 steps
-position -= 30
-follower_arm.write("Goal_Position", position)
-
-# Update gripper by +30 steps
-position[-1] += 30
-follower_arm.write("Goal_Position", position[-1], "gripper")
-```
-
-When you're done playing, you can try to disable the torque, but make sure you hold your robot so that it doesn't fall:
-```python
-follower_arm.write("Torque_Enable", TorqueMode.DISABLED.value)
-```
-
-Finally, disconnect the arms:
-```python
-leader_arm.disconnect()
-follower_arm.disconnect()
-```
-
-Alternatively, you can unplug the power cord, which will automatically disable torque and disconnect the motors.
-
-*/!\ Warning*: These motors tend to overheat, especially under torque or if left plugged in for too long. Unplug after use.
-
-### b. Teleoperate your Koch v1.1 with ManipulatorRobot
-
-**Instantiate the ManipulatorRobot**
-
-Before you can teleoperate your robot, you need to instantiate the [`ManipulatorRobot`](../lerobot/common/robot_devices/robots/manipulator.py) using the previously defined `leader_config` and `follower_config`.
-
-For the Koch v1.1 robot, we only have one leader arm, so we refer to it as `"main"` and define it as `leader_arms={"main": leader_config}`. We do the same for the follower arm. For other robots (like the Aloha), which may have two pairs of leader and follower arms, you would define them like this: `leader_arms={"left": left_leader_config, "right": right_leader_config}`. The same applies to the follower arms.
-
-
-Run the following code to instantiate your manipulator robot:
-```python
-from lerobot.common.robot_devices.robots.configs import KochRobotConfig
-from lerobot.common.robot_devices.robots.manipulator import ManipulatorRobot
-
-robot_config = KochRobotConfig(
- leader_arms={"main": leader_config},
- follower_arms={"main": follower_config},
- cameras={}, # We don't use any camera for now
-)
-robot = ManipulatorRobot(robot_config)
-```
-
-The `KochRobotConfig` is used to set the associated settings and calibration process. For instance, we activate the torque of the gripper of the leader Koch v1.1 arm and position it at a 40 degree angle to use it as a trigger.
-
-For the [Aloha bimanual robot](https://aloha-2.github.io), we would use `AlohaRobotConfig` to set different settings such as a secondary ID for shadow joints (shoulder, elbow). Specific to Aloha, LeRobot comes with default calibration files stored in `.cache/calibration/aloha_default`. Assuming the motors have been properly assembled, no manual calibration step is expected for Aloha.
-
-**Calibrate and Connect the ManipulatorRobot**
-
-Next, you'll need to calibrate your Koch robot to ensure that the leader and follower arms have the same position values when they are in the same physical position. This calibration is essential because it allows a neural network trained on one Koch robot to work on another.
-
-When you connect your robot for the first time, the [`ManipulatorRobot`](../lerobot/common/robot_devices/robots/manipulator.py) will detect if the calibration file is missing and trigger the calibration procedure. During this process, you will be guided to move each arm to three different positions.
-
-Here are the positions you'll move the follower arm to:
-
-| 1. Zero position | 2. Rotated position | 3. Rest position |
-| ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| | | |
-
-And here are the corresponding positions for the leader arm:
-
-| 1. Zero position | 2. Rotated position | 3. Rest position |
-| ----------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| | | |
-
-You can watch a [video tutorial of the calibration procedure](https://youtu.be/8drnU9uRY24) for more details.
-
-During calibration, we count the number of full 360-degree rotations your motors have made since they were first used. That's why we ask you to move to this arbitrary "zero" position. We don't actually "set" the zero position, so you don't need to be accurate. After calculating these "offsets" to shift the motor values around 0, we need to assess the rotation direction of each motor, which might differ. That's why we ask you to rotate all motors to roughly 90 degrees, to measure if the values changed negatively or positively.
-
-Finally, the rest position ensures that the follower and leader arms are roughly aligned after calibration, preventing sudden movements that could damage the motors when starting teleoperation.
-
-Importantly, once calibrated, all Koch robots will move to the same positions (e.g. zero and rotated position) when commanded.
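-
-To give an intuition for what these offsets and directions are used for, here is a rough, illustrative sketch of converting a raw motor reading into degrees (the Dynamixel motors used here count 4096 ticks per 360-degree revolution; the exact formula LeRobot applies lives in its calibration code):
-```python
-# Illustrative only: convert raw motor ticks to degrees given a calibration offset
-# and a drive direction (+1 or -1 depending on which way the motor counts).
-TICKS_PER_REV = 4096
-
-def ticks_to_degrees(raw: int, offset: int, direction: int) -> float:
-    return direction * (raw - offset) * 360.0 / TICKS_PER_REV
-
-offset = 2048  # raw reading captured at the "zero" position during calibration
-print(ticks_to_degrees(2048, offset, +1))  # ~0 degrees  (zero position)
-print(ticks_to_degrees(3072, offset, +1))  # ~90 degrees (rotated position)
-```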
-
-Run the following code to calibrate and connect your robot:
-```python
-robot.connect()
-```
-
-The output will look like this:
-```
-Connecting main follower arm
-Connecting main leader arm
-
-Missing calibration file '.cache/calibration/koch/main_follower.json'
-Running calibration of koch main follower...
-Move arm to zero position
-[...]
-Move arm to rotated position
-[...]
-Move arm to rest position
-[...]
-Calibration is done! Saving calibration file '.cache/calibration/koch/main_follower.json'
-
-Missing calibration file '.cache/calibration/koch/main_leader.json'
-Running calibration of koch main leader...
-Move arm to zero position
-[...]
-Move arm to rotated position
-[...]
-Move arm to rest position
-[...]
-Calibration is done! Saving calibration file '.cache/calibration/koch/main_leader.json'
-```
-
-*Verifying Calibration*
-
-Once calibration is complete, you can check the positions of the leader and follower arms to ensure they match. If the calibration was successful, the positions should be very similar.
-
-Run this code to get the positions in degrees:
-```python
-leader_pos = robot.leader_arms["main"].read("Present_Position")
-follower_pos = robot.follower_arms["main"].read("Present_Position")
-
-print(leader_pos)
-print(follower_pos)
-```
-
-Example output:
-```
-array([-0.43945312, 133.94531, 179.82422, -18.984375, -1.9335938, 34.541016], dtype=float32)
-array([-0.58723712, 131.72314, 174.98743, -16.872612, 0.786213, 35.271973], dtype=float32)
-```
-
-These values are in degrees, which makes them easier to interpret and debug. The zero position used during calibration should roughly correspond to 0 degrees for each motor, and the rotated position should roughly correspond to 90 degrees for each motor.
-
-**Teleoperate your Koch v1.1**
-
-You can easily teleoperate your robot by reading the positions from the leader arm and sending them as goal positions to the follower arm.
-
-To teleoperate your robot for 30 seconds at a frequency of approximately 200Hz, run the following code:
-```python
-import tqdm
-seconds = 30
-frequency = 200
-for _ in tqdm.tqdm(range(seconds*frequency)):
- leader_pos = robot.leader_arms["main"].read("Present_Position")
- robot.follower_arms["main"].write("Goal_Position", leader_pos)
-```
-
-*Using `teleop_step` for Teleoperation*
-
-Alternatively, you can teleoperate the robot using the `teleop_step` method from [`ManipulatorRobot`](../lerobot/common/robot_devices/robots/manipulator.py).
-
-Run this code to teleoperate:
-```python
-for _ in tqdm.tqdm(range(seconds*frequency)):
- robot.teleop_step()
-```
-
-*Recording data during Teleoperation*
-
-Teleoperation is particularly useful for recording data. You can use `teleop_step(record_data=True)` to return both the follower arm's position as `"observation.state"` and the leader arm's position as `"action"`. This function also converts the numpy arrays into PyTorch tensors. If you're working with a robot that has two leader and two follower arms (like the Aloha), the positions are concatenated.
-
-Run the following code to see how slowly moving the leader arm affects the observation and action:
-```python
-leader_pos = robot.leader_arms["main"].read("Present_Position")
-follower_pos = robot.follower_arms["main"].read("Present_Position")
-observation, action = robot.teleop_step(record_data=True)
-
-print(follower_pos)
-print(observation)
-print(leader_pos)
-print(action)
-```
-
-Expected output:
-```
-array([7.8223, 131.1328, 165.5859, -23.4668, -0.9668, 32.4316], dtype=float32)
-{'observation.state': tensor([7.8223, 131.1328, 165.5859, -23.4668, -0.9668, 32.4316])}
-array([3.4277, 134.1211, 179.8242, -18.5449, -1.5820, 34.7168], dtype=float32)
-{'action': tensor([3.4277, 134.1211, 179.8242, -18.5449, -1.5820, 34.7168])}
-```
-
-*Asynchronous Frame Recording*
-
-Additionally, `teleop_step` can asynchronously record frames from multiple cameras and include them in the observation dictionary as `"observation.images.CAMERA_NAME"`. This feature will be covered in more detail in the next section.
-
-*Disconnecting the Robot*
-
-When you're finished, make sure to disconnect your robot by running:
-```python
-robot.disconnect()
-```
-
-Alternatively, you can unplug the power cord, which will also disable torque.
-
-*/!\ Warning*: These motors tend to overheat, especially under torque or if left plugged in for too long. Unplug after use.
-
-### c. Add your cameras with OpenCVCamera
-
-**(Optional) Use your phone as camera on Linux**
-
-If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera
-
-1. *Install `v4l2loopback-dkms` and `v4l-utils`*. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
-```bash
-sudo apt install v4l2loopback-dkms v4l-utils
-```
-2. *Install [DroidCam](https://droidcam.app) on your phone*. This app is available for both iOS and Android.
-3. *Install [OBS Studio](https://obsproject.com)*. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
-```bash
-flatpak install flathub com.obsproject.Studio
-```
-4. *Install the DroidCam OBS plugin*. This plugin integrates DroidCam with OBS Studio. Install it with:
-```bash
-flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
-```
-5. *Start OBS Studio*. Launch with:
-```bash
-flatpak run com.obsproject.Studio
-```
-6. *Add your phone as a source*. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
-7. *Adjust resolution settings*. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
-8. *Start virtual camera*. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
-9. *Verify the virtual camera setup*. Use `v4l2-ctl` to list the devices:
-```bash
-v4l2-ctl --list-devices
-```
-You should see an entry like:
-```
-VirtualCam (platform:v4l2loopback-000):
-/dev/video1
-```
-10. *Check the camera resolution*. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
-```bash
-v4l2-ctl -d /dev/video1 --get-fmt-video
-```
-You should see an entry like:
-```
->>> Format Video Capture:
->>> Width/Height : 640/480
->>> Pixel Format : 'YUYV' (YUYV 4:2:2)
-```
-
-Troubleshooting: If the resolution is not correct, you will have to delete the virtual camera port and try again, as it cannot be changed.
-
-If everything is set up correctly, you can proceed with the rest of the tutorial.
-
-**(Optional) Use your iPhone as a camera on MacOS**
-
-To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
-- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
-- Sign in both devices with the same Apple ID.
-- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
-
-For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
-
-Your iPhone should be detected automatically when running the camera setup script in the next section.
-
-**Instantiate an OpenCVCamera**
-
-The [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py) class allows you to efficiently record frames from most cameras using the [`opencv-python`](https://docs.opencv.org) library. For more details on compatibility, see [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
-
-To instantiate an [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py), you need a camera index (e.g. `OpenCVCamera(camera_index=0)`). When you only have one camera, such as a laptop webcam, the camera index is usually `0`, but it might differ, and it might change if you reboot your computer or re-plug your camera. This behavior depends on your operating system.
-
-To find the camera indices, run the following utility script, which will save a few frames from each detected camera:
-```bash
-python lerobot/common/robot_devices/cameras/opencv.py \
- --images-dir outputs/images_from_opencv_cameras
-```
-
-The output will look something like this if you have two cameras connected:
-```
-Mac or Windows detected. Finding available camera indices through scanning all indices from 0 to 60
-[...]
-Camera found at index 0
-Camera found at index 1
-[...]
-Connecting cameras
-OpenCVCamera(0, fps=30.0, width=1920.0, height=1080.0, color_mode=rgb)
-OpenCVCamera(1, fps=24.0, width=1920.0, height=1080.0, color_mode=rgb)
-Saving images to outputs/images_from_opencv_cameras
-Frame: 0000 Latency (ms): 39.52
-[...]
-Frame: 0046 Latency (ms): 40.07
-Images have been saved to outputs/images_from_opencv_cameras
-```
-
-Check the saved images in `outputs/images_from_opencv_cameras` to identify which camera index corresponds to which physical camera (e.g. `0` for `camera_00` or `1` for `camera_01`):
-```
-camera_00_frame_000000.png
-[...]
-camera_00_frame_000047.png
-camera_01_frame_000000.png
-[...]
-camera_01_frame_000047.png
-```
-
-Note: Some cameras may take a few seconds to warm up, and the first frame might be black or green.
-
-Finally, run this code to instantiate and connect your camera:
-```python
-from lerobot.common.robot_devices.cameras.configs import OpenCVCameraConfig
-from lerobot.common.robot_devices.cameras.opencv import OpenCVCamera
-
-config = OpenCVCameraConfig(camera_index=0)
-camera = OpenCVCamera(config)
-camera.connect()
-color_image = camera.read()
-
-print(color_image.shape)
-print(color_image.dtype)
-```
-
-Expected output for a laptop camera on MacBookPro:
-```
-(1080, 1920, 3)
-uint8
-```
-
-Or like this if you followed our tutorial to set a virtual camera:
-```
-(480, 640, 3)
-uint8
-```
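-
-As noted above, some cameras need a few frames to warm up, so the very first image can be black or green. If that happens, you can simply read and discard a few frames right after connecting. A minimal sketch based on the API shown above:
-```python
-# Discard the first few frames while the camera warms up
-for _ in range(10):
-    color_image = camera.read()
-
-print(color_image.shape)
-print(color_image.dtype)
-```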
-
-With certain cameras, you can also specify additional parameters like frame rate, resolution, and color mode during instantiation. For instance:
-```python
-config = OpenCVCameraConfig(camera_index=0, fps=30, width=640, height=480)
-```
-
-If the provided arguments are not compatible with the camera, an exception will be raised.
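-
-If you are unsure which settings your camera supports, you can catch that exception and fall back to the defaults. This is only a sketch, assuming the incompatibility is reported when calling `connect()`:
-```python
-from lerobot.common.robot_devices.cameras.configs import OpenCVCameraConfig
-from lerobot.common.robot_devices.cameras.opencv import OpenCVCamera
-
-camera = OpenCVCamera(OpenCVCameraConfig(camera_index=0, fps=30, width=640, height=480))
-try:
-    camera.connect()
-except Exception as e:
-    # Fall back to the camera's default settings
-    print(f"Requested settings not supported ({e}), falling back to defaults")
-    camera = OpenCVCamera(OpenCVCameraConfig(camera_index=0))
-    camera.connect()
-```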
-
-*Disconnecting the camera*
-
-When you're done using the camera, disconnect it by running:
-```python
-camera.disconnect()
-```
-
-**Instantiate your robot with cameras**
-
-Additionally, you can set up your robot to work with your cameras.
-
-Modify the following Python code with the appropriate camera names and configurations:
-```python
-robot = ManipulatorRobot(
- KochRobotConfig(
- leader_arms={"main": leader_arm},
- follower_arms={"main": follower_arm},
- calibration_dir=".cache/calibration/koch",
- cameras={
- "laptop": OpenCVCameraConfig(0, fps=30, width=640, height=480),
- "phone": OpenCVCameraConfig(1, fps=30, width=640, height=480),
- },
- )
-)
-robot.connect()
-```
-
-As a result, `teleop_step(record_data=True)` will return a frame for each camera following the PyTorch "channel first" convention, but we keep images in `uint8` with pixels in the range [0, 255] to easily save them.
-
-Modify this code with the names of your cameras and run it:
-```python
-observation, action = robot.teleop_step(record_data=True)
-print(observation["observation.images.laptop"].shape)
-print(observation["observation.images.phone"].shape)
-print(observation["observation.images.laptop"].min().item())
-print(observation["observation.images.laptop"].max().item())
-```
-
-The output should look like this:
-```
-torch.Size([3, 480, 640])
-torch.Size([3, 480, 640])
-0
-255
-```
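-
-Because the frames returned in the `observation` dictionary above are kept as `uint8` tensors in the "channel first" convention, you can write one straight to disk, for example with `torchvision`. A small sketch; the camera name `laptop` and the file name are just examples:
-```python
-from torchvision.io import write_png
-
-# `write_png` expects a uint8 tensor of shape (3, H, W) on CPU
-write_png(observation["observation.images.laptop"], "laptop_frame.png")
-```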
-
-### d. Use `control_robot.py` and our `teleoperate` function
-
-Instead of manually running the python code in a terminal window, you can use [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) to instantiate your robot by providing the robot configurations via command line and control your robot with various modes as explained next.
-
-Try running this code to teleoperate your robot (if you don't have a camera, keep reading):
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=koch \
- --control.type=teleoperate
-```
-
-You will see a lot of lines appearing like this one:
-```
-INFO 2024-08-10 11:15:03 ol_robot.py:209 dt: 5.12 (195.1hz) dtRlead: 4.93 (203.0hz) dtWfoll: 0.19 (5239.0hz)
-```
-
-It contains:
-- `2024-08-10 11:15:03` which is the date and time of the call to the print function.
-- `ol_robot.py:209` which is the end of the file name and the line number where the print function is called (`lerobot/scripts/control_robot.py` line `209`).
-- `dt: 5.12 (195.1hz)` which is the "delta time" or the number of milliseconds spent between the previous call to `robot.teleop_step()` and the current one, associated with the frequency (5.12 ms equals 195.1 Hz); note that you can control the maximum frequency by adding `fps` as an argument, such as `--control.fps=30`.
-- `dtRlead: 4.93 (203.0hz)` which is the number of milliseconds it took to read the position of the leader arm using `leader_arm.read("Present_Position")`.
-- `dtWfoll: 0.19 (5239.0hz)` which is the number of milliseconds it took to set a new goal position for the follower arm using `follower_arm.write("Goal_position", leader_pos)`; note that writing is done asynchronously, so it takes less time than reading.
-
-Importantly: If you don't have any cameras, you can remove them dynamically with this [draccus](https://github.com/dlwh/draccus) syntax `--robot.cameras='{}'`:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=koch \
- --robot.cameras='{}' \
- --control.type=teleoperate
-```
-
-We advise creating a new YAML config file when the command becomes too long.
-
-## 3. Record your Dataset and Visualize it
-
-Using what you've learned previously, you can now easily record a dataset of states and actions for one episode. You can use `busy_wait` to control the speed of teleoperation and record at a fixed `fps` (frames per second).
-
-Try this code to record 30 seconds at 60 fps:
-```python
-import time
-from lerobot.scripts.control_robot import busy_wait
-
-record_time_s = 30
-fps = 60
-
-states = []
-actions = []
-for _ in range(record_time_s * fps):
- start_time = time.perf_counter()
- observation, action = robot.teleop_step(record_data=True)
-
- states.append(observation["observation.state"])
- actions.append(action["action"])
-
- dt_s = time.perf_counter() - start_time
- busy_wait(1 / fps - dt_s)
-
-# Note that observation and action are available in RAM, but
-# you could potentially store them on disk with pickle/hdf5 or
-# our optimized format `LeRobotDataset`. More on this next.
-```
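-
-As the comment above suggests, you could dump these lists to disk with plain PyTorch serialization before moving on to `LeRobotDataset`. A minimal sketch, with a file name chosen purely for illustration:
-```python
-import torch
-
-# Stack the per-step tensors into (num_steps, num_motors) tensors and save them
-torch.save(
-    {"states": torch.stack(states), "actions": torch.stack(actions)},
-    "koch_episode_0.pt",
-)
-```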
-
-Importantly, many utilities are still missing. For instance, if you have cameras, you will need to save the images to disk so you don't run out of RAM, and to do so in threads so you don't slow down communication with your robot. Also, you will need to store your data in a format optimized for training and web sharing like [`LeRobotDataset`](../lerobot/common/datasets/lerobot_dataset.py). More on this in the next section.
-
-### a. Use the `record` function
-
-You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) to achieve efficient data recording. It encompasses many recording utilities:
-1. Frames from cameras are saved on disk in threads, and encoded into videos at the end of each episode recording.
-2. Video streams from cameras are displayed in a window so that you can verify them.
-3. Data is stored with [`LeRobotDataset`](../lerobot/common/datasets/lerobot_dataset.py) format which is pushed to your Hugging Face page (unless `--control.push_to_hub=false` is provided).
-4. Checkpoints are done during recording, so if any issue occurs, you can resume recording by re-running the same command again with `--control.resume=true`. You will need to manually delete the dataset directory if you want to start recording from scratch.
-5. Set the flow of data recording using command line arguments:
- - `--control.warmup_time_s=10` defines the number of seconds before starting data collection. It allows the robot devices to warmup and synchronize (10 seconds by default).
- - `--control.episode_time_s=60` defines the number of seconds for data recording for each episode (60 seconds by default).
- - `--control.reset_time_s=60` defines the number of seconds for resetting the environment after each episode (60 seconds by default).
- - `--control.num_episodes=50` defines the number of episodes to record (50 by default).
-6. Control the flow during data recording using keyboard keys:
-   - Press the right arrow `->` at any time during episode recording to stop early and go to resetting. Press it again during resetting to stop early and go to the next episode recording.
-   - Press the left arrow `<-` at any time during episode recording or resetting to stop early, cancel the current episode, and re-record it.
-   - Press escape `ESC` at any time during episode recording to end the session early and go straight to video encoding and dataset uploading.
-7. Similarly to `teleoperate`, you can also use the command line to override anything.
-
-Before trying `record`, if you want to push your dataset to the hub, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-Also, store your Hugging Face username or organization name in a variable (e.g. `cadene` or `lerobot`). For instance, run this to use your Hugging Face username:
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-If you don't want to push to hub, use `--control.push_to_hub=false`.
-
-Now run this to record 2 episodes:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=koch \
- --control.type=record \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/koch_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=2 \
- --control.push_to_hub=true
-```
-
-
-This will write your dataset locally to `~/.cache/huggingface/lerobot/{repo-id}` (e.g. `~/.cache/huggingface/lerobot/cadene/koch_test`) and push it to the hub at `https://huggingface.co/datasets/{HF_USER}/{repo-id}`. Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
-
-You can look for other LeRobot datasets on the hub by searching for `LeRobot` tags: https://huggingface.co/datasets?other=LeRobot
-
-You will see a lot of lines appearing like this one:
-```
-INFO 2024-08-10 15:02:58 ol_robot.py:219 dt:33.34 (30.0hz) dtRlead: 5.06 (197.5hz) dtWfoll: 0.25 (3963.7hz) dtRfoll: 6.22 (160.7hz) dtRlaptop: 32.57 (30.7hz) dtRphone: 33.84 (29.5hz)
-```
-It contains:
-- `2024-08-10 15:02:58` which is the date and time of the call to the print function,
-- `ol_robot.py:219` which is the end of the file name and the line number where the print function is called (`lerobot/scripts/control_robot.py` line `219`).
-- `dt:33.34 (30.0hz)` which is the "delta time" or the number of milliseconds spent between the previous call to `robot.teleop_step(record_data=True)` and the current one, associated with the frequency (33.34 ms equals 30.0 Hz); note that we use `--control.fps=30` so we expect 30.0 Hz; when a step takes more time, the line appears in yellow.
-- `dtRlead: 5.06 (197.5hz)` which is the delta time of reading the present position of the leader arm.
-- `dtWfoll: 0.25 (3963.7hz)` which is the delta time of writing the goal position on the follower arm; writing is asynchronous, so it takes less time than reading.
-- `dtRfoll: 6.22 (160.7hz)` which is the delta time of reading the present position on the follower arm.
-- `dtRlaptop: 32.57 (30.7hz)` which is the delta time of capturing an image from the laptop camera in the thread running asynchronously.
-- `dtRphone: 33.84 (29.5hz)` which is the delta time of capturing an image from the phone camera in the thread running asynchronously.
-
-Troubleshooting:
-- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
-
-At the end of data recording, your dataset will be uploaded on your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/koch_test) that you can obtain by running:
-```bash
-echo https://huggingface.co/datasets/${HF_USER}/koch_test
-```
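-
-You can also load the recorded dataset back in Python to inspect it with `LeRobotDataset`. A short sketch; replace the repo id with your own:
-```python
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-
-dataset = LeRobotDataset("cadene/koch_test")
-print(dataset.fps)
-print(dataset.num_frames)
-print(dataset.meta.camera_keys)
-```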
-
-### b. Advice for recording dataset
-
-Once you're comfortable with data recording, it's time to create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings.
-
-In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
-
-Avoid adding too much variation too quickly, as it may hinder your results.
-
-In the coming months, we plan to release a foundational model for robotics. We anticipate that fine-tuning this model will enhance generalization, reducing the need for strict consistency during data collection.
-
-### c. Visualize all episodes
-
-You can visualize your dataset by running:
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id ${HF_USER}/koch_test
-```
-
-Note: You might need to add `--local-files-only 1` if your dataset was not uploaded to the Hugging Face hub.
-
-This will launch a local web server that looks like this:
-
-
-
-
-### d. Replay episode on your robot with the `replay` function
-
-A useful feature of [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) is the `replay` function, which allows you to replay on your robot any episode that you've recorded, or episodes from any other dataset. This function helps you test the repeatability of your robot's actions and assess transferability across robots of the same model.
-
-To replay the first episode of the dataset you just recorded, run the following command:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=koch \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/koch_test \
- --control.episode=0
-```
-
-Your robot should replicate movements similar to those you recorded. For example, check out [this video](https://x.com/RemiCadene/status/1793654950905680090) where we use `replay` on an Aloha robot from [Trossen Robotics](https://www.trossenrobotics.com).
-
-## 4. Train a policy on your data
-
-### a. Use the `train` script
-
-To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=${HF_USER}/koch_test \
- --policy.type=act \
- --output_dir=outputs/train/act_koch_test \
- --job_name=act_koch_test \
- --policy.device=cuda \
- --wandb.enable=true
-```
-
-Let's explain it:
-1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/koch_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
-3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
-4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
-
-For more information on the `train` script see the previous tutorial: [`examples/4_train_policy_with_script.md`](../examples/4_train_policy_with_script.md)
-
-### b. (Optional) Upload policy checkpoints to the hub
-
-Once training is done, upload the latest checkpoint with:
-```bash
-huggingface-cli upload ${HF_USER}/act_koch_test \
- outputs/train/act_koch_test/checkpoints/last/pretrained_model
-```
-
-You can also upload intermediate checkpoints with:
-```bash
-CKPT=010000
-huggingface-cli upload ${HF_USER}/act_koch_test_${CKPT} \
- outputs/train/act_koch_test/checkpoints/${CKPT}/pretrained_model
-```
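-
-Once uploaded, the checkpoint can be loaded back directly from the hub with `from_pretrained`, the same method the evaluation section below uses with a local path. A short sketch; replace the repo id with your own:
-```python
-from lerobot.common.policies.act.modeling_act import ACTPolicy
-
-policy = ACTPolicy.from_pretrained("cadene/act_koch_test")
-```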
-
-## 5. Evaluate your policy
-
-Now that you have a policy checkpoint, you can easily control your robot with it using methods from [`ManipulatorRobot`](../lerobot/common/robot_devices/robots/manipulator.py) and the policy.
-
-Try this code for running inference for 60 seconds at 30 fps:
-```python
-import time
-
-import torch
-
-from lerobot.common.policies.act.modeling_act import ACTPolicy
-from lerobot.scripts.control_robot import busy_wait
-
-# `robot` is the connected ManipulatorRobot instance from the previous sections
-
-inference_time_s = 60
-fps = 30
-device = "cuda" # TODO: On Mac, use "mps" or "cpu"
-
-ckpt_path = "outputs/train/act_koch_test/checkpoints/last/pretrained_model"
-policy = ACTPolicy.from_pretrained(ckpt_path)
-policy.to(device)
-
-for _ in range(inference_time_s * fps):
- start_time = time.perf_counter()
-
- # Read the follower state and access the frames from the cameras
- observation = robot.capture_observation()
-
- # Convert to pytorch format: channel first and float32 in [0,1]
- # with batch dimension
- for name in observation:
- if "image" in name:
- observation[name] = observation[name].type(torch.float32) / 255
- observation[name] = observation[name].permute(2, 0, 1).contiguous()
- observation[name] = observation[name].unsqueeze(0)
- observation[name] = observation[name].to(device)
-
- # Compute the next action with the policy
- # based on the current observation
- action = policy.select_action(observation)
- # Remove batch dimension
- action = action.squeeze(0)
- # Move to cpu, if not already the case
- action = action.to("cpu")
- # Order the robot to move
- robot.send_action(action)
-
- dt_s = time.perf_counter() - start_time
- busy_wait(1 / fps - dt_s)
-```
-
-### a. Use our `record` function
-
-Ideally, when controlling your robot with your neural network, you would want to record evaluation episodes and to be able to visualize them later on, or even train on them like in Reinforcement Learning. This pretty much corresponds to recording a new dataset but with a neural network providing the actions instead of teleoperation.
-
-To this end, you can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=koch \
- --control.type=record \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/eval_act_koch_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=10 \
- --control.push_to_hub=true \
- --control.policy.path=outputs/train/act_koch_test/checkpoints/last/pretrained_model
-```
-
-As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
-1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_koch_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_koch_test`).
-2. The name of the dataset begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_koch_test`).
-
-### b. Visualize evaluation afterwards
-
-You can then visualize your evaluation dataset by running the same command as before, but with the new inference dataset as argument:
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id ${HF_USER}/eval_act_koch_test
-```
-
-## 6. Next step
-
-Join our [Discord](https://discord.com/invite/s3KuuzsPFb) to collaborate on data collection and help us train a fully open-source foundational model for robotics!
diff --git a/examples/8_use_stretch.md b/examples/8_use_stretch.md
deleted file mode 100644
index 982e7257..00000000
--- a/examples/8_use_stretch.md
+++ /dev/null
@@ -1,161 +0,0 @@
-This tutorial explains how to use [Stretch 3](https://hello-robot.com/stretch-3-product) with LeRobot.
-
-## Setup
-
-Familiarize yourself with Stretch by following its [tutorials](https://docs.hello-robot.com/0.3/getting_started/hello_robot/) (recommended).
-
-To use LeRobot on Stretch, 3 options are available:
-- [tethered setup](https://docs.hello-robot.com/0.3/getting_started/connecting_to_stretch/#tethered-setup)
-- [untethered setup](https://docs.hello-robot.com/0.3/getting_started/connecting_to_stretch/#untethered-setup)
-- ssh directly into Stretch (you will first need to install and configure openssh-server on Stretch using one of the two setups above)
-
-
-## Install LeRobot
-
-On Stretch's CLI, follow these steps:
-
-1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
-```bash
-mkdir -p ~/miniconda3
-wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
-bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
-rm ~/miniconda3/miniconda.sh
-~/miniconda3/bin/conda init bash
-```
-
-2. Comment out these lines in `~/.profile` (they can mess up the paths used by conda, and `~/.local/bin` should already be in your `PATH`):
-```
-# set PATH so it includes user's private bin if it exists
-if [ -d "$HOME/.local/bin" ] ; then
- PATH="$HOME/.local/bin:$PATH"
-fi
-```
-
-3. Restart shell or `source ~/.bashrc`
-
-4. Create and activate a fresh conda environment for lerobot
-```bash
-conda create -y -n lerobot python=3.10 && conda activate lerobot
-```
-
-5. Clone LeRobot:
-```bash
-git clone https://github.com/huggingface/lerobot.git ~/lerobot
-```
-
-6. When using `miniconda`, install `ffmpeg` in your environment:
-```bash
-conda install ffmpeg -c conda-forge
-```
-
-7. Install LeRobot with stretch dependencies:
-```bash
-cd ~/lerobot && pip install -e ".[stretch]"
-```
-
-> **Note:** If you get this message, you can ignore it: `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.`
-
-8. Run a [system check](https://docs.hello-robot.com/0.3/getting_started/stretch_hardware_overview/#system-check) to make sure your robot is ready:
-```bash
-stretch_system_check.py
-```
-
-> **Note:** You may need to free the "robot process" after booting Stretch by running `stretch_free_robot_process.py`. For more info, see Stretch's [doc](https://docs.hello-robot.com/0.3/getting_started/stretch_hardware_overview/#turning-off-gamepad-teleoperation).
-
-You should get something like this:
-```bash
-For use with S T R E T C H (R) from Hello Robot Inc.
----------------------------------------------------------------------
-
-Model = Stretch 3
-Tool = DexWrist 3 w/ Gripper
-Serial Number = stretch-se3-3054
-
----- Checking Hardware ----
-[Pass] Comms are ready
-[Pass] Actuators are ready
-[Warn] Sensors not ready (IMU AZ = -10.19 out of range -10.1 to -9.5)
-[Pass] Battery voltage is 13.6 V
-
----- Checking Software ----
-[Pass] Ubuntu 22.04 is ready
-[Pass] All APT pkgs are setup correctly
-[Pass] Firmware is up-to-date
-[Pass] Python pkgs are up-to-date
-[Pass] ROS2 Humble is ready
-```
-
-## Teleoperate, record a dataset and run a policy
-
-**Calibrate (Optional)**
-Before operating Stretch, you need to [home](https://docs.hello-robot.com/0.3/getting_started/stretch_hardware_overview/#homing) it first. Be mindful about giving Stretch some space as this procedure will move the robot's arm and gripper. Now run this command:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=stretch \
- --control.type=calibrate
-```
-This is equivalent to running `stretch_robot_home.py`
-
-> **Note:** If you run any of the LeRobot scripts below and Stretch is not properly homed, it will automatically home/calibrate first.
-
-**Teleoperate**
-Before trying teleoperation, you need to activate the gamepad controller by pressing the middle button. For more info, see Stretch's [doc](https://docs.hello-robot.com/0.3/getting_started/hello_robot/#gamepad-teleoperation).
-
-Now try out teleoperation (see above documentation to learn about the gamepad controls):
-
-> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`.
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=stretch \
- --control.type=teleoperate
-```
-This is essentially the same as running `stretch_gamepad_teleop.py`
-
-**Record a dataset**
-Once you're familiar with the gamepad controls and after a bit of practice, you can try to record your first dataset with Stretch.
-
-If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Store your Hugging Face repository name in a variable to run these commands:
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-
-Record one episode:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=stretch \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/stretch_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=2 \
- --control.push_to_hub=true
-```
-
-> **Note:** If you're using ssh to connect to Stretch and run this script, you won't be able to visualize its camera feeds (though they will still be recorded). To see the camera streams, use the [tethered](https://docs.hello-robot.com/0.3/getting_started/connecting_to_stretch/#tethered-setup) or [untethered setup](https://docs.hello-robot.com/0.3/getting_started/connecting_to_stretch/#untethered-setup).
-
-**Replay an episode**
-Now try to replay this episode (make sure the robot's initial position is the same):
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=stretch \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/stretch_test \
- --control.episode=0
-```
-
-Follow [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) to train a policy on your data and run inference on your robot. You will need to adapt the code for Stretch.
-
-> TODO(rcadene, aliberts): Add already setup environment and policy yaml configuration files
-
-If you need help, please reach out on Discord in the channel `#stretch3-mobile-arm`.
diff --git a/examples/9_use_aloha.md b/examples/9_use_aloha.md
deleted file mode 100644
index be2a323b..00000000
--- a/examples/9_use_aloha.md
+++ /dev/null
@@ -1,182 +0,0 @@
-This tutorial explains how to use [Aloha and Aloha 2 stationary](https://www.trossenrobotics.com/aloha-stationary) with LeRobot.
-
-## Setup
-
-Follow the [documentation from Trossen Robotics](https://docs.trossenrobotics.com/aloha_docs/2.0/getting_started/stationary/hardware_setup.html) for setting up the hardware and plugging the 4 arms and 4 cameras to your computer.
-
-
-## Install LeRobot
-
-On your computer:
-
-1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
-```bash
-mkdir -p ~/miniconda3
-wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
-bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
-rm ~/miniconda3/miniconda.sh
-~/miniconda3/bin/conda init bash
-```
-
-2. Restart shell or `source ~/.bashrc`
-
-3. Create and activate a fresh conda environment for lerobot
-```bash
-conda create -y -n lerobot python=3.10 && conda activate lerobot
-```
-
-4. Clone LeRobot:
-```bash
-git clone https://github.com/huggingface/lerobot.git ~/lerobot
-```
-
-5. When using `miniconda`, install `ffmpeg` in your environment:
-```bash
-conda install ffmpeg -c conda-forge
-```
-
-6. Install LeRobot with dependencies for the Aloha motors (dynamixel) and cameras (intelrealsense):
-```bash
-cd ~/lerobot && pip install -e ".[dynamixel, intelrealsense]"
-```
-
-## Teleoperate
-
-**/!\ FOR SAFETY, READ THIS /!\**
-Teleoperation consists of manually operating the leader arms to move the follower arms. Importantly:
-1. Make sure your leader arms are in the same position as the follower arms, so that the follower arms don't move too fast to match the leader arms,
-2. Our code assumes that your robot has been assembled following Trossen Robotics instructions. This allows us to skip calibration, as we use the pre-defined calibration files in `.cache/calibration/aloha_default`. If you replace a motor, make sure you follow the exact instructions from Trossen Robotics.
-
-By running the following code, you can start your first **SAFE** teleoperation:
-
-> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`.
-
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=aloha \
- --robot.max_relative_target=5 \
- --control.type=teleoperate
-```
-
-By adding `--robot.max_relative_target=5`, we override the default value for `max_relative_target` defined in [`AlohaRobotConfig`](lerobot/common/robot_devices/robots/configs.py). It is expected to be `5` to limit the magnitude of the movement for more safety, but the teleoperation won't be smooth. When you feel confident, you can disable this limit by adding `--robot.max_relative_target=null` to the command line:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=aloha \
- --robot.max_relative_target=null \
- --control.type=teleoperate
-```
-
-## Record a dataset
-
-Once you're familiar with teleoperation, you can record your first dataset with Aloha.
-
-If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Store your Hugging Face repository name in a variable to run these commands:
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-
-Record 2 episodes and upload your dataset to the hub:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=aloha \
- --robot.max_relative_target=null \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/aloha_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=2 \
- --control.push_to_hub=true
-```
-
-## Visualize a dataset
-
-If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
-```bash
-echo ${HF_USER}/aloha_test
-```
-
-If you didn't upload it (i.e. you used `--control.push_to_hub=false`), you can also visualize it locally with:
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id ${HF_USER}/aloha_test
-```
-
-## Replay an episode
-
-**/!\ FOR SAFETY, READ THIS /!\**
-Replay consists of automatically replaying the sequence of actions (i.e. goal positions for your motors) recorded in a given dataset episode. Make sure the current initial position of your robot is similar to the one in your episode, so that your follower arms don't move too fast to go to the first goal positions. For safety, you might want to add `--robot.max_relative_target=5` to your command line as explained above.
-
-Now try to replay the first episode on your robot:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=aloha \
- --robot.max_relative_target=null \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=${HF_USER}/aloha_test \
- --control.episode=0
-```
-
-## Train a policy
-
-To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=${HF_USER}/aloha_test \
- --policy.type=act \
- --output_dir=outputs/train/act_aloha_test \
- --job_name=act_aloha_test \
- --policy.device=cuda \
- --wandb.enable=true
-```
-
-Let's explain it:
-1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/aloha_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `cam_high` and `cam_low`) which have been saved in your dataset.
-3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
-4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
-
-For more information on the `train` script see the previous tutorial: [`examples/4_train_policy_with_script.md`](../examples/4_train_policy_with_script.md)
-
-Training should take several hours. You will find checkpoints in `outputs/train/act_aloha_test/checkpoints`.
-
-## Evaluate your policy
-
-You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=aloha \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=${HF_USER}/eval_act_aloha_test \
- --control.tags='["tutorial"]' \
- --control.warmup_time_s=5 \
- --control.episode_time_s=30 \
- --control.reset_time_s=30 \
- --control.num_episodes=10 \
- --control.push_to_hub=true \
- --control.policy.path=outputs/train/act_aloha_test/checkpoints/last/pretrained_model \
- --control.num_image_writer_processes=1
-```
-
-As you can see, it's almost the same command as previously used to record your training dataset. Three things changed:
-1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_aloha_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_aloha_test`).
-2. The name of the dataset begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_aloha_test`).
-3. We use `--control.num_image_writer_processes=1` instead of the default value (`0`). On our computer, using a dedicated process to write images from the 4 cameras on disk allows us to reach a constant 30 fps during inference. Feel free to explore different values for `--control.num_image_writer_processes`.
-
-## More
-
-Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth explanation.
-
-If you have any question or need help, please reach out on Discord in the channel `#aloha-arm`.
diff --git a/examples/advanced/1_add_image_transforms.py b/examples/advanced/1_add_image_transforms.py
deleted file mode 100644
index f1460926..00000000
--- a/examples/advanced/1_add_image_transforms.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script demonstrates how to use torchvision's image transformations with LeRobotDataset for data
-augmentation purposes. The transformations are passed to the dataset as an argument upon creation, and
-transforms are applied to the observation images before they are returned in the dataset's __getitem__.
-"""
-
-from pathlib import Path
-
-from torchvision.transforms import ToPILImage, v2
-
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-
-dataset_repo_id = "lerobot/aloha_static_screw_driver"
-
-# Create a LeRobotDataset with no transformations
-dataset = LeRobotDataset(dataset_repo_id, episodes=[0])
-# This is equivalent to `dataset = LeRobotDataset(dataset_repo_id, image_transforms=None)`
-
-# Get the index of the first observation in the first episode
-first_idx = dataset.episode_data_index["from"][0].item()
-
-# Get the frame corresponding to the first camera
-frame = dataset[first_idx][dataset.meta.camera_keys[0]]
-
-
-# Define the transformations
-transforms = v2.Compose(
- [
- v2.ColorJitter(brightness=(0.5, 1.5)),
- v2.ColorJitter(contrast=(0.5, 1.5)),
- v2.ColorJitter(hue=(-0.1, 0.1)),
- v2.RandomAdjustSharpness(sharpness_factor=2, p=1),
- ]
-)
-
-# Create another LeRobotDataset with the defined transformations
-transformed_dataset = LeRobotDataset(dataset_repo_id, episodes=[0], image_transforms=transforms)
-
-# Get a frame from the transformed dataset
-transformed_frame = transformed_dataset[first_idx][transformed_dataset.meta.camera_keys[0]]
-
-# Create a directory to store output images
-output_dir = Path("outputs/image_transforms")
-output_dir.mkdir(parents=True, exist_ok=True)
-
-# Save the original frame
-to_pil = ToPILImage()
-to_pil(frame).save(output_dir / "original_frame.png", quality=100)
-print(f"Original frame saved to {output_dir / 'original_frame.png'}.")
-
-# Save the transformed frame
-to_pil(transformed_frame).save(output_dir / "transformed_frame.png", quality=100)
-print(f"Transformed frame saved to {output_dir / 'transformed_frame.png'}.")
diff --git a/examples/advanced/2_calculate_validation_loss.py b/examples/advanced/2_calculate_validation_loss.py
deleted file mode 100644
index aac8e2e4..00000000
--- a/examples/advanced/2_calculate_validation_loss.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""This script demonstrates how to slice a dataset and calculate the loss on a subset of the data.
-
-This technique can be useful for debugging and testing purposes, as well as identifying whether a policy
-is learning effectively.
-
-However, relying on validation loss to evaluate performance is generally not considered a good practice,
-especially in the context of imitation learning. The most reliable approach is to evaluate the policy directly
-on the target environment, whether that be in simulation or the real world.
-"""
-
-import math
-
-import torch
-
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
-from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy
-
-
-def main():
- device = torch.device("cuda")
-
- # Download the diffusion policy for pusht environment
- pretrained_policy_path = "lerobot/diffusion_pusht"
- # OR uncomment the following to evaluate a policy from the local outputs/train folder.
- # pretrained_policy_path = Path("outputs/train/example_pusht_diffusion")
-
- policy = DiffusionPolicy.from_pretrained(pretrained_policy_path)
- policy.eval()
- policy.to(device)
-
- # Set up the dataset.
- delta_timestamps = {
- # Load the previous image and state at -0.1 seconds before current frame,
- # then load current image and state corresponding to 0.0 second.
- "observation.image": [-0.1, 0.0],
- "observation.state": [-0.1, 0.0],
- # Load the previous action (-0.1), the next action to be executed (0.0),
- # and 14 future actions with a 0.1 seconds spacing. All these actions will be
- # used to calculate the loss.
- "action": [-0.1, 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4],
- }
-
- # Load the last 10% of episodes of the dataset as a validation set.
- # - Load dataset metadata
- dataset_metadata = LeRobotDatasetMetadata("lerobot/pusht")
- # - Calculate train and val episodes
- total_episodes = dataset_metadata.total_episodes
- episodes = list(range(dataset_metadata.total_episodes))
- num_train_episodes = math.floor(total_episodes * 90 / 100)
- train_episodes = episodes[:num_train_episodes]
- val_episodes = episodes[num_train_episodes:]
- print(f"Number of episodes in full dataset: {total_episodes}")
- print(f"Number of episodes in training dataset (90% subset): {len(train_episodes)}")
- print(f"Number of episodes in validation dataset (10% subset): {len(val_episodes)}")
- # - Load train and val datasets
- train_dataset = LeRobotDataset(
- "lerobot/pusht", episodes=train_episodes, delta_timestamps=delta_timestamps
- )
- val_dataset = LeRobotDataset("lerobot/pusht", episodes=val_episodes, delta_timestamps=delta_timestamps)
- print(f"Number of frames in training dataset (90% subset): {len(train_dataset)}")
- print(f"Number of frames in validation dataset (10% subset): {len(val_dataset)}")
-
- # Create dataloader for evaluation.
- val_dataloader = torch.utils.data.DataLoader(
- val_dataset,
- num_workers=4,
- batch_size=64,
- shuffle=False,
- pin_memory=device != torch.device("cpu"),
- drop_last=False,
- )
-
- # Run validation loop.
- loss_cumsum = 0
- n_examples_evaluated = 0
- for batch in val_dataloader:
- batch = {k: v.to(device, non_blocking=True) for k, v in batch.items()}
- loss, _ = policy.forward(batch)
-
- loss_cumsum += loss.item()
- n_examples_evaluated += batch["index"].shape[0]
-
- # Calculate the average loss over the validation set.
- average_loss = loss_cumsum / n_examples_evaluated
-
- print(f"Average loss on validation set: {average_loss:.4f}")
-
-
-if __name__ == "__main__":
- main()
diff --git a/examples/backward_compatibility/replay.py b/examples/backward_compatibility/replay.py
new file mode 100644
index 00000000..6bca0570
--- /dev/null
+++ b/examples/backward_compatibility/replay.py
@@ -0,0 +1,106 @@
+# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Replays the actions of an episode from a dataset on a robot.
+
+Example:
+
+```shell
+lerobot-replay \
+ --robot.type=so100_follower \
+ --robot.port=/dev/tty.usbmodem58760431541 \
+ --robot.id=black \
+ --dataset.repo_id=aliberts/record-test \
+ --dataset.episode=2
+```
+"""
+
+import logging
+import time
+from dataclasses import asdict, dataclass
+from pathlib import Path
+from pprint import pformat
+
+import draccus
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.robots import ( # noqa: F401
+ Robot,
+ RobotConfig,
+ koch_follower,
+ make_robot_from_config,
+ so100_follower,
+ so101_follower,
+)
+from lerobot.utils.constants import ACTION
+from lerobot.utils.robot_utils import busy_wait
+from lerobot.utils.utils import (
+ init_logging,
+ log_say,
+)
+
+
+@dataclass
+class DatasetReplayConfig:
+ # Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).
+ repo_id: str
+ # Episode to replay.
+ episode: int
+ # Root directory where the dataset will be stored (e.g. 'dataset/path').
+ root: str | Path | None = None
+    # Limit the frames per second (defaults to 30).
+ fps: int = 30
+
+
+@dataclass
+class ReplayConfig:
+ robot: RobotConfig
+ dataset: DatasetReplayConfig
+ # Use vocal synthesis to read events.
+ play_sounds: bool = True
+
+
+@draccus.wrap()
+def replay(cfg: ReplayConfig):
+ init_logging()
+ logging.info(pformat(asdict(cfg)))
+
+ robot = make_robot_from_config(cfg.robot)
+ dataset = LeRobotDataset(cfg.dataset.repo_id, root=cfg.dataset.root, episodes=[cfg.dataset.episode])
+ actions = dataset.hf_dataset.select_columns(ACTION)
+ robot.connect()
+
+ log_say("Replaying episode", cfg.play_sounds, blocking=True)
+ for idx in range(dataset.num_frames):
+ start_episode_t = time.perf_counter()
+
+ action_array = actions[idx][ACTION]
+ action = {}
+ for i, name in enumerate(dataset.features[ACTION]["names"]):
+ key = f"{name.removeprefix('main_')}.pos"
+ action[key] = action_array[i].item()
+
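+        # Remap shoulder_lift and elbow_flex, which were recorded with an older joint convention (backward compatibility)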
+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
+ action["elbow_flex.pos"] -= 90
+ robot.send_action(action)
+
+ dt_s = time.perf_counter() - start_episode_t
+ busy_wait(1 / dataset.fps - dt_s)
+
+ robot.disconnect()
+
+
+if __name__ == "__main__":
+ replay()
diff --git a/examples/1_load_lerobot_dataset.py b/examples/dataset/load_lerobot_dataset.py
similarity index 88%
rename from examples/1_load_lerobot_dataset.py
rename to examples/dataset/load_lerobot_dataset.py
index 07db38a1..a6916981 100644
--- a/examples/1_load_lerobot_dataset.py
+++ b/examples/dataset/load_lerobot_dataset.py
@@ -32,7 +32,7 @@ import torch
from huggingface_hub import HfApi
import lerobot
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
+from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
# We ported a number of existing datasets ourselves, use this to see the list:
print("List of available datasets:")
@@ -92,11 +92,11 @@ print(dataset.hf_dataset)
# LeRobot datasets also subclasses PyTorch datasets so you can do everything you know and love from working
# with the latter, like iterating through the dataset.
# The __getitem__ iterates over the frames of the dataset. Since our datasets are also structured by
-# episodes, you can access the frame indices of any episode using the episode_data_index. Here, we access
+# episodes, you can access the frame indices of any episode using dataset.meta.episodes. Here, we access
# frame indices associated to the first episode:
episode_index = 0
-from_idx = dataset.episode_data_index["from"][episode_index].item()
-to_idx = dataset.episode_data_index["to"][episode_index].item()
+from_idx = dataset.meta.episodes["dataset_from_index"][episode_index]
+to_idx = dataset.meta.episodes["dataset_to_index"][episode_index]
# Then we grab all the image frames from the first camera:
camera_key = dataset.meta.camera_keys[0]
@@ -132,17 +132,15 @@ print(f"\n{dataset[0][camera_key].shape=}") # (4, c, h, w)
print(f"{dataset[0]['observation.state'].shape=}") # (6, c)
print(f"{dataset[0]['action'].shape=}\n") # (64, c)
-# Finally, our datasets are fully compatible with PyTorch dataloaders and samplers because they are just
-# PyTorch datasets.
-dataloader = torch.utils.data.DataLoader(
- dataset,
- num_workers=0,
- batch_size=32,
- shuffle=True,
-)
-
-for batch in dataloader:
- print(f"{batch[camera_key].shape=}") # (32, 4, c, h, w)
- print(f"{batch['observation.state'].shape=}") # (32, 6, c)
- print(f"{batch['action'].shape=}") # (32, 64, c)
- break
+if __name__ == "__main__":
+ dataloader = torch.utils.data.DataLoader(
+ dataset,
+ num_workers=4,
+ batch_size=32,
+ shuffle=True,
+ )
+ for batch in dataloader:
+ print(f"{batch[camera_key].shape=}") # (32, 4, c, h, w)
+ print(f"{batch['observation.state'].shape=}") # (32, 6, c)
+ print(f"{batch['action'].shape=}") # (32, 64, c)
+ break
diff --git a/examples/dataset/use_dataset_image_transforms.py b/examples/dataset/use_dataset_image_transforms.py
new file mode 100644
index 00000000..c28f2ef0
--- /dev/null
+++ b/examples/dataset/use_dataset_image_transforms.py
@@ -0,0 +1,177 @@
+#!/usr/bin/env python
+
+# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+This example demonstrates how to use image transforms with LeRobot datasets for data augmentation during training.
+
+Image transforms are applied to camera frames to improve model robustness and generalization. They are applied
+at training time only, not during dataset recording, allowing you to experiment with different augmentations
+without re-recording data.
+"""
+
+import torch
+from torchvision.transforms import v2
+from torchvision.transforms.functional import to_pil_image
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.transforms import ImageTransformConfig, ImageTransforms, ImageTransformsConfig
+
+
+def save_image(tensor, filename):
+ """Helper function to save a tensor as an image file."""
+ if tensor.dim() == 3: # [C, H, W]
+ if tensor.max() > 1.0:
+ tensor = tensor / 255.0
+ tensor = torch.clamp(tensor, 0.0, 1.0)
+ pil_image = to_pil_image(tensor)
+ pil_image.save(filename)
+ print(f"Saved: {filename}")
+ else:
+ print(f"Skipped {filename}: unexpected tensor shape {tensor.shape}")
+
+
+def example_1_default_transforms():
+ """Example 1: Use default transform configuration and save original vs transformed images"""
+ print("\n Example 1: Default Transform Configuration with Image Saving")
+
+ repo_id = "pepijn223/record_main_0" # Example dataset
+
+ try:
+ # Load dataset without transforms (original)
+ dataset_original = LeRobotDataset(repo_id=repo_id)
+
+ # Load dataset with transforms enabled
+ transforms_config = ImageTransformsConfig(
+ enable=True, # Enable transforms (disabled by default)
+ max_num_transforms=2, # Apply up to 2 transforms per frame
+ random_order=False, # Apply in standard order
+ )
+ dataset_with_transforms = LeRobotDataset(
+ repo_id=repo_id, image_transforms=ImageTransforms(transforms_config)
+ )
+
+ # Save original and transformed images for comparison
+ if len(dataset_original) > 0:
+ frame_idx = 0 # Use first frame
+ original_sample = dataset_original[frame_idx]
+ transformed_sample = dataset_with_transforms[frame_idx]
+
+ print(f"Saving comparison images (frame {frame_idx}):")
+
+ for cam_key in dataset_original.meta.camera_keys:
+ if cam_key in original_sample and cam_key in transformed_sample:
+ cam_name = cam_key.replace(".", "_").replace("/", "_")
+
+ # Save original and transformed images
+ save_image(original_sample[cam_key], f"{cam_name}_original.png")
+ save_image(transformed_sample[cam_key], f"{cam_name}_transformed.png")
+
+ except Exception as e:
+ print(f"Could not load dataset '{repo_id}': {e}")
+
+
+def example_2_custom_transforms():
+ """Example 2: Create custom transform configuration and save examples"""
+ print("\n Example 2: Custom Transform Configuration")
+
+ repo_id = "pepijn223/record_main_0" # Example dataset
+
+ try:
+ # Create custom transform configuration with strong effects
+ custom_transforms_config = ImageTransformsConfig(
+ enable=True,
+ max_num_transforms=2, # Apply up to 2 transforms per frame
+ random_order=True, # Apply transforms in random order
+ tfs={
+ "brightness": ImageTransformConfig(
+ weight=1.0,
+ type="ColorJitter",
+ kwargs={"brightness": (0.5, 1.5)}, # Strong brightness range
+ ),
+ "contrast": ImageTransformConfig(
+ weight=1.0, # Higher weight = more likely to be selected
+ type="ColorJitter",
+ kwargs={"contrast": (0.6, 1.4)}, # Strong contrast
+ ),
+ "sharpness": ImageTransformConfig(
+ weight=0.5, # Lower weight = less likely to be selected
+ type="SharpnessJitter",
+ kwargs={"sharpness": (0.2, 2.0)}, # Strong sharpness variation
+ ),
+ },
+ )
+
+ dataset_with_custom_transforms = LeRobotDataset(
+ repo_id=repo_id, image_transforms=ImageTransforms(custom_transforms_config)
+ )
+
+ # Save examples with strong transforms
+ if len(dataset_with_custom_transforms) > 0:
+ sample = dataset_with_custom_transforms[0]
+ print("Saving custom transform examples:")
+
+ for cam_key in dataset_with_custom_transforms.meta.camera_keys:
+ if cam_key in sample:
+ cam_name = cam_key.replace(".", "_").replace("/", "_")
+ save_image(sample[cam_key], f"{cam_name}_custom_transforms.png")
+
+ except Exception as e:
+ print(f"Could not load dataset '{repo_id}': {e}")
+
+
+def example_3_torchvision_transforms():
+ """Example 3: Use pure torchvision transforms and save examples"""
+ print("\n Example 3: Pure Torchvision Transforms")
+
+ repo_id = "pepijn223/record_main_0" # Example dataset
+
+ try:
+ # Create torchvision transform pipeline
+ torchvision_transforms = v2.Compose(
+ [
+ v2.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.1),
+ v2.GaussianBlur(kernel_size=3, sigma=(0.1, 2.0)),
+ v2.RandomRotation(degrees=10), # Small rotation
+ ]
+ )
+
+ dataset_with_torchvision = LeRobotDataset(repo_id=repo_id, image_transforms=torchvision_transforms)
+
+ # Save examples with torchvision transforms
+ if len(dataset_with_torchvision) > 0:
+ sample = dataset_with_torchvision[0]
+ print("Saving torchvision transform examples:")
+
+ for cam_key in dataset_with_torchvision.meta.camera_keys:
+ if cam_key in sample:
+ cam_name = cam_key.replace(".", "_").replace("/", "_")
+ save_image(sample[cam_key], f"{cam_name}_torchvision.png")
+
+ except Exception as e:
+ print(f"Could not load dataset '{repo_id}': {e}")
+
+
+def main():
+ """Run all examples"""
+ print("LeRobot Dataset Image Transforms Examples")
+
+ example_1_default_transforms()
+ example_2_custom_transforms()
+ example_3_torchvision_transforms()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/dataset/use_dataset_tools.py b/examples/dataset/use_dataset_tools.py
new file mode 100644
index 00000000..bd7c389b
--- /dev/null
+++ b/examples/dataset/use_dataset_tools.py
@@ -0,0 +1,124 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Example script demonstrating dataset tools utilities.
+
+This script shows how to:
+1. Delete episodes from a dataset
+2. Split a dataset into train/val sets
+3. Add/remove features
+4. Merge datasets
+
+Usage:
+ python examples/dataset/use_dataset_tools.py
+"""
+
+import numpy as np
+
+from lerobot.datasets.dataset_tools import (
+ add_features,
+ delete_episodes,
+ merge_datasets,
+ modify_features,
+ remove_feature,
+ split_dataset,
+)
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+
+def main():
+ dataset = LeRobotDataset("lerobot/pusht")
+
+ print(f"Original dataset: {dataset.meta.total_episodes} episodes, {dataset.meta.total_frames} frames")
+ print(f"Features: {list(dataset.meta.features.keys())}")
+
+ print("\n1. Deleting episodes 0 and 2...")
+ filtered_dataset = delete_episodes(dataset, episode_indices=[0, 2], repo_id="lerobot/pusht_filtered")
+ print(f"Filtered dataset: {filtered_dataset.meta.total_episodes} episodes")
+
+ print("\n2. Splitting dataset into train/val...")
+ splits = split_dataset(
+ dataset,
+ splits={"train": 0.8, "val": 0.2},
+ )
+ print(f"Train split: {splits['train'].meta.total_episodes} episodes")
+ print(f"Val split: {splits['val'].meta.total_episodes} episodes")
+
+ print("\n3. Adding features...")
+
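+    # The example below passes a precomputed per-frame array ("reward") and a
+    # per-frame callable ("success") to add_features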
+ reward_values = np.random.randn(dataset.meta.total_frames).astype(np.float32)
+
+    def compute_success(row_dict, episode_index, frame_index):
+        # Placeholder logic: with episode_length = 10, every frame (frame_index >= 0) is marked successful
+        episode_length = 10
+        return float(frame_index >= episode_length - 10)
+
+ dataset_with_features = add_features(
+ dataset,
+ features={
+ "reward": (
+ reward_values,
+ {"dtype": "float32", "shape": (1,), "names": None},
+ ),
+ "success": (
+ compute_success,
+ {"dtype": "float32", "shape": (1,), "names": None},
+ ),
+ },
+ repo_id="lerobot/pusht_with_features",
+ )
+
+ print(f"New features: {list(dataset_with_features.meta.features.keys())}")
+
+ print("\n4. Removing the success feature...")
+ dataset_cleaned = remove_feature(
+ dataset_with_features, feature_names="success", repo_id="lerobot/pusht_cleaned"
+ )
+ print(f"Features after removal: {list(dataset_cleaned.meta.features.keys())}")
+
+ print("\n5. Using modify_features to add and remove features simultaneously...")
+ dataset_modified = modify_features(
+ dataset_with_features,
+ add_features={
+ "discount": (
+ np.ones(dataset.meta.total_frames, dtype=np.float32) * 0.99,
+ {"dtype": "float32", "shape": (1,), "names": None},
+ ),
+ },
+ remove_features="reward",
+ repo_id="lerobot/pusht_modified",
+ )
+ print(f"Modified features: {list(dataset_modified.meta.features.keys())}")
+
+ print("\n6. Merging train and val splits back together...")
+ merged = merge_datasets([splits["train"], splits["val"]], output_repo_id="lerobot/pusht_merged")
+ print(f"Merged dataset: {merged.meta.total_episodes} episodes")
+
+ print("\n7. Complex workflow example...")
+
+ if len(dataset.meta.camera_keys) > 1:
+ camera_to_remove = dataset.meta.camera_keys[0]
+ print(f"Removing camera: {camera_to_remove}")
+ dataset_no_cam = remove_feature(
+ dataset, feature_names=camera_to_remove, repo_id="pusht_no_first_camera"
+ )
+ print(f"Remaining cameras: {dataset_no_cam.meta.camera_keys}")
+
+ print("\nDone! Check ~/.cache/huggingface/lerobot/ for the created datasets.")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/lekiwi/evaluate.py b/examples/lekiwi/evaluate.py
new file mode 100644
index 00000000..4501008d
--- /dev/null
+++ b/examples/lekiwi/evaluate.py
@@ -0,0 +1,138 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.utils import hw_to_dataset_features
+from lerobot.policies.act.modeling_act import ACTPolicy
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.processor import make_default_processors
+from lerobot.robots.lekiwi import LeKiwiClient, LeKiwiClientConfig
+from lerobot.scripts.lerobot_record import record_loop
+from lerobot.utils.constants import ACTION, OBS_STR
+from lerobot.utils.control_utils import init_keyboard_listener
+from lerobot.utils.utils import log_say
+from lerobot.utils.visualization_utils import init_rerun
+
+NUM_EPISODES = 2
+FPS = 30
+EPISODE_TIME_SEC = 60
+TASK_DESCRIPTION = "My task description"
+HF_MODEL_ID = "/"
+HF_DATASET_ID = "/"
+
+# Create the robot configuration & robot
+robot_config = LeKiwiClientConfig(remote_ip="172.18.134.136", id="lekiwi")
+
+robot = LeKiwiClient(robot_config)
+
+# Create policy
+policy = ACTPolicy.from_pretrained(HF_MODEL_ID)
+
+# Configure the dataset features
+action_features = hw_to_dataset_features(robot.action_features, ACTION)
+obs_features = hw_to_dataset_features(robot.observation_features, OBS_STR)
+dataset_features = {**action_features, **obs_features}
+
+# Create the dataset
+dataset = LeRobotDataset.create(
+ repo_id=HF_DATASET_ID,
+ fps=FPS,
+ features=dataset_features,
+ robot_type=robot.name,
+ use_videos=True,
+ image_writer_threads=4,
+)
+
+# Build Policy Processors
+preprocessor, postprocessor = make_pre_post_processors(
+ policy_cfg=policy,
+ pretrained_path=HF_MODEL_ID,
+ dataset_stats=dataset.meta.stats,
+ # The inference device is automatically set to match the detected hardware, overriding any previous device settings from training to ensure compatibility.
+ preprocessor_overrides={"device_processor": {"device": str(policy.config.device)}},
+)
+
+# Connect the robot
+# To connect you already should have this script running on LeKiwi: `python -m lerobot.robots.lekiwi.lekiwi_host --robot.id=my_awesome_kiwi`
+robot.connect()
+
+# TODO(Steven): Update this example to use pipelines
+teleop_action_processor, robot_action_processor, robot_observation_processor = make_default_processors()
+
+# Initialize the keyboard listener and rerun visualization
+listener, events = init_keyboard_listener()
+init_rerun(session_name="lekiwi_evaluate")
+
+if not robot.is_connected:
+ raise ValueError("Robot is not connected!")
+
+print("Starting evaluate loop...")
+recorded_episodes = 0
+while recorded_episodes < NUM_EPISODES and not events["stop_recording"]:
+ log_say(f"Running inference, recording eval episode {recorded_episodes} of {NUM_EPISODES}")
+
+ # Main record loop
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ policy=policy,
+ preprocessor=preprocessor, # Pass the pre and post policy processors
+ postprocessor=postprocessor,
+ dataset=dataset,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=teleop_action_processor,
+ robot_action_processor=robot_action_processor,
+ robot_observation_processor=robot_observation_processor,
+ )
+
+ # Reset the environment if not stopping or re-recording
+ if not events["stop_recording"] and (
+ (recorded_episodes < NUM_EPISODES - 1) or events["rerecord_episode"]
+ ):
+ log_say("Reset the environment")
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=teleop_action_processor,
+ robot_action_processor=robot_action_processor,
+ robot_observation_processor=robot_observation_processor,
+ )
+
+ if events["rerecord_episode"]:
+ log_say("Re-record episode")
+ events["rerecord_episode"] = False
+ events["exit_early"] = False
+ dataset.clear_episode_buffer()
+ continue
+
+ # Save episode
+ dataset.save_episode()
+ recorded_episodes += 1
+
+# Clean up
+log_say("Stop recording")
+robot.disconnect()
+listener.stop()
+
+dataset.finalize()
+dataset.push_to_hub()
diff --git a/examples/lekiwi/record.py b/examples/lekiwi/record.py
new file mode 100644
index 00000000..491e1c38
--- /dev/null
+++ b/examples/lekiwi/record.py
@@ -0,0 +1,135 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.utils import hw_to_dataset_features
+from lerobot.processor import make_default_processors
+from lerobot.robots.lekiwi.config_lekiwi import LeKiwiClientConfig
+from lerobot.robots.lekiwi.lekiwi_client import LeKiwiClient
+from lerobot.scripts.lerobot_record import record_loop
+from lerobot.teleoperators.keyboard import KeyboardTeleop, KeyboardTeleopConfig
+from lerobot.teleoperators.so100_leader import SO100Leader, SO100LeaderConfig
+from lerobot.utils.constants import ACTION, OBS_STR
+from lerobot.utils.control_utils import init_keyboard_listener
+from lerobot.utils.utils import log_say
+from lerobot.utils.visualization_utils import init_rerun
+
+NUM_EPISODES = 2
+FPS = 30
+EPISODE_TIME_SEC = 30
+RESET_TIME_SEC = 10
+TASK_DESCRIPTION = "My task description"
+HF_REPO_ID = "/"
+
+# Create the robot and teleoperator configurations
+robot_config = LeKiwiClientConfig(remote_ip="172.18.134.136", id="lekiwi")
+leader_arm_config = SO100LeaderConfig(port="/dev/tty.usbmodem585A0077581", id="my_awesome_leader_arm")
+keyboard_config = KeyboardTeleopConfig()
+
+# Initialize the robot and teleoperator
+robot = LeKiwiClient(robot_config)
+leader_arm = SO100Leader(leader_arm_config)
+keyboard = KeyboardTeleop(keyboard_config)
+
+# TODO(Steven): Update this example to use pipelines
+teleop_action_processor, robot_action_processor, robot_observation_processor = make_default_processors()
+
+# Configure the dataset features
+action_features = hw_to_dataset_features(robot.action_features, ACTION)
+obs_features = hw_to_dataset_features(robot.observation_features, OBS_STR)
+dataset_features = {**action_features, **obs_features}
+
+# Create the dataset
+dataset = LeRobotDataset.create(
+ repo_id=HF_REPO_ID,
+ fps=FPS,
+ features=dataset_features,
+ robot_type=robot.name,
+ use_videos=True,
+ image_writer_threads=4,
+)
+
+# Connect the robot and teleoperator
+# To connect you already should have this script running on LeKiwi: `python -m lerobot.robots.lekiwi.lekiwi_host --robot.id=my_awesome_kiwi`
+robot.connect()
+leader_arm.connect()
+keyboard.connect()
+
+# Initialize the keyboard listener and rerun visualization
+listener, events = init_keyboard_listener()
+init_rerun(session_name="lekiwi_record")
+
+if not robot.is_connected or not leader_arm.is_connected or not keyboard.is_connected:
+ raise ValueError("Robot or teleop is not connected!")
+
+print("Starting record loop...")
+recorded_episodes = 0
+while recorded_episodes < NUM_EPISODES and not events["stop_recording"]:
+ log_say(f"Recording episode {recorded_episodes}")
+
+ # Main record loop
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ dataset=dataset,
+ teleop=[leader_arm, keyboard],
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=teleop_action_processor,
+ robot_action_processor=robot_action_processor,
+ robot_observation_processor=robot_observation_processor,
+ )
+
+ # Reset the environment if not stopping or re-recording
+ if not events["stop_recording"] and (
+ (recorded_episodes < NUM_EPISODES - 1) or events["rerecord_episode"]
+ ):
+ log_say("Reset the environment")
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ teleop=[leader_arm, keyboard],
+ control_time_s=RESET_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=teleop_action_processor,
+ robot_action_processor=robot_action_processor,
+ robot_observation_processor=robot_observation_processor,
+ )
+
+ if events["rerecord_episode"]:
+ log_say("Re-record episode")
+ events["rerecord_episode"] = False
+ events["exit_early"] = False
+ dataset.clear_episode_buffer()
+ continue
+
+ # Save episode
+ dataset.save_episode()
+ recorded_episodes += 1
+
+# Clean up
+log_say("Stop recording")
+robot.disconnect()
+leader_arm.disconnect()
+keyboard.disconnect()
+listener.stop()
+
+dataset.finalize()
+dataset.push_to_hub()
diff --git a/examples/lekiwi/replay.py b/examples/lekiwi/replay.py
new file mode 100644
index 00000000..3ae91528
--- /dev/null
+++ b/examples/lekiwi/replay.py
@@ -0,0 +1,61 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.robots.lekiwi.config_lekiwi import LeKiwiClientConfig
+from lerobot.robots.lekiwi.lekiwi_client import LeKiwiClient
+from lerobot.utils.constants import ACTION
+from lerobot.utils.robot_utils import busy_wait
+from lerobot.utils.utils import log_say
+
+EPISODE_IDX = 0
+
+# Initialize the robot config
+robot_config = LeKiwiClientConfig(remote_ip="172.18.134.136", id="lekiwi")
+
+# Initialize the robot
+robot = LeKiwiClient(robot_config)
+
+# Fetch the dataset to replay
+dataset = LeRobotDataset("/", episodes=[EPISODE_IDX])
+# Filter dataset to only include frames from the specified episode since episodes are chunked in dataset V3.0
+episode_frames = dataset.hf_dataset.filter(lambda x: x["episode_index"] == EPISODE_IDX)
+actions = episode_frames.select_columns(ACTION)
+
+# Connect to the robot
+robot.connect()
+
+if not robot.is_connected:
+ raise ValueError("Robot is not connected!")
+
+print("Starting replay loop...")
+log_say(f"Replaying episode {EPISODE_IDX}")
+for idx in range(len(episode_frames)):
+ t0 = time.perf_counter()
+
+ # Get recorded action from dataset
+ action = {
+ name: float(actions[idx][ACTION][i]) for i, name in enumerate(dataset.features[ACTION]["names"])
+ }
+
+ # Send action to robot
+ _ = robot.send_action(action)
+
+ busy_wait(max(1.0 / dataset.fps - (time.perf_counter() - t0), 0.0))
+
+robot.disconnect()
diff --git a/examples/lekiwi/teleoperate.py b/examples/lekiwi/teleoperate.py
new file mode 100644
index 00000000..6b430df4
--- /dev/null
+++ b/examples/lekiwi/teleoperate.py
@@ -0,0 +1,72 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from lerobot.robots.lekiwi import LeKiwiClient, LeKiwiClientConfig
+from lerobot.teleoperators.keyboard.teleop_keyboard import KeyboardTeleop, KeyboardTeleopConfig
+from lerobot.teleoperators.so100_leader import SO100Leader, SO100LeaderConfig
+from lerobot.utils.robot_utils import busy_wait
+from lerobot.utils.visualization_utils import init_rerun, log_rerun_data
+
+FPS = 30
+
+# Create the robot and teleoperator configurations
+robot_config = LeKiwiClientConfig(remote_ip="172.18.134.136", id="my_lekiwi")
+teleop_arm_config = SO100LeaderConfig(port="/dev/tty.usbmodem585A0077581", id="my_awesome_leader_arm")
+keyboard_config = KeyboardTeleopConfig(id="my_laptop_keyboard")
+
+# Initialize the robot and teleoperator
+robot = LeKiwiClient(robot_config)
+leader_arm = SO100Leader(teleop_arm_config)
+keyboard = KeyboardTeleop(keyboard_config)
+
+# Connect to the robot and teleoperator
+# To connect you already should have this script running on LeKiwi: `python -m lerobot.robots.lekiwi.lekiwi_host --robot.id=my_awesome_kiwi`
+robot.connect()
+leader_arm.connect()
+keyboard.connect()
+
+# Init rerun viewer
+init_rerun(session_name="lekiwi_teleop")
+
+if not robot.is_connected or not leader_arm.is_connected or not keyboard.is_connected:
+ raise ValueError("Robot or teleop is not connected!")
+
+print("Starting teleop loop...")
+while True:
+ t0 = time.perf_counter()
+
+ # Get robot observation
+ observation = robot.get_observation()
+
+ # Get teleop action
+ # Arm
+ arm_action = leader_arm.get_action()
+ arm_action = {f"arm_{k}": v for k, v in arm_action.items()}
+ # Keyboard
+ keyboard_keys = keyboard.get_action()
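+    # Map the raw key presses to a base action for LeKiwi's mobile base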
+ base_action = robot._from_keyboard_to_base_action(keyboard_keys)
+
+ action = {**arm_action, **base_action} if len(base_action) > 0 else arm_action
+
+ # Send action to robot
+ _ = robot.send_action(action)
+
+ # Visualize
+ log_rerun_data(observation=observation, action=action)
+
+ busy_wait(max(1.0 / FPS - (time.perf_counter() - t0), 0.0))
diff --git a/examples/phone_to_so100/evaluate.py b/examples/phone_to_so100/evaluate.py
new file mode 100644
index 00000000..ff8dbddd
--- /dev/null
+++ b/examples/phone_to_so100/evaluate.py
@@ -0,0 +1,199 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.configs.types import FeatureType, PolicyFeature
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.pipeline_features import aggregate_pipeline_dataset_features, create_initial_features
+from lerobot.datasets.utils import combine_feature_dicts
+from lerobot.model.kinematics import RobotKinematics
+from lerobot.policies.act.modeling_act import ACTPolicy
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.processor import (
+ RobotAction,
+ RobotObservation,
+ RobotProcessorPipeline,
+ make_default_teleop_action_processor,
+)
+from lerobot.processor.converters import (
+ observation_to_transition,
+ robot_action_observation_to_transition,
+ transition_to_observation,
+ transition_to_robot_action,
+)
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.robot_kinematic_processor import (
+ ForwardKinematicsJointsToEE,
+ InverseKinematicsEEToJoints,
+)
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.scripts.lerobot_record import record_loop
+from lerobot.utils.control_utils import init_keyboard_listener
+from lerobot.utils.utils import log_say
+from lerobot.utils.visualization_utils import init_rerun
+
+NUM_EPISODES = 5
+FPS = 30
+EPISODE_TIME_SEC = 60
+TASK_DESCRIPTION = "My task description"
+HF_MODEL_ID = "/"
+HF_DATASET_ID = "/"
+
+# Create the robot configuration & robot
+camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
+robot_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem58760434471",
+ id="my_awesome_follower_arm",
+ cameras=camera_config,
+ use_degrees=True,
+)
+
+robot = SO100Follower(robot_config)
+
+# Create policy
+policy = ACTPolicy.from_pretrained(HF_MODEL_ID)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(robot.bus.motors.keys()),
+)
+
+# Build pipeline to convert EE action to joints action
+robot_ee_to_joints_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ steps=[
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver,
+ motor_names=list(robot.bus.motors.keys()),
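+            # Use the robot's current joint positions as the initial guess for the IK solve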
+ initial_guess_current_joints=True,
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Build pipeline to convert joints observation to EE observation
+robot_joints_to_ee_pose_processor = RobotProcessorPipeline[RobotObservation, RobotObservation](
+ steps=[
+ ForwardKinematicsJointsToEE(kinematics=kinematics_solver, motor_names=list(robot.bus.motors.keys()))
+ ],
+ to_transition=observation_to_transition,
+ to_output=transition_to_observation,
+)
+
+# Create the dataset
+dataset = LeRobotDataset.create(
+ repo_id=HF_DATASET_ID,
+ fps=FPS,
+ features=combine_feature_dicts(
+ aggregate_pipeline_dataset_features(
+ pipeline=robot_joints_to_ee_pose_processor,
+ initial_features=create_initial_features(observation=robot.observation_features),
+ use_videos=True,
+ ),
+        # For now, the user should explicitly list the feature keys that were used during recording
+        # Alternatively, the user can pass the processor step that produces the right features
+ aggregate_pipeline_dataset_features(
+ pipeline=make_default_teleop_action_processor(),
+ initial_features=create_initial_features(
+ action={
+ f"ee.{k}": PolicyFeature(type=FeatureType.ACTION, shape=(1,))
+ for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]
+ }
+ ),
+ use_videos=True,
+ ),
+ ),
+ robot_type=robot.name,
+ use_videos=True,
+ image_writer_threads=4,
+)
+
+# Build Policy Processors
+preprocessor, postprocessor = make_pre_post_processors(
+ policy_cfg=policy,
+ pretrained_path=HF_MODEL_ID,
+ dataset_stats=dataset.meta.stats,
+ # The inference device is automatically set to match the detected hardware, overriding any previous device settings from training to ensure compatibility.
+ preprocessor_overrides={"device_processor": {"device": str(policy.config.device)}},
+)
+
+# Connect the robot
+robot.connect()
+
+# Initialize the keyboard listener and rerun visualization
+listener, events = init_keyboard_listener()
+init_rerun(session_name="phone_so100_evaluate")
+
+if not robot.is_connected:
+ raise ValueError("Robot is not connected!")
+
+print("Starting evaluate loop...")
+episode_idx = 0
+while episode_idx < NUM_EPISODES and not events["stop_recording"]:
+ log_say(f"Running inference, recording eval episode {episode_idx + 1} of {NUM_EPISODES}")
+
+ # Main record loop
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ policy=policy,
+ preprocessor=preprocessor, # Pass the pre and post policy processors
+ postprocessor=postprocessor,
+ dataset=dataset,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=make_default_teleop_action_processor(),
+ robot_action_processor=robot_ee_to_joints_processor,
+ robot_observation_processor=robot_joints_to_ee_pose_processor,
+ )
+
+ # Reset the environment if not stopping or re-recording
+ if not events["stop_recording"] and ((episode_idx < NUM_EPISODES - 1) or events["rerecord_episode"]):
+ log_say("Reset the environment")
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=make_default_teleop_action_processor(),
+ robot_action_processor=robot_ee_to_joints_processor,
+ robot_observation_processor=robot_joints_to_ee_pose_processor,
+ )
+
+ if events["rerecord_episode"]:
+ log_say("Re-record episode")
+ events["rerecord_episode"] = False
+ events["exit_early"] = False
+ dataset.clear_episode_buffer()
+ continue
+
+ # Save episode
+ dataset.save_episode()
+ episode_idx += 1
+
+# Clean up
+log_say("Stop recording")
+robot.disconnect()
+listener.stop()
+
+dataset.finalize()
+dataset.push_to_hub()
diff --git a/examples/phone_to_so100/record.py b/examples/phone_to_so100/record.py
new file mode 100644
index 00000000..880f9c9b
--- /dev/null
+++ b/examples/phone_to_so100/record.py
@@ -0,0 +1,205 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.pipeline_features import aggregate_pipeline_dataset_features, create_initial_features
+from lerobot.datasets.utils import combine_feature_dicts
+from lerobot.model.kinematics import RobotKinematics
+from lerobot.processor import RobotAction, RobotObservation, RobotProcessorPipeline
+from lerobot.processor.converters import (
+ observation_to_transition,
+ robot_action_observation_to_transition,
+ transition_to_observation,
+ transition_to_robot_action,
+)
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.robot_kinematic_processor import (
+ EEBoundsAndSafety,
+ EEReferenceAndDelta,
+ ForwardKinematicsJointsToEE,
+ GripperVelocityToJoint,
+ InverseKinematicsEEToJoints,
+)
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.scripts.lerobot_record import record_loop
+from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
+from lerobot.teleoperators.phone.phone_processor import MapPhoneActionToRobotAction
+from lerobot.teleoperators.phone.teleop_phone import Phone
+from lerobot.utils.control_utils import init_keyboard_listener
+from lerobot.utils.utils import log_say
+from lerobot.utils.visualization_utils import init_rerun
+
+NUM_EPISODES = 2
+FPS = 30
+EPISODE_TIME_SEC = 60
+RESET_TIME_SEC = 30
+TASK_DESCRIPTION = "My task description"
+HF_REPO_ID = "/"
+
+# Create the robot and teleoperator configurations
+camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
+robot_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem5A460814411",
+ id="my_awesome_follower_arm",
+ cameras=camera_config,
+ use_degrees=True,
+)
+teleop_config = PhoneConfig(phone_os=PhoneOS.IOS) # or PhoneOS.ANDROID
+
+# Initialize the robot and teleoperator
+robot = SO100Follower(robot_config)
+phone = Phone(teleop_config)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(robot.bus.motors.keys()),
+)
+
+# Build pipeline to convert phone action to EE action
+phone_to_robot_ee_pose_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ steps=[
+ MapPhoneActionToRobotAction(platform=teleop_config.phone_os),
+ EEReferenceAndDelta(
+ kinematics=kinematics_solver,
+ end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5},
+ motor_names=list(robot.bus.motors.keys()),
+ use_latched_reference=True,
+ ),
+ EEBoundsAndSafety(
+ end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
+ max_ee_step_m=0.20,
+ ),
+ GripperVelocityToJoint(speed_factor=20.0),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Build pipeline to convert EE action to joints action
+robot_ee_to_joints_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ steps=[
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver,
+ motor_names=list(robot.bus.motors.keys()),
+ initial_guess_current_joints=True,
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Build pipeline to convert joint observation to EE observation
+robot_joints_to_ee_pose = RobotProcessorPipeline[RobotObservation, RobotObservation](
+ steps=[
+ ForwardKinematicsJointsToEE(kinematics=kinematics_solver, motor_names=list(robot.bus.motors.keys()))
+ ],
+ to_transition=observation_to_transition,
+ to_output=transition_to_observation,
+)
+
+# Create the dataset
+dataset = LeRobotDataset.create(
+ repo_id=HF_REPO_ID,
+ fps=FPS,
+ features=combine_feature_dicts(
+ # Run the feature contract of the pipelines
+        # This tells you how the features will look after the pipeline steps
+ aggregate_pipeline_dataset_features(
+ pipeline=phone_to_robot_ee_pose_processor,
+ initial_features=create_initial_features(action=phone.action_features),
+ use_videos=True,
+ ),
+ aggregate_pipeline_dataset_features(
+ pipeline=robot_joints_to_ee_pose,
+ initial_features=create_initial_features(observation=robot.observation_features),
+ use_videos=True,
+ ),
+ ),
+ robot_type=robot.name,
+ use_videos=True,
+ image_writer_threads=4,
+)
+
+# Connect the robot and teleoperator
+robot.connect()
+phone.connect()
+
+# Initialize the keyboard listener and rerun visualization
+listener, events = init_keyboard_listener()
+init_rerun(session_name="phone_so100_record")
+
+if not robot.is_connected or not phone.is_connected:
+ raise ValueError("Robot or teleop is not connected!")
+
+
+print("Starting record loop. Move your phone to teleoperate the robot...")
+episode_idx = 0
+while episode_idx < NUM_EPISODES and not events["stop_recording"]:
+ log_say(f"Recording episode {episode_idx + 1} of {NUM_EPISODES}")
+
+ # Main record loop
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ teleop=phone,
+ dataset=dataset,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=phone_to_robot_ee_pose_processor,
+ robot_action_processor=robot_ee_to_joints_processor,
+ robot_observation_processor=robot_joints_to_ee_pose,
+ )
+
+ # Reset the environment if not stopping or re-recording
+ if not events["stop_recording"] and (episode_idx < NUM_EPISODES - 1 or events["rerecord_episode"]):
+ log_say("Reset the environment")
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ teleop=phone,
+ control_time_s=RESET_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=phone_to_robot_ee_pose_processor,
+ robot_action_processor=robot_ee_to_joints_processor,
+ robot_observation_processor=robot_joints_to_ee_pose,
+ )
+
+ if events["rerecord_episode"]:
+ log_say("Re-recording episode")
+ events["rerecord_episode"] = False
+ events["exit_early"] = False
+ dataset.clear_episode_buffer()
+ continue
+
+ # Save episode
+ dataset.save_episode()
+ episode_idx += 1
+
+# Clean up
+log_say("Stop recording")
+robot.disconnect()
+phone.disconnect()
+listener.stop()
+
+dataset.finalize()
+dataset.push_to_hub()
diff --git a/examples/phone_to_so100/replay.py b/examples/phone_to_so100/replay.py
new file mode 100644
index 00000000..f1181143
--- /dev/null
+++ b/examples/phone_to_so100/replay.py
@@ -0,0 +1,100 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.model.kinematics import RobotKinematics
+from lerobot.processor import RobotAction, RobotObservation, RobotProcessorPipeline
+from lerobot.processor.converters import (
+ robot_action_observation_to_transition,
+ transition_to_robot_action,
+)
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.robot_kinematic_processor import (
+ InverseKinematicsEEToJoints,
+)
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.utils.constants import ACTION
+from lerobot.utils.robot_utils import busy_wait
+from lerobot.utils.utils import log_say
+
+EPISODE_IDX = 0
+HF_REPO_ID = "/"
+
+# Initialize the robot config
+robot_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem5A460814411", id="my_awesome_follower_arm", use_degrees=True
+)
+
+# Initialize the robot
+robot = SO100Follower(robot_config)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(robot.bus.motors.keys()),
+)
+
+# Build pipeline to convert EE action to joints action
+robot_ee_to_joints_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ steps=[
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver,
+ motor_names=list(robot.bus.motors.keys()),
+ initial_guess_current_joints=False, # Because replay is open loop
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Fetch the dataset to replay
+dataset = LeRobotDataset(HF_REPO_ID, episodes=[EPISODE_IDX])
+# Filter dataset to only include frames from the specified episode since episodes are chunked in dataset V3.0
+episode_frames = dataset.hf_dataset.filter(lambda x: x["episode_index"] == EPISODE_IDX)
+actions = episode_frames.select_columns(ACTION)
+
+# Connect to the robot
+robot.connect()
+
+if not robot.is_connected:
+ raise ValueError("Robot is not connected!")
+
+print("Starting replay loop...")
+log_say(f"Replaying episode {EPISODE_IDX}")
+for idx in range(len(episode_frames)):
+ t0 = time.perf_counter()
+
+ # Get recorded action from dataset
+ ee_action = {
+ name: float(actions[idx][ACTION][i]) for i, name in enumerate(dataset.features[ACTION]["names"])
+ }
+
+ # Get robot observation
+ robot_obs = robot.get_observation()
+
+ # Dataset EE -> robot joints
+ joint_action = robot_ee_to_joints_processor((ee_action, robot_obs))
+
+ # Send action to robot
+ _ = robot.send_action(joint_action)
+
+ busy_wait(1.0 / dataset.fps - (time.perf_counter() - t0))
+
+# Clean up
+robot.disconnect()
diff --git a/examples/phone_to_so100/teleoperate.py b/examples/phone_to_so100/teleoperate.py
new file mode 100644
index 00000000..783dce24
--- /dev/null
+++ b/examples/phone_to_so100/teleoperate.py
@@ -0,0 +1,113 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from lerobot.model.kinematics import RobotKinematics
+from lerobot.processor import RobotAction, RobotObservation, RobotProcessorPipeline
+from lerobot.processor.converters import (
+ robot_action_observation_to_transition,
+ transition_to_robot_action,
+)
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.robot_kinematic_processor import (
+ EEBoundsAndSafety,
+ EEReferenceAndDelta,
+ GripperVelocityToJoint,
+ InverseKinematicsEEToJoints,
+)
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
+from lerobot.teleoperators.phone.phone_processor import MapPhoneActionToRobotAction
+from lerobot.teleoperators.phone.teleop_phone import Phone
+from lerobot.utils.robot_utils import busy_wait
+from lerobot.utils.visualization_utils import init_rerun, log_rerun_data
+
+FPS = 30
+
+# Initialize the robot and teleoperator
+robot_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem5A460814411", id="my_awesome_follower_arm", use_degrees=True
+)
+teleop_config = PhoneConfig(phone_os=PhoneOS.IOS) # or PhoneOS.ANDROID
+
+# Initialize the robot and teleoperator
+robot = SO100Follower(robot_config)
+teleop_device = Phone(teleop_config)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(robot.bus.motors.keys()),
+)
+
+# Build pipeline to convert phone action to ee pose action to joint action
+phone_to_robot_joints_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ steps=[
+ MapPhoneActionToRobotAction(platform=teleop_config.phone_os),
+ EEReferenceAndDelta(
+ kinematics=kinematics_solver,
+ end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5},
+ motor_names=list(robot.bus.motors.keys()),
+ use_latched_reference=True,
+ ),
+ EEBoundsAndSafety(
+ end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
+ max_ee_step_m=0.10,
+ ),
+ GripperVelocityToJoint(
+ speed_factor=20.0,
+ ),
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver,
+ motor_names=list(robot.bus.motors.keys()),
+ initial_guess_current_joints=True,
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Connect to the robot and teleoperator
+robot.connect()
+teleop_device.connect()
+
+# Init rerun viewer
+init_rerun(session_name="phone_so100_teleop")
+
+if not robot.is_connected or not teleop_device.is_connected:
+ raise ValueError("Robot or teleop is not connected!")
+
+print("Starting teleop loop. Move your phone to teleoperate the robot...")
+while True:
+ t0 = time.perf_counter()
+
+ # Get robot observation
+ robot_obs = robot.get_observation()
+
+ # Get teleop action
+ phone_obs = teleop_device.get_action()
+
+ # Phone -> EE pose -> Joints transition
+ joint_action = phone_to_robot_joints_processor((phone_obs, robot_obs))
+
+ # Send action to robot
+ _ = robot.send_action(joint_action)
+
+ # Visualize
+ log_rerun_data(observation=phone_obs, action=joint_action)
+
+ busy_wait(max(1.0 / FPS - (time.perf_counter() - t0), 0.0))
diff --git a/examples/port_datasets/display_error_files.py b/examples/port_datasets/display_error_files.py
new file mode 100644
index 00000000..fffab5ff
--- /dev/null
+++ b/examples/port_datasets/display_error_files.py
@@ -0,0 +1,85 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import json
+from pathlib import Path
+
+
+def find_missing_workers(completions_dir, world_size):
+ """Find workers that are not completed and returns their indices."""
+ full = list(range(world_size))
+
+ completed = []
+ for path in completions_dir.glob("*"):
+ if path.name in [".", ".."]:
+ continue
+ index = path.name.lstrip("0")
+ index = 0 if index == "" else int(index)
+ completed.append(index)
+
+ missing_workers = set(full) - set(completed)
+ return missing_workers
+
+
+def find_output_files(slurm_dir, worker_indices):
+ """Find output files associated to worker indices, and return tuples
+ of (worker index, output file path)
+ """
+ out_files = []
+ for path in slurm_dir.glob("*.out"):
+ _, worker_id = path.name.replace(".out", "").split("_")
+ worker_id = int(worker_id)
+ if worker_id in worker_indices:
+ out_files.append((worker_id, path))
+ return out_files
+
+
+def display_error_files(logs_dir, job_name):
+ executor_path = Path(logs_dir) / job_name / "executor.json"
+ completions_dir = Path(logs_dir) / job_name / "completions"
+
+ with open(executor_path) as f:
+ executor = json.load(f)
+
+ missing_workers = find_missing_workers(completions_dir, executor["world_size"])
+
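+    # Print the indices of workers that never completed, highest index first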
+ for missing in sorted(missing_workers)[::-1]:
+ print(missing)
+
+
+def main():
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument(
+ "--logs-dir",
+ type=str,
+ help="Path to logs directory for `datatrove`.",
+ )
+ parser.add_argument(
+ "--job-name",
+ type=str,
+ default="port_droid",
+ help="Job name used in slurm, and name of the directory created inside the provided logs directory.",
+ )
+
+ args = parser.parse_args()
+
+ display_error_files(**vars(args))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/port_datasets/port_droid.py b/examples/port_datasets/port_droid.py
new file mode 100644
index 00000000..a1fb5091
--- /dev/null
+++ b/examples/port_datasets/port_droid.py
@@ -0,0 +1,432 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import logging
+import time
+from pathlib import Path
+
+import numpy as np
+import tensorflow_datasets as tfds
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
+from lerobot.utils.utils import get_elapsed_time_in_days_hours_minutes_seconds
+
+DROID_SHARDS = 2048
+DROID_FPS = 15
+DROID_ROBOT_TYPE = "Franka"
+
+# Dataset schema slightly adapted from: https://droid-dataset.github.io/droid/the-droid-dataset.html#-dataset-schema
+DROID_FEATURES = {
+ # true on first step of the episode
+ "is_first": {
+ "dtype": "bool",
+ "shape": (1,),
+ "names": None,
+ },
+ # true on last step of the episode
+ "is_last": {
+ "dtype": "bool",
+ "shape": (1,),
+ "names": None,
+ },
+ # true on last step of the episode if it is a terminal step, True for demos
+ "is_terminal": {
+ "dtype": "bool",
+ "shape": (1,),
+ "names": None,
+ },
+ # language_instruction is also stored as "task" to follow LeRobot standard
+ "language_instruction": {
+ "dtype": "string",
+ "shape": (1,),
+ "names": None,
+ },
+ "language_instruction_2": {
+ "dtype": "string",
+ "shape": (1,),
+ "names": None,
+ },
+ "language_instruction_3": {
+ "dtype": "string",
+ "shape": (1,),
+ "names": None,
+ },
+ "observation.state.gripper_position": {
+ "dtype": "float32",
+ "shape": (1,),
+ "names": {
+ "axes": ["gripper"],
+ },
+ },
+ "observation.state.cartesian_position": {
+ "dtype": "float32",
+ "shape": (6,),
+ "names": {
+ "axes": ["x", "y", "z", "roll", "pitch", "yaw"],
+ },
+ },
+ "observation.state.joint_position": {
+ "dtype": "float32",
+ "shape": (7,),
+ "names": {
+ "axes": ["joint_0", "joint_1", "joint_2", "joint_3", "joint_4", "joint_5", "joint_6"],
+ },
+ },
+ # Add this new feature to follow LeRobot standard of using joint position + gripper
+ "observation.state": {
+ "dtype": "float32",
+ "shape": (8,),
+ "names": {
+ "axes": ["joint_0", "joint_1", "joint_2", "joint_3", "joint_4", "joint_5", "joint_6", "gripper"],
+ },
+ },
+ # Initially called wrist_image_left
+ "observation.images.wrist_left": {
+ "dtype": "video",
+ "shape": (180, 320, 3),
+ "names": [
+ "height",
+ "width",
+ "channels",
+ ],
+ },
+ # Initially called exterior_image_1_left
+ "observation.images.exterior_1_left": {
+ "dtype": "video",
+ "shape": (180, 320, 3),
+ "names": [
+ "height",
+ "width",
+ "channels",
+ ],
+ },
+ # Initially called exterior_image_2_left
+ "observation.images.exterior_2_left": {
+ "dtype": "video",
+ "shape": (180, 320, 3),
+ "names": [
+ "height",
+ "width",
+ "channels",
+ ],
+ },
+ "action.gripper_position": {
+ "dtype": "float32",
+ "shape": (1,),
+ "names": {
+ "axes": ["gripper"],
+ },
+ },
+ "action.gripper_velocity": {
+ "dtype": "float32",
+ "shape": (1,),
+ "names": {
+ "axes": ["gripper"],
+ },
+ },
+ "action.cartesian_position": {
+ "dtype": "float32",
+ "shape": (6,),
+ "names": {
+ "axes": ["x", "y", "z", "roll", "pitch", "yaw"],
+ },
+ },
+ "action.cartesian_velocity": {
+ "dtype": "float32",
+ "shape": (6,),
+ "names": {
+ "axes": ["x", "y", "z", "roll", "pitch", "yaw"],
+ },
+ },
+ "action.joint_position": {
+ "dtype": "float32",
+ "shape": (7,),
+ "names": {
+ "axes": ["joint_0", "joint_1", "joint_2", "joint_3", "joint_4", "joint_5", "joint_6"],
+ },
+ },
+ "action.joint_velocity": {
+ "dtype": "float32",
+ "shape": (7,),
+ "names": {
+ "axes": ["joint_0", "joint_1", "joint_2", "joint_3", "joint_4", "joint_5", "joint_6"],
+ },
+ },
+ # This feature was called "action" in RLDS dataset and consists of [6x joint velocities, 1x gripper position]
+ "action.original": {
+ "dtype": "float32",
+ "shape": (7,),
+ "names": {
+ "axes": ["x", "y", "z", "roll", "pitch", "yaw", "gripper"],
+ },
+ },
+ # Add this new feature to follow LeRobot standard of using joint position + gripper
+ "action": {
+ "dtype": "float32",
+ "shape": (8,),
+ "names": {
+ "axes": ["joint_0", "joint_1", "joint_2", "joint_3", "joint_4", "joint_5", "joint_6", "gripper"],
+ },
+ },
+ "discount": {
+ "dtype": "float32",
+ "shape": (1,),
+ "names": None,
+ },
+ "reward": {
+ "dtype": "float32",
+ "shape": (1,),
+ "names": None,
+ },
+ # Meta data that are the same for all frames in the episode
+ "task_category": {
+ "dtype": "string",
+ "shape": (1,),
+ "names": None,
+ },
+ "building": {
+ "dtype": "string",
+ "shape": (1,),
+ "names": None,
+ },
+ "collector_id": {
+ "dtype": "string",
+ "shape": (1,),
+ "names": None,
+ },
+ "date": {
+ "dtype": "string",
+ "shape": (1,),
+ "names": None,
+ },
+ "camera_extrinsics.wrist_left": {
+ "dtype": "float32",
+ "shape": (6,),
+ "names": {
+ "axes": ["x", "y", "z", "roll", "pitch", "yaw"],
+ },
+ },
+ "camera_extrinsics.exterior_1_left": {
+ "dtype": "float32",
+ "shape": (6,),
+ "names": {
+ "axes": ["x", "y", "z", "roll", "pitch", "yaw"],
+ },
+ },
+ "camera_extrinsics.exterior_2_left": {
+ "dtype": "float32",
+ "shape": (6,),
+ "names": {
+ "axes": ["x", "y", "z", "roll", "pitch", "yaw"],
+ },
+ },
+ "is_episode_successful": {
+ "dtype": "bool",
+ "shape": (1,),
+ "names": None,
+ },
+}
+
+
+def is_episode_successful(tf_episode_metadata):
+ # Adapted from: https://github.com/droid-dataset/droid_policy_learning/blob/dd1020eb20d981f90b5ff07dc80d80d5c0cb108b/robomimic/utils/rlds_utils.py#L8
+ return "/success/" in tf_episode_metadata["file_path"].numpy().decode()
+
+
+def generate_lerobot_frames(tf_episode):
+ m = tf_episode["episode_metadata"]
+ frame_meta = {
+ "task_category": m["building"].numpy().decode(),
+ "building": m["building"].numpy().decode(),
+ "collector_id": m["collector_id"].numpy().decode(),
+ "date": m["date"].numpy().decode(),
+ "camera_extrinsics.wrist_left": m["extrinsics_wrist_cam"].numpy(),
+ "camera_extrinsics.exterior_1_left": m["extrinsics_exterior_cam_1"].numpy(),
+ "camera_extrinsics.exterior_2_left": m["extrinsics_exterior_cam_2"].numpy(),
+ "is_episode_successful": np.array([is_episode_successful(m)]),
+ }
+ for f in tf_episode["steps"]:
+ # Dataset schema slightly adapted from: https://droid-dataset.github.io/droid/the-droid-dataset.html#-dataset-schema
+ frame = {
+ "is_first": np.array([f["is_first"].numpy()]),
+ "is_last": np.array([f["is_last"].numpy()]),
+ "is_terminal": np.array([f["is_terminal"].numpy()]),
+ "language_instruction": f["language_instruction"].numpy().decode(),
+ "language_instruction_2": f["language_instruction_2"].numpy().decode(),
+ "language_instruction_3": f["language_instruction_3"].numpy().decode(),
+ "observation.state.gripper_position": f["observation"]["gripper_position"].numpy(),
+ "observation.state.cartesian_position": f["observation"]["cartesian_position"].numpy(),
+ "observation.state.joint_position": f["observation"]["joint_position"].numpy(),
+ "observation.images.wrist_left": f["observation"]["wrist_image_left"].numpy(),
+ "observation.images.exterior_1_left": f["observation"]["exterior_image_1_left"].numpy(),
+ "observation.images.exterior_2_left": f["observation"]["exterior_image_2_left"].numpy(),
+ "action.gripper_position": f["action_dict"]["gripper_position"].numpy(),
+ "action.gripper_velocity": f["action_dict"]["gripper_velocity"].numpy(),
+ "action.cartesian_position": f["action_dict"]["cartesian_position"].numpy(),
+ "action.cartesian_velocity": f["action_dict"]["cartesian_velocity"].numpy(),
+ "action.joint_position": f["action_dict"]["joint_position"].numpy(),
+ "action.joint_velocity": f["action_dict"]["joint_velocity"].numpy(),
+ "discount": np.array([f["discount"].numpy()]),
+ "reward": np.array([f["reward"].numpy()]),
+ "action.original": f["action"].numpy(),
+ }
+
+ # language_instruction is also stored as "task" to follow LeRobot standard
+ frame["task"] = frame["language_instruction"]
+
+ # Add this new feature to follow LeRobot standard of using joint position + gripper
+ frame["observation.state"] = np.concatenate(
+ [frame["observation.state.joint_position"], frame["observation.state.gripper_position"]]
+ )
+ frame["action"] = np.concatenate([frame["action.joint_position"], frame["action.gripper_position"]])
+
+ # Meta data that are the same for all frames in the episode
+ frame.update(frame_meta)
+
+ # Cast fp64 to fp32
+ for key in frame:
+ if isinstance(frame[key], np.ndarray) and frame[key].dtype == np.float64:
+ frame[key] = frame[key].astype(np.float32)
+
+ yield frame
+
+
+def port_droid(
+ raw_dir: Path,
+ repo_id: str,
+ push_to_hub: bool = False,
+ num_shards: int | None = None,
+ shard_index: int | None = None,
+):
+ dataset_name = raw_dir.parent.name
+ version = raw_dir.name
+ data_dir = raw_dir.parent.parent
+
+ builder = tfds.builder(f"{dataset_name}/{version}", data_dir=data_dir, version="")
+
+ if num_shards is not None:
+ tfds_num_shards = builder.info.splits["train"].num_shards
+ if tfds_num_shards != DROID_SHARDS:
+ raise ValueError(
+ f"Number of shards of Droid dataset is expected to be {DROID_SHARDS} but is {tfds_num_shards}."
+ )
+ if num_shards != tfds_num_shards:
+ raise ValueError(
+ f"We only shard over the fixed number of shards provided by tensorflow dataset ({tfds_num_shards}), but {num_shards} shards provided instead."
+ )
+ if shard_index >= tfds_num_shards:
+ raise ValueError(
+ f"Shard index is greater than the num of shards ({shard_index} >= {num_shards})."
+ )
+
+ raw_dataset = builder.as_dataset(split=f"train[{shard_index}shard]")
+ else:
+ raw_dataset = builder.as_dataset(split="train")
+
+ lerobot_dataset = LeRobotDataset.create(
+ repo_id=repo_id,
+ robot_type=DROID_ROBOT_TYPE,
+ fps=DROID_FPS,
+ features=DROID_FEATURES,
+ )
+
+ start_time = time.time()
+ num_episodes = raw_dataset.cardinality().numpy().item()
+ logging.info(f"Number of episodes {num_episodes}")
+
+ for episode_index, episode in enumerate(raw_dataset):
+ elapsed_time = time.time() - start_time
+ d, h, m, s = get_elapsed_time_in_days_hours_minutes_seconds(elapsed_time)
+
+ logging.info(
+ f"{episode_index} / {num_episodes} episodes processed (after {d} days, {h} hours, {m} minutes, {s:.3f} seconds)"
+ )
+
+ for frame in generate_lerobot_frames(episode):
+ lerobot_dataset.add_frame(frame)
+
+ lerobot_dataset.save_episode()
+ logging.info("Save_episode")
+
+ lerobot_dataset.finalize()
+
+ if push_to_hub:
+ lerobot_dataset.push_to_hub(
+ # Add openx tag, since it belongs to the openx collection of datasets
+ tags=["openx"],
+ private=False,
+ )
+
+
+def validate_dataset(repo_id):
+ """Sanity check that ensure meta data can be loaded and all files are present."""
+ meta = LeRobotDatasetMetadata(repo_id)
+
+ if meta.total_episodes == 0:
+ raise ValueError("Number of episodes is 0.")
+
+ for ep_idx in range(meta.total_episodes):
+ data_path = meta.root / meta.get_data_file_path(ep_idx)
+
+ if not data_path.exists():
+ raise ValueError(f"Parquet file is missing in: {data_path}")
+
+ for vid_key in meta.video_keys:
+ vid_path = meta.root / meta.get_video_file_path(ep_idx, vid_key)
+ if not vid_path.exists():
+ raise ValueError(f"Video file is missing in: {vid_path}")
+
+
+def main():
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument(
+ "--raw-dir",
+ type=Path,
+ required=True,
+ help="Directory containing input raw datasets (e.g. `path/to/dataset` or `path/to/dataset/version).",
+ )
+ parser.add_argument(
+ "--repo-id",
+ type=str,
+ help="Repositery identifier on Hugging Face: a community or a user name `/` the name of the dataset, required when push-to-hub is True",
+ )
+ parser.add_argument(
+ "--push-to-hub",
+ action="store_true",
+ help="Upload to hub.",
+ )
+ parser.add_argument(
+ "--num-shards",
+ type=int,
+ default=None,
+ help="Number of shards. Can be either None to load the full dataset, or 2048 to load one of the 2048 tensorflow dataset files.",
+ )
+ parser.add_argument(
+ "--shard-index",
+ type=int,
+ default=None,
+ help="Index of the shard. Can be either None to load the full dataset, or in [0,2047] to load one of the 2048 tensorflow dataset files.",
+ )
+
+ args = parser.parse_args()
+
+ port_droid(**vars(args))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/port_datasets/slurm_aggregate_shards.py b/examples/port_datasets/slurm_aggregate_shards.py
new file mode 100644
index 00000000..4e1b71a3
--- /dev/null
+++ b/examples/port_datasets/slurm_aggregate_shards.py
@@ -0,0 +1,148 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import logging
+from pathlib import Path
+
+from datatrove.executor import LocalPipelineExecutor
+from datatrove.executor.slurm import SlurmPipelineExecutor
+from datatrove.pipeline.base import PipelineStep
+from port_datasets.droid_rlds.port_droid import DROID_SHARDS
+
+from lerobot.datasets.aggregate import aggregate_datasets
+from lerobot.utils.utils import init_logging
+
+
+class AggregateDatasets(PipelineStep):
+ def __init__(
+ self,
+ repo_ids: list[str],
+ aggregated_repo_id: str,
+ ):
+ super().__init__()
+ self.repo_ids = repo_ids
+ self.aggr_repo_id = aggregated_repo_id
+
+ def run(self, data=None, rank: int = 0, world_size: int = 1):
+ init_logging()
+
+ # Since aggregate_datasets already handles parallel processing internally,
+ # we only need one worker to run the entire aggregation
+ if rank == 0:
+ logging.info(f"Starting aggregation of {len(self.repo_ids)} datasets into {self.aggr_repo_id}")
+ aggregate_datasets(self.repo_ids, self.aggr_repo_id)
+ logging.info("Aggregation complete!")
+ else:
+ logging.info(f"Worker {rank} skipping - only worker 0 performs aggregation")
+
+
+def make_aggregate_executor(
+ repo_ids, repo_id, job_name, logs_dir, workers, partition, cpus_per_task, mem_per_cpu, slurm=True
+):
+ kwargs = {
+ "pipeline": [
+ AggregateDatasets(repo_ids, repo_id),
+ ],
+ "logging_dir": str(logs_dir / job_name),
+ }
+
+ if slurm:
+ # For aggregation, we only need 1 task since aggregate_datasets handles everything
+ kwargs.update(
+ {
+ "job_name": job_name,
+ "tasks": 1, # Only need 1 task for aggregation
+ "workers": 1, # Only need 1 worker
+ "time": "08:00:00",
+ "partition": partition,
+ "cpus_per_task": cpus_per_task,
+ "sbatch_args": {"mem-per-cpu": mem_per_cpu},
+ }
+ )
+ executor = SlurmPipelineExecutor(**kwargs)
+ else:
+ kwargs.update(
+ {
+ "tasks": 1,
+ "workers": 1,
+ }
+ )
+ executor = LocalPipelineExecutor(**kwargs)
+
+ return executor
+
+
+def main():
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument(
+ "--repo-id",
+ type=str,
+ help="Repository identifier on Hugging Face: a community or a user name `/` the name of the dataset, required when push-to-hub is True.",
+ )
+ parser.add_argument(
+ "--logs-dir",
+ type=Path,
+ help="Path to logs directory for `datatrove`.",
+ )
+ parser.add_argument(
+ "--job-name",
+ type=str,
+ default="aggr_droid",
+ help="Job name used in slurm, and name of the directory created inside the provided logs directory.",
+ )
+ parser.add_argument(
+ "--slurm",
+ type=int,
+ default=1,
+ help="Launch over slurm. Use `--slurm 0` to launch sequentially (useful to debug).",
+ )
+ parser.add_argument(
+ "--workers",
+ type=int,
+ default=1, # aggregation only needs a single worker
+ help="Number of slurm workers. For aggregation, this should be 1.",
+ )
+ parser.add_argument(
+ "--partition",
+ type=str,
+ help="Slurm partition. Ideally a CPU partition. No need for GPU partition.",
+ )
+ parser.add_argument(
+ "--cpus-per-task",
+ type=int,
+ default=8,
+ help="Number of cpus that each slurm worker will use.",
+ )
+ parser.add_argument(
+ "--mem-per-cpu",
+ type=str,
+ default="1950M",
+ help="Memory per cpu that each worker will use.",
+ )
+
+ args = parser.parse_args()
+ kwargs = vars(args)
+ kwargs["slurm"] = kwargs.pop("slurm") == 1
+
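+ # One shard repo per porting task: slurm_port_shards.py launches DROID_SHARDS tasks, each writing `{repo_id}_world_{DROID_SHARDS}_rank_{rank}`.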
+ repo_ids = [f"{args.repo_id}_world_{DROID_SHARDS}_rank_{rank}" for rank in range(DROID_SHARDS)]
+ aggregate_executor = make_aggregate_executor(repo_ids, **kwargs)
+ aggregate_executor.run()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/port_datasets/slurm_port_shards.py b/examples/port_datasets/slurm_port_shards.py
new file mode 100644
index 00000000..3bb4c135
--- /dev/null
+++ b/examples/port_datasets/slurm_port_shards.py
@@ -0,0 +1,162 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+from pathlib import Path
+
+from datatrove.executor import LocalPipelineExecutor
+from datatrove.executor.slurm import SlurmPipelineExecutor
+from datatrove.pipeline.base import PipelineStep
+from port_datasets.droid_rlds.port_droid import DROID_SHARDS
+
+
+class PortDroidShards(PipelineStep):
+ def __init__(
+ self,
+ raw_dir: Path | str,
+ repo_id: str = None,
+ ):
+ super().__init__()
+ self.raw_dir = Path(raw_dir)
+ self.repo_id = repo_id
+
+ def run(self, data=None, rank: int = 0, world_size: int = 1):
+ from datasets.utils.tqdm import disable_progress_bars
+ from port_datasets.droid_rlds.port_droid import port_droid, validate_dataset
+
+ from lerobot.utils.utils import init_logging
+
+ init_logging()
+ disable_progress_bars()
+
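+ # Each worker ports its own shard into a dedicated repo; slurm_aggregate_shards.py later merges them.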
+ shard_repo_id = f"{self.repo_id}_world_{world_size}_rank_{rank}"
+
+ try:
+ validate_dataset(shard_repo_id)
+ return
+ except Exception:
+ pass # nosec B110 - Dataset doesn't exist yet, continue with porting
+
+ port_droid(
+ self.raw_dir,
+ shard_repo_id,
+ push_to_hub=False,
+ num_shards=world_size,
+ shard_index=rank,
+ )
+
+ validate_dataset(shard_repo_id)
+
+
+def make_port_executor(
+ raw_dir, repo_id, job_name, logs_dir, workers, partition, cpus_per_task, mem_per_cpu, slurm=True
+):
+ kwargs = {
+ "pipeline": [
+ PortDroidShards(raw_dir, repo_id),
+ ],
+ "logging_dir": str(logs_dir / job_name),
+ }
+
+ if slurm:
+ kwargs.update(
+ {
+ "job_name": job_name,
+ "tasks": DROID_SHARDS,
+ "workers": workers,
+ "time": "08:00:00",
+ "partition": partition,
+ "cpus_per_task": cpus_per_task,
+ "sbatch_args": {"mem-per-cpu": mem_per_cpu},
+ }
+ )
+ executor = SlurmPipelineExecutor(**kwargs)
+ else:
+ kwargs.update(
+ {
+ "tasks": 1,
+ "workers": 1,
+ }
+ )
+ executor = LocalPipelineExecutor(**kwargs)
+
+ return executor
+
+
+def main():
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument(
+ "--raw-dir",
+ type=Path,
+ required=True,
+ help="Directory containing input raw datasets (e.g. `path/to/dataset` or `path/to/dataset/version).",
+ )
+ parser.add_argument(
+ "--repo-id",
+ type=str,
+ help="Repositery identifier on Hugging Face: a community or a user name `/` the name of the dataset, required when push-to-hub is True.",
+ )
+ parser.add_argument(
+ "--logs-dir",
+ type=Path,
+ help="Path to logs directory for `datatrove`.",
+ )
+ parser.add_argument(
+ "--job-name",
+ type=str,
+ default="port_droid",
+ help="Job name used in slurm, and name of the directory created inside the provided logs directory.",
+ )
+ parser.add_argument(
+ "--slurm",
+ type=int,
+ default=1,
+ help="Launch over slurm. Use `--slurm 0` to launch sequentially (useful to debug).",
+ )
+ parser.add_argument(
+ "--workers",
+ type=int,
+ default=2048,
+ help="Number of slurm workers. It should be less than the maximum number of shards.",
+ )
+ parser.add_argument(
+ "--partition",
+ type=str,
+ help="Slurm partition. Ideally a CPU partition. No need for GPU partition.",
+ )
+ parser.add_argument(
+ "--cpus-per-task",
+ type=int,
+ default=8,
+ help="Number of cpus that each slurm worker will use.",
+ )
+ parser.add_argument(
+ "--mem-per-cpu",
+ type=str,
+ default="1950M",
+ help="Memory per cpu that each worker will use.",
+ )
+
+ args = parser.parse_args()
+ kwargs = vars(args)
+ kwargs["slurm"] = kwargs.pop("slurm") == 1
+ port_executor = make_port_executor(**kwargs)
+ port_executor.run()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/port_datasets/slurm_upload.py b/examples/port_datasets/slurm_upload.py
new file mode 100644
index 00000000..ade1ef87
--- /dev/null
+++ b/examples/port_datasets/slurm_upload.py
@@ -0,0 +1,281 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import logging
+import os
+from pathlib import Path
+
+from datatrove.executor import LocalPipelineExecutor
+from datatrove.executor.slurm import SlurmPipelineExecutor
+from datatrove.pipeline.base import PipelineStep
+from huggingface_hub import HfApi
+from huggingface_hub.constants import REPOCARD_NAME
+from port_datasets.droid_rlds.port_droid import DROID_SHARDS
+
+from lerobot.datasets.lerobot_dataset import CODEBASE_VERSION, LeRobotDatasetMetadata
+from lerobot.datasets.utils import create_lerobot_dataset_card
+from lerobot.utils.utils import init_logging
+
+
+class UploadDataset(PipelineStep):
+ def __init__(
+ self,
+ repo_id: str,
+ branch: str | None = None,
+ revision: str | None = None,
+ tags: list | None = None,
+ license: str | None = "apache-2.0",
+ private: bool = False,
+ distant_repo_id: str | None = None,
+ **card_kwargs,
+ ):
+ super().__init__()
+ self.repo_id = repo_id
+ self.distant_repo_id = self.repo_id if distant_repo_id is None else distant_repo_id
+ self.branch = branch
+ self.tags = tags
+ self.license = license
+ self.private = private
+ self.card_kwargs = card_kwargs
+ self.revision = revision if revision else CODEBASE_VERSION
+
+ if os.environ.get("HF_HUB_ENABLE_HF_TRANSFER", "0") != "1":
+ logging.warning(
+ 'HF_HUB_ENABLE_HF_TRANSFER is not set to "1". Install hf_transfer and set the env '
+ "variable for faster uploads:\npip install hf-transfer\nexport HF_HUB_ENABLE_HF_TRANSFER=1"
+ )
+
+ self.create_repo()
+
+ def create_repo(self):
+ logging.info(f"Loading meta data from {self.repo_id}...")
+ meta = LeRobotDatasetMetadata(self.repo_id)
+
+ logging.info(f"Creating repo {self.distant_repo_id}...")
+ hub_api = HfApi()
+ hub_api.create_repo(
+ repo_id=self.distant_repo_id,
+ private=self.private,
+ repo_type="dataset",
+ exist_ok=True,
+ )
+ if self.branch:
+ hub_api.create_branch(
+ repo_id=self.distant_repo_id,
+ branch=self.branch,
+ revision=self.revision,
+ repo_type="dataset",
+ exist_ok=True,
+ )
+
+ if not hub_api.file_exists(
+ self.distant_repo_id, REPOCARD_NAME, repo_type="dataset", revision=self.branch
+ ):
+ card = create_lerobot_dataset_card(
+ tags=self.tags, dataset_info=meta.info, license=self.license, **self.card_kwargs
+ )
+ card.push_to_hub(repo_id=self.distant_repo_id, repo_type="dataset", revision=self.branch)
+
+ hub_api.create_tag(self.distant_repo_id, tag=CODEBASE_VERSION, repo_type="dataset")
+
+ def list_files_recursively(directory):
+ base_path = Path(directory)
+ return [str(file.relative_to(base_path)) for file in base_path.rglob("*") if file.is_file()]
+
+ logging.info(f"Listing all local files from {self.repo_id}...")
+ self.file_paths = list_files_recursively(meta.root)
+ self.file_paths = sorted(self.file_paths)
+
+ def create_chunks(self, lst, n):
+ from itertools import islice
+
+ it = iter(lst)
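+ # Chunk sizes differ by at most one: the first len(lst) % n chunks get one extra element, so every item ends up in exactly one chunk.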
+ return [list(islice(it, size)) for size in [len(lst) // n + (i < len(lst) % n) for i in range(n)]]
+
+ def create_commits(self, additions):
+ import logging
+ import math
+ import random
+ import time
+
+ from huggingface_hub import create_commit
+ from huggingface_hub.utils import HfHubHTTPError
+
+ FILES_BETWEEN_COMMITS = 10 # noqa: N806
+ BASE_DELAY = 0.1 # noqa: N806
+ MAX_RETRIES = 12 # noqa: N806
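+ # Retries back off exponentially with jitter: BASE_DELAY * 2**retries seconds plus up to 2 extra seconds.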
+
+ # Split the files into smaller chunks for faster commit
+ # and avoiding "A commit has happened since" error
+ num_chunks = math.ceil(len(additions) / FILES_BETWEEN_COMMITS)
+ chunks = self.create_chunks(additions, num_chunks)
+
+ for chunk in chunks:
+ retries = 0
+ while True:
+ try:
+ create_commit(
+ self.distant_repo_id,
+ repo_type="dataset",
+ operations=chunk,
+ commit_message=f"DataTrove upload ({len(chunk)} files)",
+ revision=self.branch,
+ )
+ # TODO: super-squash the commit history every ~100 chunks
+ logging.info("create_commit completed!")
+ break
+ except HfHubHTTPError as e:
+ if "A commit has happened since" in e.server_message:
+ if retries >= MAX_RETRIES:
+ logging.error(f"Failed to create commit after {MAX_RETRIES=}. Giving up.")
+ raise e
+ logging.info("Commit creation race condition issue. Waiting...")
+ time.sleep(BASE_DELAY * 2**retries + random.uniform(0, 2))
+ retries += 1
+ else:
+ raise e
+
+ def run(self, data=None, rank: int = 0, world_size: int = 1):
+ import logging
+
+ from datasets.utils.tqdm import disable_progress_bars
+ from huggingface_hub import CommitOperationAdd, preupload_lfs_files
+
+ from lerobot.datasets.lerobot_dataset import LeRobotDatasetMetadata
+ from lerobot.utils.utils import init_logging
+
+ init_logging()
+ disable_progress_bars()
+
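+ # Split the full file list evenly across ranks; each rank uploads only its own slice.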
+ chunks = self.create_chunks(self.file_paths, world_size)
+ file_paths = chunks[rank]
+
+ if len(file_paths) == 0:
+ raise ValueError(file_paths)
+
+ logging.info("Pre-uploading LFS files...")
+ for i, path in enumerate(file_paths):
+ logging.info(f"{i}: {path}")
+
+ meta = LeRobotDatasetMetadata(self.repo_id)
+ additions = [
+ CommitOperationAdd(path_in_repo=path, path_or_fileobj=meta.root / path) for path in file_paths
+ ]
+ preupload_lfs_files(
+ repo_id=self.distant_repo_id, repo_type="dataset", additions=additions, revision=self.branch
+ )
+
+ logging.info("Creating commits...")
+ self.create_commits(additions)
+ logging.info("Done!")
+
+
+def make_upload_executor(
+ repo_id, job_name, logs_dir, workers, partition, cpus_per_task, mem_per_cpu, slurm=True
+):
+ kwargs = {
+ "pipeline": [
+ UploadDataset(repo_id),
+ ],
+ "logging_dir": str(logs_dir / job_name),
+ }
+
+ if slurm:
+ kwargs.update(
+ {
+ "job_name": job_name,
+ "tasks": DROID_SHARDS,
+ "workers": workers,
+ "time": "08:00:00",
+ "partition": partition,
+ "cpus_per_task": cpus_per_task,
+ "sbatch_args": {"mem-per-cpu": mem_per_cpu},
+ }
+ )
+ executor = SlurmPipelineExecutor(**kwargs)
+ else:
+ kwargs.update(
+ {
+ "tasks": DROID_SHARDS,
+ "workers": 1,
+ }
+ )
+ executor = LocalPipelineExecutor(**kwargs)
+
+ return executor
+
+
+def main():
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument(
+ "--repo-id",
+ type=str,
+ help="Repositery identifier on Hugging Face: a community or a user name `/` the name of the dataset, required when push-to-hub is True.",
+ )
+ parser.add_argument(
+ "--logs-dir",
+ type=Path,
+ help="Path to logs directory for `datatrove`.",
+ )
+ parser.add_argument(
+ "--job-name",
+ type=str,
+ default="upload_droid",
+ help="Job name used in slurm, and name of the directory created inside the provided logs directory.",
+ )
+ parser.add_argument(
+ "--slurm",
+ type=int,
+ default=1,
+ help="Launch over slurm. Use `--slurm 0` to launch sequentially (useful to debug).",
+ )
+ parser.add_argument(
+ "--workers",
+ type=int,
+ default=50,
+ help="Number of slurm workers. It should be less than the maximum number of shards.",
+ )
+ parser.add_argument(
+ "--partition",
+ type=str,
+ help="Slurm partition. Ideally a CPU partition. No need for GPU partition.",
+ )
+ parser.add_argument(
+ "--cpus-per-task",
+ type=int,
+ default=8,
+ help="Number of cpus that each slurm worker will use.",
+ )
+ parser.add_argument(
+ "--mem-per-cpu",
+ type=str,
+ default="1950M",
+ help="Memory per cpu that each worker will use.",
+ )
+
+ init_logging()
+
+ args = parser.parse_args()
+ kwargs = vars(args)
+ kwargs["slurm"] = kwargs.pop("slurm") == 1
+ upload_executor = make_upload_executor(**kwargs)
+ upload_executor.run()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/so100_to_so100_EE/evaluate.py b/examples/so100_to_so100_EE/evaluate.py
new file mode 100644
index 00000000..60489b3c
--- /dev/null
+++ b/examples/so100_to_so100_EE/evaluate.py
@@ -0,0 +1,200 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.configs.types import FeatureType, PolicyFeature
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.pipeline_features import aggregate_pipeline_dataset_features, create_initial_features
+from lerobot.datasets.utils import combine_feature_dicts
+from lerobot.model.kinematics import RobotKinematics
+from lerobot.policies.act.modeling_act import ACTPolicy
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.processor import (
+ RobotAction,
+ RobotObservation,
+ RobotProcessorPipeline,
+ make_default_teleop_action_processor,
+)
+from lerobot.processor.converters import (
+ observation_to_transition,
+ robot_action_observation_to_transition,
+ transition_to_observation,
+ transition_to_robot_action,
+)
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.robot_kinematic_processor import (
+ ForwardKinematicsJointsToEE,
+ InverseKinematicsEEToJoints,
+)
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.scripts.lerobot_record import record_loop
+from lerobot.utils.control_utils import init_keyboard_listener
+from lerobot.utils.utils import log_say
+from lerobot.utils.visualization_utils import init_rerun
+
+NUM_EPISODES = 5
+FPS = 30
+EPISODE_TIME_SEC = 60
+TASK_DESCRIPTION = "My task description"
+HF_MODEL_ID = "/"
+HF_DATASET_ID = "/"
+
+# Create the robot configuration & robot
+camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
+robot_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem5A460814411",
+ id="my_awesome_follower_arm",
+ cameras=camera_config,
+ use_degrees=True,
+)
+
+robot = SO100Follower(robot_config)
+
+# Create policy
+policy = ACTPolicy.from_pretrained(HF_MODEL_ID)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(robot.bus.motors.keys()),
+)
+
+# Build pipeline to convert EE action to joints action
+robot_ee_to_joints_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ steps=[
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver,
+ motor_names=list(robot.bus.motors.keys()),
+ initial_guess_current_joints=True,
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Build pipeline to convert joints observation to EE observation
+robot_joints_to_ee_pose_processor = RobotProcessorPipeline[RobotObservation, RobotObservation](
+ steps=[
+ ForwardKinematicsJointsToEE(kinematics=kinematics_solver, motor_names=list(robot.bus.motors.keys()))
+ ],
+ to_transition=observation_to_transition,
+ to_output=transition_to_observation,
+)
+
+
+# Create the dataset
+dataset = LeRobotDataset.create(
+ repo_id=HF_DATASET_ID,
+ fps=FPS,
+ features=combine_feature_dicts(
+ aggregate_pipeline_dataset_features(
+ pipeline=robot_joints_to_ee_pose_processor,
+ initial_features=create_initial_features(observation=robot.observation_features),
+ use_videos=True,
+ ),
+ # For now, the user should be explicit about the feature keys that were used for recording.
+ # Alternatively, the user can pass the processor step that defines the right features.
+ aggregate_pipeline_dataset_features(
+ pipeline=make_default_teleop_action_processor(),
+ initial_features=create_initial_features(
+ action={
+ f"ee.{k}": PolicyFeature(type=FeatureType.ACTION, shape=(1,))
+ for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]
+ }
+ ),
+ use_videos=True,
+ ),
+ ),
+ robot_type=robot.name,
+ use_videos=True,
+ image_writer_threads=4,
+)
+
+# Build Policy Processors
+preprocessor, postprocessor = make_pre_post_processors(
+ policy_cfg=policy,
+ pretrained_path=HF_MODEL_ID,
+ dataset_stats=dataset.meta.stats,
+ # Override the device processor so that inference runs on the same device as the policy, regardless of the device used during training.
+ preprocessor_overrides={"device_processor": {"device": str(policy.config.device)}},
+)
+
+# Connect the robot and teleoperator
+robot.connect()
+
+# Initialize the keyboard listener and rerun visualization
+listener, events = init_keyboard_listener()
+init_rerun(session_name="so100_so100_evaluate")
+
+if not robot.is_connected:
+ raise ValueError("Robot is not connected!")
+
+print("Starting evaluate loop...")
+episode_idx = 0
+for episode_idx in range(NUM_EPISODES):
+ log_say(f"Running inference, recording eval episode {episode_idx + 1} of {NUM_EPISODES}")
+
+ # Main record loop
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ policy=policy,
+ preprocessor=preprocessor, # Pass the pre and post policy processors
+ postprocessor=postprocessor,
+ dataset=dataset,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=make_default_teleop_action_processor(),
+ robot_action_processor=robot_ee_to_joints_processor,
+ robot_observation_processor=robot_joints_to_ee_pose_processor,
+ )
+
+ # Reset the environment if not stopping or re-recording
+ if not events["stop_recording"] and ((episode_idx < NUM_EPISODES - 1) or events["rerecord_episode"]):
+ log_say("Reset the environment")
+ record_loop(
+ robot=robot,
+ events=events,
+ fps=FPS,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=make_default_teleop_action_processor(),
+ robot_action_processor=robot_ee_to_joints_processor,
+ robot_observation_processor=robot_joints_to_ee_pose_processor,
+ )
+
+ if events["rerecord_episode"]:
+ log_say("Re-record episode")
+ events["rerecord_episode"] = False
+ events["exit_early"] = False
+ dataset.clear_episode_buffer()
+ continue
+
+ # Save episode
+ dataset.save_episode()
+ episode_idx += 1
+
+# Clean up
+log_say("Stop recording")
+robot.disconnect()
+listener.stop()
+
+dataset.finalize()
+dataset.push_to_hub()
diff --git a/examples/so100_to_so100_EE/record.py b/examples/so100_to_so100_EE/record.py
new file mode 100644
index 00000000..5ff1c286
--- /dev/null
+++ b/examples/so100_to_so100_EE/record.py
@@ -0,0 +1,204 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.pipeline_features import aggregate_pipeline_dataset_features, create_initial_features
+from lerobot.datasets.utils import combine_feature_dicts
+from lerobot.model.kinematics import RobotKinematics
+from lerobot.processor import RobotAction, RobotObservation, RobotProcessorPipeline
+from lerobot.processor.converters import (
+ observation_to_transition,
+ robot_action_observation_to_transition,
+ transition_to_observation,
+ transition_to_robot_action,
+)
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.robot_kinematic_processor import (
+ EEBoundsAndSafety,
+ ForwardKinematicsJointsToEE,
+ InverseKinematicsEEToJoints,
+)
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.scripts.lerobot_record import record_loop
+from lerobot.teleoperators.so100_leader.config_so100_leader import SO100LeaderConfig
+from lerobot.teleoperators.so100_leader.so100_leader import SO100Leader
+from lerobot.utils.control_utils import init_keyboard_listener
+from lerobot.utils.utils import log_say
+from lerobot.utils.visualization_utils import init_rerun
+
+NUM_EPISODES = 2
+FPS = 30
+EPISODE_TIME_SEC = 60
+RESET_TIME_SEC = 30
+TASK_DESCRIPTION = "My task description"
+HF_REPO_ID = "/"
+
+# Create the robot and teleoperator configurations
+camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
+follower_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem5A460814411", id="my_awesome_follower_arm", cameras=camera_config, use_degrees=True
+)
+leader_config = SO100LeaderConfig(port="/dev/tty.usbmodem5A460819811", id="my_awesome_leader_arm")
+
+# Initialize the robot and teleoperator
+follower = SO100Follower(follower_config)
+leader = SO100Leader(leader_config)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+follower_kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(follower.bus.motors.keys()),
+)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+leader_kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(leader.bus.motors.keys()),
+)
+
+# Build pipeline to convert follower joints to EE observation
+follower_joints_to_ee = RobotProcessorPipeline[RobotObservation, RobotObservation](
+ steps=[
+ ForwardKinematicsJointsToEE(
+ kinematics=follower_kinematics_solver, motor_names=list(follower.bus.motors.keys())
+ ),
+ ],
+ to_transition=observation_to_transition,
+ to_output=transition_to_observation,
+)
+
+# Build pipeline to convert leader joints to EE action
+leader_joints_to_ee = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ steps=[
+ ForwardKinematicsJointsToEE(
+ kinematics=leader_kinematics_solver, motor_names=list(leader.bus.motors.keys())
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Build pipeline to convert EE action to follower joints
+ee_to_follower_joints = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ [
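+ # Keep the commanded EE pose inside the workspace bounds and limit each step to max_ee_step_m.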
+ EEBoundsAndSafety(
+ end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
+ max_ee_step_m=0.10,
+ ),
+ InverseKinematicsEEToJoints(
+ kinematics=follower_kinematics_solver,
+ motor_names=list(follower.bus.motors.keys()),
+ initial_guess_current_joints=True,
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Create the dataset
+dataset = LeRobotDataset.create(
+ repo_id=HF_REPO_ID,
+ fps=FPS,
+ features=combine_feature_dicts(
+ # Run the feature contract of the pipelines
+ # This tells you what the features will look like after the pipeline steps
+ aggregate_pipeline_dataset_features(
+ pipeline=leader_joints_to_ee,
+ initial_features=create_initial_features(action=leader.action_features),
+ use_videos=True,
+ ),
+ aggregate_pipeline_dataset_features(
+ pipeline=follower_joints_to_ee,
+ initial_features=create_initial_features(observation=follower.observation_features),
+ use_videos=True,
+ ),
+ ),
+ robot_type=follower.name,
+ use_videos=True,
+ image_writer_threads=4,
+)
+
+
+# Connect the robot and teleoperator
+leader.connect()
+follower.connect()
+
+# Initialize the keyboard listener and rerun visualization
+listener, events = init_keyboard_listener()
+init_rerun(session_name="recording_phone")
+
+if not leader.is_connected or not follower.is_connected:
+ raise ValueError("Robot or teleop is not connected!")
+
+print("Starting record loop...")
+episode_idx = 0
+while episode_idx < NUM_EPISODES and not events["stop_recording"]:
+ log_say(f"Recording episode {episode_idx + 1} of {NUM_EPISODES}")
+
+ # Main record loop
+ record_loop(
+ robot=follower,
+ events=events,
+ fps=FPS,
+ teleop=leader,
+ dataset=dataset,
+ control_time_s=EPISODE_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=leader_joints_to_ee,
+ robot_action_processor=ee_to_follower_joints,
+ robot_observation_processor=follower_joints_to_ee,
+ )
+
+ # Reset the environment if not stopping or re-recording
+ if not events["stop_recording"] and (episode_idx < NUM_EPISODES - 1 or events["rerecord_episode"]):
+ log_say("Reset the environment")
+ record_loop(
+ robot=follower,
+ events=events,
+ fps=FPS,
+ teleop=leader,
+ control_time_s=RESET_TIME_SEC,
+ single_task=TASK_DESCRIPTION,
+ display_data=True,
+ teleop_action_processor=leader_joints_to_ee,
+ robot_action_processor=ee_to_follower_joints,
+ robot_observation_processor=follower_joints_to_ee,
+ )
+
+ if events["rerecord_episode"]:
+ log_say("Re-recording episode")
+ events["rerecord_episode"] = False
+ events["exit_early"] = False
+ dataset.clear_episode_buffer()
+ continue
+
+ # Save episode
+ dataset.save_episode()
+ episode_idx += 1
+
+# Clean up
+log_say("Stop recording")
+leader.disconnect()
+follower.disconnect()
+listener.stop()
+
+dataset.finalize()
+dataset.push_to_hub()
diff --git a/examples/so100_to_so100_EE/replay.py b/examples/so100_to_so100_EE/replay.py
new file mode 100644
index 00000000..ea78d4e6
--- /dev/null
+++ b/examples/so100_to_so100_EE/replay.py
@@ -0,0 +1,101 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import time
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.model.kinematics import RobotKinematics
+from lerobot.processor import RobotAction, RobotObservation, RobotProcessorPipeline
+from lerobot.processor.converters import (
+ robot_action_observation_to_transition,
+ transition_to_robot_action,
+)
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.robot_kinematic_processor import (
+ InverseKinematicsEEToJoints,
+)
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.utils.constants import ACTION
+from lerobot.utils.robot_utils import busy_wait
+from lerobot.utils.utils import log_say
+
+EPISODE_IDX = 0
+HF_REPO_ID = "/"
+
+# Initialize the robot config
+robot_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem5A460814411", id="my_awesome_follower_arm", use_degrees=True
+)
+
+# Initialize the robot
+robot = SO100Follower(robot_config)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(robot.bus.motors.keys()),
+)
+
+# Build pipeline to convert EE action to joints action
+robot_ee_to_joints_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ steps=[
+ InverseKinematicsEEToJoints(
+ kinematics=kinematics_solver,
+ motor_names=list(robot.bus.motors.keys()),
+ initial_guess_current_joints=False, # Because replay is open loop
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Fetch the dataset to replay
+dataset = LeRobotDataset(HF_REPO_ID, episodes=[EPISODE_IDX])
+# Filter dataset to only include frames from the specified episode since episodes are chunked in dataset V3.0
+episode_frames = dataset.hf_dataset.filter(lambda x: x["episode_index"] == EPISODE_IDX)
+actions = episode_frames.select_columns(ACTION)
+
+# Connect to the robot
+robot.connect()
+
+if not robot.is_connected:
+ raise ValueError("Robot is not connected!")
+
+print("Starting replay loop...")
+log_say(f"Replaying episode {EPISODE_IDX}")
+for idx in range(len(episode_frames)):
+ t0 = time.perf_counter()
+
+ # Get recorded action from dataset
+ ee_action = {
+ name: float(actions[idx][ACTION][i]) for i, name in enumerate(dataset.features[ACTION]["names"])
+ }
+
+ # Get robot observation
+ robot_obs = robot.get_observation()
+
+ # Dataset EE -> robot joints
+ joint_action = robot_ee_to_joints_processor((ee_action, robot_obs))
+
+ # Send action to robot
+ _ = robot.send_action(joint_action)
+
+ busy_wait(1.0 / dataset.fps - (time.perf_counter() - t0))
+
+# Clean up
+robot.disconnect()
diff --git a/examples/so100_to_so100_EE/teleoperate.py b/examples/so100_to_so100_EE/teleoperate.py
new file mode 100644
index 00000000..b1a8c8c2
--- /dev/null
+++ b/examples/so100_to_so100_EE/teleoperate.py
@@ -0,0 +1,121 @@
+#!/usr/bin/env python
+
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from lerobot.model.kinematics import RobotKinematics
+from lerobot.processor import RobotAction, RobotObservation, RobotProcessorPipeline
+from lerobot.processor.converters import (
+ robot_action_observation_to_transition,
+ robot_action_to_transition,
+ transition_to_robot_action,
+)
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.robot_kinematic_processor import (
+ EEBoundsAndSafety,
+ ForwardKinematicsJointsToEE,
+ InverseKinematicsEEToJoints,
+)
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+from lerobot.teleoperators.so100_leader.config_so100_leader import SO100LeaderConfig
+from lerobot.teleoperators.so100_leader.so100_leader import SO100Leader
+from lerobot.utils.robot_utils import busy_wait
+from lerobot.utils.visualization_utils import init_rerun, log_rerun_data
+
+FPS = 30
+
+# Initialize the robot and teleoperator config
+follower_config = SO100FollowerConfig(
+ port="/dev/tty.usbmodem5A460814411", id="my_awesome_follower_arm", use_degrees=True
+)
+leader_config = SO100LeaderConfig(port="/dev/tty.usbmodem5A460819811", id="my_awesome_leader_arm")
+
+# Initialize the robot and teleoperator
+follower = SO100Follower(follower_config)
+leader = SO100Leader(leader_config)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+follower_kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(follower.bus.motors.keys()),
+)
+
+# NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo: https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
+leader_kinematics_solver = RobotKinematics(
+ urdf_path="./SO101/so101_new_calib.urdf",
+ target_frame_name="gripper_frame_link",
+ joint_names=list(leader.bus.motors.keys()),
+)
+
+# Build pipeline to convert teleop joints to EE action
+leader_to_ee = RobotProcessorPipeline[RobotAction, RobotAction](
+ steps=[
+ ForwardKinematicsJointsToEE(
+ kinematics=leader_kinematics_solver, motor_names=list(leader.bus.motors.keys())
+ ),
+ ],
+ to_transition=robot_action_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Build pipeline to convert EE action to robot joints
+ee_to_follower_joints = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
+ [
+ EEBoundsAndSafety(
+ end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
+ max_ee_step_m=0.10,
+ ),
+ InverseKinematicsEEToJoints(
+ kinematics=follower_kinematics_solver,
+ motor_names=list(follower.bus.motors.keys()),
+ initial_guess_current_joints=False,
+ ),
+ ],
+ to_transition=robot_action_observation_to_transition,
+ to_output=transition_to_robot_action,
+)
+
+# Connect to the robot and teleoperator
+follower.connect()
+leader.connect()
+
+# Init rerun viewer
+init_rerun(session_name="so100_so100_EE_teleop")
+
+print("Starting teleop loop...")
+while True:
+ t0 = time.perf_counter()
+
+ # Get robot observation
+ robot_obs = follower.get_observation()
+
+ # Get teleop observation
+ leader_joints_obs = leader.get_action()
+
+ # teleop joints -> teleop EE action
+ leader_ee_act = leader_to_ee(leader_joints_obs)
+
+ # teleop EE -> robot joints
+ follower_joints_act = ee_to_follower_joints((leader_ee_act, robot_obs))
+
+ # Send action to robot
+ _ = follower.send_action(follower_joints_act)
+
+ # Visualize
+ log_rerun_data(observation=leader_ee_act, action=follower_joints_act)
+
+ busy_wait(max(1.0 / FPS - (time.perf_counter() - t0), 0.0))
diff --git a/examples/3_train_policy.py b/examples/training/train_policy.py
similarity index 87%
rename from examples/3_train_policy.py
rename to examples/training/train_policy.py
index f9c251a0..16f2a4d8 100644
--- a/examples/3_train_policy.py
+++ b/examples/training/train_policy.py
@@ -12,21 +12,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-"""This script demonstrates how to train Diffusion Policy on the PushT environment.
-
-Once you have trained a model with this script, you can try to evaluate it on
-examples/2_evaluate_pretrained_policy.py
-"""
+"""This script demonstrates how to train Diffusion Policy on the PushT environment."""
from pathlib import Path
import torch
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
-from lerobot.common.datasets.utils import dataset_to_policy_features
-from lerobot.common.policies.diffusion.configuration_diffusion import DiffusionConfig
-from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy
from lerobot.configs.types import FeatureType
+from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
+from lerobot.datasets.utils import dataset_to_policy_features
+from lerobot.policies.diffusion.configuration_diffusion import DiffusionConfig
+from lerobot.policies.diffusion.modeling_diffusion import DiffusionPolicy
+from lerobot.policies.factory import make_pre_post_processors
def main():
@@ -56,9 +53,10 @@ def main():
cfg = DiffusionConfig(input_features=input_features, output_features=output_features)
# We can now instantiate our policy with this config and the dataset stats.
- policy = DiffusionPolicy(cfg, dataset_stats=dataset_metadata.stats)
+ policy = DiffusionPolicy(cfg)
policy.train()
policy.to(device)
+ preprocessor, postprocessor = make_pre_post_processors(cfg, dataset_stats=dataset_metadata.stats)
# Another policy-dataset interaction is with the delta_timestamps. Each policy expects a given number frames
# which can differ for inputs, outputs and rewards (if there are some).
@@ -99,7 +97,7 @@ def main():
done = False
while not done:
for batch in dataloader:
- batch = {k: (v.to(device) if isinstance(v, torch.Tensor) else v) for k, v in batch.items()}
+ batch = preprocessor(batch)
loss, _ = policy.forward(batch)
loss.backward()
optimizer.step()
@@ -114,6 +112,8 @@ def main():
# Save a policy checkpoint.
policy.save_pretrained(output_directory)
+ preprocessor.save_pretrained(output_directory)
+ postprocessor.save_pretrained(output_directory)
if __name__ == "__main__":
diff --git a/examples/training/train_with_streaming.py b/examples/training/train_with_streaming.py
new file mode 100644
index 00000000..185be5b1
--- /dev/null
+++ b/examples/training/train_with_streaming.py
@@ -0,0 +1,108 @@
+# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""This script demonstrates how to train a Diffusion Policy on the PushT environment,
+using a dataset processed in streaming mode."""
+
+from pathlib import Path
+
+import torch
+
+from lerobot.configs.types import FeatureType
+from lerobot.datasets.lerobot_dataset import LeRobotDatasetMetadata
+from lerobot.datasets.streaming_dataset import StreamingLeRobotDataset
+from lerobot.datasets.utils import dataset_to_policy_features
+from lerobot.policies.act.configuration_act import ACTConfig
+from lerobot.policies.act.modeling_act import ACTPolicy
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.utils.constants import ACTION
+
+
+def main():
+ # Create a directory to store the training checkpoint.
+ output_directory = Path("outputs/train/example_streaming_dataset")
+ output_directory.mkdir(parents=True, exist_ok=True)
+
+ # Selects the "best" device available
+ device = (
+ torch.device("cuda")
+ if torch.cuda.is_available()
+ else torch.device("mps")
+ if torch.backends.mps.is_available()
+ else torch.device("cpu")
+ )
+ print(f"Using device: {device}")
+
+ training_steps = 10
+ log_freq = 1
+
+ dataset_id = "lerobot/droid_1.0.1" # 26M frames! Would require 4TB of disk space if installed locally (:
+ dataset_metadata = LeRobotDatasetMetadata(dataset_id)
+ features = dataset_to_policy_features(dataset_metadata.features)
+ output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}
+ input_features = {key: ft for key, ft in features.items() if key not in output_features}
+
+ # We can now instantiate our policy with this config and the dataset stats.
+ cfg = ACTConfig(input_features=input_features, output_features=output_features)
+ policy = ACTPolicy(cfg)
+ policy.train()
+ policy.to(device)
+ preprocessor, postprocessor = make_pre_post_processors(cfg, dataset_stats=dataset_metadata.stats)
+
+ # Delta timestamps are used to (1) augment frames used during training and (2) supervise the policy.
+ # Here, we use delta-timestamps to only provide ground truth actions for supervision
+ delta_timestamps = {
+ ACTION: [t / dataset_metadata.fps for t in range(cfg.n_action_steps)],
+ }
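+ # For example, with fps=15 this yields [0.0, 1/15, 2/15, ...]: the timestamps, relative to the current frame, of the future actions used as supervision targets.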
+
+ # Instantiating the training dataset in streaming mode avoids loading all the data into memory: frames are
+ # fetched iteratively rather than all at once, and retrieved frames are shuffled across epochs.
+ dataset = StreamingLeRobotDataset(dataset_id, delta_timestamps=delta_timestamps, tolerance_s=1e-3)
+
+ optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
+ dataloader = torch.utils.data.DataLoader(
+ dataset,
+ num_workers=4,
+ batch_size=16,
+ pin_memory=device.type != "cpu",
+ drop_last=True,
+ prefetch_factor=2, # loads batches with multiprocessing while policy trains
+ )
+
+ # Run training loop.
+ step = 0
+ done = False
+ while not done:
+ for batch in dataloader:
+ batch = preprocessor(batch)
+ loss, _ = policy.forward(batch)
+ loss.backward()
+ optimizer.step()
+ optimizer.zero_grad()
+
+ if step % log_freq == 0:
+ print(f"step: {step} loss: {loss.item():.3f}")
+ step += 1
+ if step >= training_steps:
+ done = True
+ break
+
+ # Save a policy checkpoint.
+ policy.save_pretrained(output_directory)
+ preprocessor.save_pretrained(output_directory)
+ postprocessor.save_pretrained(output_directory)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/tutorial/act/act_training_example.py b/examples/tutorial/act/act_training_example.py
new file mode 100644
index 00000000..6db495ca
--- /dev/null
+++ b/examples/tutorial/act/act_training_example.py
@@ -0,0 +1,98 @@
+"""This script demonstrates how to train ACT Policy on a real-world dataset."""
+
+from pathlib import Path
+
+import torch
+
+from lerobot.configs.types import FeatureType
+from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
+from lerobot.datasets.utils import dataset_to_policy_features
+from lerobot.policies.act.configuration_act import ACTConfig
+from lerobot.policies.act.modeling_act import ACTPolicy
+from lerobot.policies.factory import make_pre_post_processors
+
+
+def make_delta_timestamps(delta_indices: list[int] | None, fps: int) -> list[float]:
+ if delta_indices is None:
+ return [0]
+
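+ # e.g. make_delta_timestamps([0, 1, 2], fps=30) -> [0.0, 1/30, 2/30]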
+ return [i / fps for i in delta_indices]
+
+
+output_directory = Path("outputs/robot_learning_tutorial/act")
+output_directory.mkdir(parents=True, exist_ok=True)
+
+# Select your device
+device = torch.device("mps") # or "cuda" or "cpu"
+
+dataset_id = "lerobot/svla_so101_pickplace"
+
+# This specifies the inputs the model will be expecting and the outputs it will produce
+dataset_metadata = LeRobotDatasetMetadata(dataset_id)
+features = dataset_to_policy_features(dataset_metadata.features)
+
+output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}
+input_features = {key: ft for key, ft in features.items() if key not in output_features}
+
+cfg = ACTConfig(input_features=input_features, output_features=output_features)
+policy = ACTPolicy(cfg)
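+# The pre/post processors handle the policy's input/output transforms, e.g. normalization with the dataset statistics.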
+preprocessor, postprocessor = make_pre_post_processors(cfg, dataset_stats=dataset_metadata.stats)
+
+policy.train()
+policy.to(device)
+
+# To perform action chunking, ACT expects a given number of actions as targets
+delta_timestamps = {
+ "action": make_delta_timestamps(cfg.action_delta_indices, dataset_metadata.fps),
+}
+
+# add image features if they are present
+delta_timestamps |= {
+ k: make_delta_timestamps(cfg.observation_delta_indices, dataset_metadata.fps) for k in cfg.image_features
+}
+
+# Instantiate the dataset
+dataset = LeRobotDataset(dataset_id, delta_timestamps=delta_timestamps)
+
+# Create the optimizer and dataloader for offline training
+optimizer = cfg.get_optimizer_preset().build(policy.parameters())
+batch_size = 32
+dataloader = torch.utils.data.DataLoader(
+ dataset,
+ batch_size=batch_size,
+ shuffle=True,
+ pin_memory=device.type != "cpu",
+ drop_last=True,
+)
+
+# Number of training steps and logging frequency
+training_steps = 1
+log_freq = 1
+
+# Run training loop
+step = 0
+done = False
+while not done:
+ for batch in dataloader:
+ batch = preprocessor(batch)
+ loss, _ = policy.forward(batch)
+ loss.backward()
+ optimizer.step()
+ optimizer.zero_grad()
+
+ if step % log_freq == 0:
+ print(f"step: {step} loss: {loss.item():.3f}")
+ step += 1
+ if step >= training_steps:
+ done = True
+ break
+
+# Save the policy checkpoint, alongside the pre/post processors
+policy.save_pretrained(output_directory)
+preprocessor.save_pretrained(output_directory)
+postprocessor.save_pretrained(output_directory)
+
+# Save all assets to the Hub
+policy.push_to_hub("fracapuano/robot_learning_tutorial_act")
+preprocessor.push_to_hub("fracapuano/robot_learning_tutorial_act")
+postprocessor.push_to_hub("fracapuano/robot_learning_tutorial_act")
diff --git a/examples/tutorial/act/act_using_example.py b/examples/tutorial/act/act_using_example.py
new file mode 100644
index 00000000..8beef765
--- /dev/null
+++ b/examples/tutorial/act/act_using_example.py
@@ -0,0 +1,57 @@
+import torch
+
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.datasets.lerobot_dataset import LeRobotDatasetMetadata
+from lerobot.policies.act.modeling_act import ACTPolicy
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.policies.utils import build_inference_frame, make_robot_action
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+
+device = torch.device("mps") # or "cuda" or "cpu"
+model_id = "fracapuano/robot_learning_tutorial_act"
+model = ACTPolicy.from_pretrained(model_id)
+
+dataset_id = "lerobot/svla_so101_pickplace"
+# This only downloads the dataset metadata (a few tens of MB, even for large-scale datasets)
+dataset_metadata = LeRobotDatasetMetadata(dataset_id)
+preprocess, postprocess = make_pre_post_processors(model.config, dataset_stats=dataset_metadata.stats)
+
+# Find ports using lerobot-find-port
+follower_port = ... # something like "/dev/tty.usbmodem58760431631"
+
+# The robot ids are used to load the right calibration files
+follower_id = ... # something like "follower_so100"
+
+MAX_EPISODES = 5
+MAX_STEPS_PER_EPISODE = 20
+
+# Robot and environment configuration
+# Camera keys must match the names and resolutions of the ones used for training!
+# You can check the camera keys a model expects in the config.json of its Hub repository
+camera_config = {
+ "side": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
+ "up": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30),
+}
+
+robot_cfg = SO100FollowerConfig(port=follower_port, id=follower_id, cameras=camera_config)
+robot = SO100Follower(robot_cfg)
+robot.connect()
+
+for _ in range(MAX_EPISODES):
+ for _ in range(MAX_STEPS_PER_EPISODE):
+ obs = robot.get_observation()
+ obs_frame = build_inference_frame(
+ observation=obs, ds_features=dataset_metadata.features, device=device
+ )
+
+ obs = preprocess(obs_frame)
+
+ action = model.select_action(obs)
+ action = postprocess(action)
+
+ action = make_robot_action(action, dataset_metadata.features)
+
+ robot.send_action(action)
+
+ print("Episode finished! Starting new episode...")
diff --git a/examples/tutorial/async-inf/policy_server.py b/examples/tutorial/async-inf/policy_server.py
new file mode 100644
index 00000000..cd2b1f04
--- /dev/null
+++ b/examples/tutorial/async-inf/policy_server.py
@@ -0,0 +1,11 @@
+from lerobot.async_inference.configs import PolicyServerConfig
+from lerobot.async_inference.policy_server import serve
+
+host = ... # something like "127.0.0.1" if you're exposing to localhost
+port = ... # something like 8080
+
+config = PolicyServerConfig(
+ host=host,
+ port=port,
+)
+serve(config)
diff --git a/examples/tutorial/async-inf/robot_client.py b/examples/tutorial/async-inf/robot_client.py
new file mode 100644
index 00000000..3e46e5e3
--- /dev/null
+++ b/examples/tutorial/async-inf/robot_client.py
@@ -0,0 +1,55 @@
+import threading
+
+from lerobot.async_inference.configs import RobotClientConfig
+from lerobot.async_inference.helpers import visualize_action_queue_size
+from lerobot.async_inference.robot_client import RobotClient
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.robots.so100_follower import SO100FollowerConfig
+
+# these cameras must match the ones expected by the policy - find your cameras with lerobot-find-cameras
+# check the config.json on the Hub for the policy you are using to see the expected camera specs
+camera_cfg = {
+ "up": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
+ "side": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30),
+}
+
+# Find ports using lerobot-find-port
+follower_port = ... # something like "/dev/tty.usbmodem58760431631"
+
+# The robot ids are used to load the right calibration files
+follower_id = ... # something like "follower_so100"
+
+robot_cfg = SO100FollowerConfig(port=follower_port, id=follower_id, cameras=camera_cfg)
+
+server_address = ... # something like "127.0.0.1:8080" if using localhost
+
+# Create client configuration
+client_cfg = RobotClientConfig(
+ robot=robot_cfg,
+ server_address=server_address,
+ policy_device="mps",
+ policy_type="act",
+ pretrained_name_or_path="fracapuano/robot_learning_tutorial_act",
+ chunk_size_threshold=0.5, # request a new action chunk once the queue drops below this fraction of a chunk (the `g` parameter)
+ actions_per_chunk=50, # make sure this is not larger than the maximum number of actions the policy outputs
+)
+
+# Create and start client
+client = RobotClient(client_cfg)
+
+# Provide a textual description of the task
+task = ...
+
+if client.start():
+ # Start action receiver thread
+ action_receiver_thread = threading.Thread(target=client.receive_actions, daemon=True)
+ action_receiver_thread.start()
+
+ try:
+ # Run the control loop
+ client.control_loop(task)
+ except KeyboardInterrupt:
+ client.stop()
+ action_receiver_thread.join()
+ # (Optionally) plot the action queue size
+ visualize_action_queue_size(client.action_queue_size)
diff --git a/examples/tutorial/diffusion/diffusion_training_example.py b/examples/tutorial/diffusion/diffusion_training_example.py
new file mode 100644
index 00000000..4c3fac09
--- /dev/null
+++ b/examples/tutorial/diffusion/diffusion_training_example.py
@@ -0,0 +1,99 @@
+"""This script demonstrates how to train Diffusion Policy on a real-world dataset."""
+
+from pathlib import Path
+
+import torch
+
+from lerobot.configs.types import FeatureType
+from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
+from lerobot.datasets.utils import dataset_to_policy_features
+from lerobot.policies.diffusion.configuration_diffusion import DiffusionConfig
+from lerobot.policies.diffusion.modeling_diffusion import DiffusionPolicy
+from lerobot.policies.factory import make_pre_post_processors
+
+
+def make_delta_timestamps(delta_indices: list[int] | None, fps: int) -> list[float]:
+ if delta_indices is None:
+ return [0]
+
+ return [i / fps for i in delta_indices]
+
+
+output_directory = Path("outputs/robot_learning_tutorial/diffusion")
+output_directory.mkdir(parents=True, exist_ok=True)
+
+# Select your device
+device = torch.device("mps") # or "cuda" or "cpu"
+
+dataset_id = "lerobot/svla_so101_pickplace"
+
+# This specifies the inputs the model will be expecting and the outputs it will produce
+dataset_metadata = LeRobotDatasetMetadata(dataset_id)
+features = dataset_to_policy_features(dataset_metadata.features)
+
+output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}
+input_features = {key: ft for key, ft in features.items() if key not in output_features}
+
+cfg = DiffusionConfig(input_features=input_features, output_features=output_features)
+policy = DiffusionPolicy(cfg)
+preprocessor, postprocessor = make_pre_post_processors(cfg, dataset_stats=dataset_metadata.stats)
+
+policy.train()
+policy.to(device)
+
+# To perform action chunking, Diffusion Policy expects a short observation history and a chunk of future actions as targets
+delta_timestamps = {
+ "observation.state": make_delta_timestamps(cfg.observation_delta_indices, dataset_metadata.fps),
+ "action": make_delta_timestamps(cfg.action_delta_indices, dataset_metadata.fps),
+}
+
+# add image features if they are present
+delta_timestamps |= {
+ k: make_delta_timestamps(cfg.observation_delta_indices, dataset_metadata.fps) for k in cfg.image_features
+}
+
+# Instantiate the dataset
+dataset = LeRobotDataset(dataset_id, delta_timestamps=delta_timestamps)
+
+# Create the optimizer and dataloader for offline training
+optimizer = cfg.get_optimizer_preset().build(policy.parameters())
+batch_size = 32
+dataloader = torch.utils.data.DataLoader(
+ dataset,
+ batch_size=batch_size,
+ shuffle=True,
+ pin_memory=device.type != "cpu",
+ drop_last=True,
+)
+
+# Number of training steps and logging frequency (kept minimal here for a quick demo; increase for real training)
+training_steps = 1
+log_freq = 1
+
+# Run training loop
+step = 0
+done = False
+while not done:
+ for batch in dataloader:
+ batch = preprocessor(batch)
+ loss, _ = policy.forward(batch)
+ loss.backward()
+ optimizer.step()
+ optimizer.zero_grad()
+
+ if step % log_freq == 0:
+ print(f"step: {step} loss: {loss.item():.3f}")
+ step += 1
+ if step >= training_steps:
+ done = True
+ break
+
+# Save the policy checkpoint, alongside the pre/post processors
+policy.save_pretrained(output_directory)
+preprocessor.save_pretrained(output_directory)
+postprocessor.save_pretrained(output_directory)
+
+# Save all assets to the Hub
+policy.push_to_hub("fracapuano/robot_learning_tutorial_diffusion")
+preprocessor.push_to_hub("fracapuano/robot_learning_tutorial_diffusion")
+postprocessor.push_to_hub("fracapuano/robot_learning_tutorial_diffusion")
diff --git a/examples/tutorial/diffusion/diffusion_using_example.py b/examples/tutorial/diffusion/diffusion_using_example.py
new file mode 100644
index 00000000..fe516383
--- /dev/null
+++ b/examples/tutorial/diffusion/diffusion_using_example.py
@@ -0,0 +1,60 @@
+import torch
+
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.datasets.lerobot_dataset import LeRobotDatasetMetadata
+from lerobot.policies.diffusion.modeling_diffusion import DiffusionPolicy
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.policies.utils import build_inference_frame, make_robot_action
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+
+device = torch.device("mps") # or "cuda" or "cpu"
+model_id = "fracapuano/robot_learning_tutorial_diffusion"
+
+model = DiffusionPolicy.from_pretrained(model_id)
+
+dataset_id = "lerobot/svla_so101_pickplace"
+# This only downloads the dataset metadata (typically tens of MB, even for large-scale datasets)
+dataset_metadata = LeRobotDatasetMetadata(dataset_id)
+preprocess, postprocess = make_pre_post_processors(
+ model.config, model_id, dataset_stats=dataset_metadata.stats
+)
+
+MAX_EPISODES = 5
+MAX_STEPS_PER_EPISODE = 20
+
+
+# find ports using lerobot-find-port
+follower_port = ... # something like "/dev/tty.usbmodem58760431631"
+
+# the robot ids are used to load the right calibration files
+follower_id = ... # something like "follower_so100"
+
+# Robot and environment configuration
+# Camera keys must match the names and resolutions of the cameras used for training!
+# You can check the camera keys expected by a model in its config.json on the Hub
+camera_config = {
+ "side": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
+ "up": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30),
+}
+
+robot_cfg = SO100FollowerConfig(port=follower_port, id=follower_id, cameras=camera_config)
+robot = SO100Follower(robot_cfg)
+robot.connect()
+
+
+for _ in range(MAX_EPISODES):
+ for _ in range(MAX_STEPS_PER_EPISODE):
+ obs = robot.get_observation()
+ obs_frame = build_inference_frame(
+ observation=obs, ds_features=dataset_metadata.features, device=device
+ )
+
+ obs = preprocess(obs_frame)
+
+ action = model.select_action(obs)
+ action = postprocess(action)
+ action = make_robot_action(action, dataset_metadata.features)
+ robot.send_action(action)
+
+ print("Episode finished! Starting new episode...")
diff --git a/examples/tutorial/pi0/using_pi0_example.py b/examples/tutorial/pi0/using_pi0_example.py
new file mode 100644
index 00000000..98fc4b4b
--- /dev/null
+++ b/examples/tutorial/pi0/using_pi0_example.py
@@ -0,0 +1,67 @@
+import torch
+
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.datasets.utils import hw_to_dataset_features
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.policies.pi0.modeling_pi0 import PI0Policy
+from lerobot.policies.utils import build_inference_frame, make_robot_action
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+
+MAX_EPISODES = 5
+MAX_STEPS_PER_EPISODE = 20
+
+device = torch.device("mps") # or "cuda" or "cpu"
+model_id = "lerobot/pi0_base"
+
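+# pi0 is a vision-language-action policy: it conditions on camera images, the robot state and a natural-language task description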
+model = PI0Policy.from_pretrained(model_id)
+
+preprocess, postprocess = make_pre_post_processors(
+ model.config,
+ model_id,
+    # This override allows running on MPS; otherwise the processor defaults to CUDA (if available)
+ preprocessor_overrides={"device_processor": {"device": str(device)}},
+)
+
+# find ports using lerobot-find-port
+follower_port = ... # something like "/dev/tty.usbmodem58760431631"
+
+# the robot ids are used to load the right calibration files
+follower_id = ... # something like "follower_so100"
+
+# Robot and environment configuration
+# Camera keys must match the names and resolutions of the cameras used for training!
+# You can check the camera keys expected by a model in its config.json on the Hub
+camera_config = {
+ "base_0_rgb": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
+ "left_wrist_0_rgb": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30),
+ "right_wrist_0_rgb": OpenCVCameraConfig(index_or_path=2, width=640, height=480, fps=30),
+}
+
+robot_cfg = SO100FollowerConfig(port=follower_port, id=follower_id, cameras=camera_config)
+robot = SO100Follower(robot_cfg)
+robot.connect()
+
+task = "" # something like "pick the red block"
+robot_type = "" # something like "so100_follower" for multi-embodiment datasets
+
+# This is used to match the raw observation keys to the keys expected by the policy
+action_features = hw_to_dataset_features(robot.action_features, "action")
+obs_features = hw_to_dataset_features(robot.observation_features, "observation")
+dataset_features = {**action_features, **obs_features}
+
+for _ in range(MAX_EPISODES):
+ for _ in range(MAX_STEPS_PER_EPISODE):
+ obs = robot.get_observation()
+ obs_frame = build_inference_frame(
+ observation=obs, ds_features=dataset_features, device=device, task=task, robot_type=robot_type
+ )
+
+ obs = preprocess(obs_frame)
+
+ action = model.select_action(obs)
+ action = postprocess(action)
+ action = make_robot_action(action, dataset_features)
+ robot.send_action(action)
+
+ print("Episode finished! Starting new episode...")
diff --git a/examples/tutorial/rl/hilserl_example.py b/examples/tutorial/rl/hilserl_example.py
new file mode 100644
index 00000000..865053d3
--- /dev/null
+++ b/examples/tutorial/rl/hilserl_example.py
@@ -0,0 +1,345 @@
+import multiprocessing as mp
+import signal
+from pathlib import Path
+from queue import Empty, Full
+
+import torch
+import torch.optim as optim
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.datasets.utils import hw_to_dataset_features
+from lerobot.envs.configs import HILSerlProcessorConfig, HILSerlRobotEnvConfig
+from lerobot.policies.sac.configuration_sac import SACConfig
+from lerobot.policies.sac.modeling_sac import SACPolicy
+from lerobot.policies.sac.reward_model.modeling_classifier import Classifier
+from lerobot.rl.buffer import ReplayBuffer
+from lerobot.rl.gym_manipulator import make_robot_env
+from lerobot.robots.so100_follower import SO100FollowerConfig
+from lerobot.teleoperators.so100_leader import SO100LeaderConfig
+from lerobot.teleoperators.utils import TeleopEvents
+
+LOG_EVERY = 10
+SEND_EVERY = 10
+
+
+def run_learner(
+ transitions_queue: mp.Queue,
+ parameters_queue: mp.Queue,
+ shutdown_event: mp.Event,
+ policy_learner: SACPolicy,
+ online_buffer: ReplayBuffer,
+ offline_buffer: ReplayBuffer,
+ lr: float = 3e-4,
+ batch_size: int = 32,
+    device: torch.device | str = "mps",
+):
+    """The learner process: trains the SAC policy on transitions streamed from the actor and periodically
+    publishes updated parameters for the actor to adopt."""
+ policy_learner.train()
+ policy_learner.to(device)
+
+    # Plain Adam optimizer over all learner parameters
+ optimizer = optim.Adam(policy_learner.parameters(), lr=lr)
+
+ print(f"[LEARNER] Online buffer capacity: {online_buffer.capacity}")
+ print(f"[LEARNER] Offline buffer capacity: {offline_buffer.capacity}")
+
+ training_step = 0
+
+ while not shutdown_event.is_set():
+ # retrieve incoming transitions from the actor process
+ try:
+ transitions = transitions_queue.get(timeout=0.1)
+ for transition in transitions:
+ # HIL-SERL: Add ALL transitions to online buffer
+ online_buffer.add(**transition)
+
+ # HIL-SERL: Add ONLY human intervention transitions to offline buffer
+ is_intervention = transition.get("complementary_info", {}).get("is_intervention", False)
+ if is_intervention:
+ offline_buffer.add(**transition)
+ print(
+ f"[LEARNER] Human intervention detected! Added to offline buffer (now {len(offline_buffer)} transitions)"
+ )
+
+ except Empty:
+ pass # No transitions available, continue
+
+ # Train if we have enough data
+ if len(online_buffer) >= policy_learner.config.online_step_before_learning:
+ # Sample from online buffer (autonomous + human data)
+ online_batch = online_buffer.sample(batch_size // 2)
+
+ # Sample from offline buffer (human demonstrations only, either precollected or at runtime)
+ offline_batch = offline_buffer.sample(batch_size // 2)
+
+ # Combine batches - this is the key HIL-SERL mechanism!
+ batch = {}
+ for key in online_batch:
+ if key in offline_batch:
+ batch[key] = torch.cat([online_batch[key], offline_batch[key]], dim=0)
+ else:
+ batch[key] = online_batch[key]
+
+ loss, _ = policy_learner.forward(batch)
+
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+ training_step += 1
+
+ if training_step % LOG_EVERY == 0:
+ print(
+ f"[LEARNER] Training step {training_step}, Loss: {loss.item():.4f}, "
+ f"Buffers: Online={len(online_buffer)}, Offline={len(offline_buffer)}"
+ )
+
+ # Send updated parameters to actor every 10 training steps
+ if training_step % SEND_EVERY == 0:
+ try:
+ state_dict = {k: v.cpu() for k, v in policy_learner.state_dict().items()}
+ parameters_queue.put_nowait(state_dict)
+ print("[LEARNER] Sent updated parameters to actor")
+ except Full:
+                # The actor has not consumed the previous update yet (should happen rarely); skip this one
+ pass
+
+ print("[LEARNER] Learner process finished")
+
+
+def run_actor(
+ transitions_queue: mp.Queue,
+ parameters_queue: mp.Queue,
+ shutdown_event: mp.Event,
+ policy_actor: SACPolicy,
+ reward_classifier: Classifier,
+ env_cfg: HILSerlRobotEnvConfig,
+    device: torch.device | str = "mps",
+ output_directory: Path | None = None,
+):
+    """The actor process: interacts with the environment and collects data.
+    The policy is never trained here; its parameters are refreshed with the most recent ones popped from a queue fed by the learner."""
+ policy_actor.eval()
+ policy_actor.to(device)
+
+ reward_classifier.eval()
+ reward_classifier.to(device)
+
+ # Create robot environment inside the actor process
+ env, teleop_device = make_robot_env(env_cfg)
+
+ try:
+ for episode in range(MAX_EPISODES):
+ if shutdown_event.is_set():
+ break
+
+ obs, _info = env.reset()
+ episode_reward = 0.0
+ step = 0
+ episode_transitions = []
+
+ print(f"[ACTOR] Starting episode {episode + 1}")
+
+ while step < MAX_STEPS_PER_EPISODE and not shutdown_event.is_set():
+ try:
+ new_params = parameters_queue.get_nowait()
+ policy_actor.load_state_dict(new_params)
+ print("[ACTOR] Updated policy parameters from learner")
+ except Empty: # No new updated parameters available from learner, waiting
+ pass
+
+ # Get action from policy
+ policy_obs = make_policy_obs(obs, device=device)
+ action_tensor = policy_actor.select_action(policy_obs) # predicts a single action
+ action = action_tensor.squeeze(0).cpu().numpy()
+
+ # Step environment
+ next_obs, _env_reward, terminated, truncated, _info = env.step(action)
+ done = terminated or truncated
+
+ # Predict reward
+ policy_next_obs = make_policy_obs(next_obs, device=device)
+ reward = reward_classifier.predict_reward(policy_next_obs)
+
+ if reward >= 1.0 and not done: # success detected! halt episode
+ terminated = True
+ done = True
+
+ # In HIL-SERL, human interventions come from the teleop device
+ is_intervention = False
+ if hasattr(teleop_device, "get_teleop_events"):
+ # Real intervention detection from teleop device
+ teleop_events = teleop_device.get_teleop_events()
+ is_intervention = teleop_events.get(TeleopEvents.IS_INTERVENTION, False)
+
+ # Store transition with intervention metadata
+ transition = {
+ "state": policy_obs,
+ "action": action,
+ "reward": float(reward) if hasattr(reward, "item") else reward,
+ "next_state": policy_next_obs,
+ "done": done,
+ "truncated": truncated,
+ "complementary_info": {
+ "is_intervention": is_intervention,
+ },
+ }
+
+ episode_transitions.append(transition)
+
+ episode_reward += reward
+ step += 1
+
+ obs = next_obs
+
+ if done:
+ break
+
+ # Send episode transitions to learner
+ transitions_queue.put_nowait(episode_transitions)
+
+ except KeyboardInterrupt:
+ print("[ACTOR] Interrupted by user")
+ finally:
+ # Clean up
+ if hasattr(env, "robot") and env.robot.is_connected:
+ env.robot.disconnect()
+ if teleop_device and hasattr(teleop_device, "disconnect"):
+ teleop_device.disconnect()
+ if output_directory is not None:
+ policy_actor.save_pretrained(output_directory)
+ print(f"[ACTOR] Latest actor policy saved at: {output_directory}")
+
+ print("[ACTOR] Actor process finished")
+
+
+def make_policy_obs(obs, device: torch.device = "cpu"):
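+    """Map a gym-style observation (agent_pos + per-camera pixels) to the batched tensors expected by the policy."""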
+ return {
+ "observation.state": torch.from_numpy(obs["agent_pos"]).float().unsqueeze(0).to(device),
+ **{
+ f"observation.image.{k}": torch.from_numpy(obs["pixels"][k]).float().unsqueeze(0).to(device)
+ for k in obs["pixels"]
+ },
+ }
+
+
+"""Main function - coordinates actor and learner processes."""
+
+device = "mps" # or "cuda" or "cpu"
+output_directory = Path("outputs/robot_learning_tutorial/hil_serl")
+output_directory.mkdir(parents=True, exist_ok=True)
+
+# find ports using lerobot-find-port
+follower_port = ...
+leader_port = ...
+
+# the robot ids are used to load the right calibration files
+follower_id = ...
+leader_id = ...
+
+# A pretrained reward classifier (use it in-distribution, i.e. on the task/scene it was trained for!)
+reward_classifier_id = "fracapuano/reward_classifier_hil_serl_example"
+reward_classifier = Classifier.from_pretrained(reward_classifier_id)
+
+reward_classifier.to(device)
+reward_classifier.eval()
+
+MAX_EPISODES = 5
+MAX_STEPS_PER_EPISODE = 20
+
+# Robot and environment configuration
+robot_cfg = SO100FollowerConfig(port=follower_port, id=follower_id)
+teleop_cfg = SO100LeaderConfig(port=leader_port, id=leader_id)
+processor_cfg = HILSerlProcessorConfig(control_mode="leader")
+
+env_cfg = HILSerlRobotEnvConfig(robot=robot_cfg, teleop=teleop_cfg, processor=processor_cfg)
+
+# Create a robot environment in the main process, used here only to read the robot's feature specs
+# (the actor process creates its own environment when it starts)
+env, teleop_device = make_robot_env(env_cfg)
+
+obs_features = hw_to_dataset_features(env.robot.observation_features, "observation")
+action_features = hw_to_dataset_features(env.robot.action_features, "action")
+
+# Create SAC policy for action selection
+policy_cfg = SACConfig(
+ device=device,
+ input_features=obs_features,
+ output_features=action_features,
+)
+
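+# Two copies of the same SAC policy: the learner trains one, while the actor runs frozen inference
+# with weights periodically refreshed from the learner through the parameters queue.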
+policy_actor = SACPolicy(policy_cfg)
+policy_learner = SACPolicy(policy_cfg)
+
+demonstrations_repo_id = "lerobot/example_hil_serl_dataset"
+offline_dataset = LeRobotDataset(repo_id=demonstrations_repo_id)
+
+# Online buffer: initialized from scratch
+online_replay_buffer = ReplayBuffer(device=device, state_keys=list(obs_features.keys()))
+# Offline buffer: created from the dataset (pre-populated with demonstrations)
+offline_replay_buffer = ReplayBuffer.from_lerobot_dataset(
+ lerobot_dataset=offline_dataset, device=device, state_keys=list(obs_features.keys())
+)
+
+# Create communication channels between learner and actor processes
+transitions_queue = mp.Queue(maxsize=10)
+parameters_queue = mp.Queue(maxsize=2)
+shutdown_event = mp.Event()
+
+
+# Signal handler for graceful shutdown
+def signal_handler(sig, frame):
+ print(f"\nSignal {sig} received, shutting down...")
+ shutdown_event.set()
+
+
+signal.signal(signal.SIGINT, signal_handler)
+signal.signal(signal.SIGTERM, signal_handler)
+
+# Create processes
+learner_process = mp.Process(
+ target=run_learner,
+ args=(
+ transitions_queue,
+ parameters_queue,
+ shutdown_event,
+ policy_learner,
+ online_replay_buffer,
+ offline_replay_buffer,
+ ),
+ kwargs={"device": device}, # can run on accelerated hardware for training
+)
+
+actor_process = mp.Process(
+ target=run_actor,
+ args=(
+ transitions_queue,
+ parameters_queue,
+ shutdown_event,
+ policy_actor,
+ reward_classifier,
+ env_cfg,
+ output_directory,
+    ),
+    kwargs={"device": "cpu", "output_directory": output_directory},  # actor is frozen, inference can run on CPU (or an accelerator)
+
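+# Start both processes: the actor drives the episode loop while the learner trains in the background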
+learner_process.start()
+actor_process.start()
+
+try:
+ # Wait for actor to finish (it controls the episode loop)
+ actor_process.join()
+ shutdown_event.set()
+ learner_process.join(timeout=10)
+
+except KeyboardInterrupt:
+ print("Main process interrupted")
+ shutdown_event.set()
+ actor_process.join(timeout=5)
+ learner_process.join(timeout=10)
+
+finally:
+ if learner_process.is_alive():
+ learner_process.terminate()
+ if actor_process.is_alive():
+ actor_process.terminate()
diff --git a/examples/tutorial/rl/reward_classifier_example.py b/examples/tutorial/rl/reward_classifier_example.py
new file mode 100644
index 00000000..a3d852e3
--- /dev/null
+++ b/examples/tutorial/rl/reward_classifier_example.py
@@ -0,0 +1,62 @@
+import torch
+
+from lerobot.datasets.lerobot_dataset import LeRobotDataset
+from lerobot.policies.factory import make_policy, make_pre_post_processors
+from lerobot.policies.sac.reward_model.configuration_classifier import RewardClassifierConfig
+
+# Device to use for training
+device = "mps" # or "cuda", or "cpu"
+
+# Load the dataset used for training
+repo_id = "lerobot/example_hil_serl_dataset"
+dataset = LeRobotDataset(repo_id)
+
+# Configure the policy to extract features from the image frames
+camera_keys = dataset.meta.camera_keys
+
+config = RewardClassifierConfig(
+ num_cameras=len(camera_keys),
+ device=device,
+ # backbone model to extract features from the image frames
+ model_name="microsoft/resnet-18",
+)
+
+# Make policy, preprocessor, and optimizer
+policy = make_policy(config, ds_meta=dataset.meta)
+optimizer = config.get_optimizer_preset().build(policy.parameters())
+preprocessor, _ = make_pre_post_processors(policy_cfg=config, dataset_stats=dataset.meta.stats)
+
+
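+# Hub repo id where the trained reward classifier will be pushed at the end of training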
+classifier_id = "fracapuano/reward_classifier_hil_serl_example"
+
+# Instantiate a dataloader
+dataloader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
+
+# Training loop
+num_epochs = 5
+for epoch in range(num_epochs):
+ total_loss = 0
+ total_accuracy = 0
+ for batch in dataloader:
+ # Preprocess the batch and move it to the correct device.
+ batch = preprocessor(batch)
+
+ # Forward pass
+ loss, output_dict = policy.forward(batch)
+
+ # Backward pass and optimization
+ optimizer.zero_grad()
+ loss.backward()
+ optimizer.step()
+
+ total_loss += loss.item()
+ total_accuracy += output_dict["accuracy"]
+
+ avg_loss = total_loss / len(dataloader)
+ avg_accuracy = total_accuracy / len(dataloader)
+ print(f"Epoch {epoch + 1}/{num_epochs}, Loss: {avg_loss:.4f}, Accuracy: {avg_accuracy:.2f}%")
+
+print("Training finished!")
+
+# You can now push the trained reward classifier to the Hub.
+policy.push_to_hub(classifier_id)
diff --git a/examples/tutorial/smolvla/using_smolvla_example.py b/examples/tutorial/smolvla/using_smolvla_example.py
new file mode 100644
index 00000000..04c32783
--- /dev/null
+++ b/examples/tutorial/smolvla/using_smolvla_example.py
@@ -0,0 +1,66 @@
+import torch
+
+from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+from lerobot.datasets.utils import hw_to_dataset_features
+from lerobot.policies.factory import make_pre_post_processors
+from lerobot.policies.smolvla.modeling_smolvla import SmolVLAPolicy
+from lerobot.policies.utils import build_inference_frame, make_robot_action
+from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
+from lerobot.robots.so100_follower.so100_follower import SO100Follower
+
+MAX_EPISODES = 5
+MAX_STEPS_PER_EPISODE = 20
+
+device = torch.device("mps") # or "cuda" or "cpu"
+model_id = "lerobot/smolvla_base"
+
+model = SmolVLAPolicy.from_pretrained(model_id)
+
+preprocess, postprocess = make_pre_post_processors(
+ model.config,
+ model_id,
+    # This override allows running on MPS; otherwise the processor defaults to CUDA (if available)
+ preprocessor_overrides={"device_processor": {"device": str(device)}},
+)
+
+# find ports using lerobot-find-port
+follower_port = ... # something like "/dev/tty.usbmodem58760431631"
+
+# the robot ids are used to load the right calibration files
+follower_id = ... # something like "follower_so100"
+
+# Robot and environment configuration
+# Camera keys must match the names and resolutions of the cameras used for training!
+# You can check the camera keys expected by a model in its config.json on the Hub
+camera_config = {
+ "camera1": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
+ "camera2": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30),
+}
+
+robot_cfg = SO100FollowerConfig(port=follower_port, id=follower_id, cameras=camera_config)
+robot = SO100Follower(robot_cfg)
+robot.connect()
+
+task = "" # something like "pick the red block"
+robot_type = "" # something like "so100_follower" for multi-embodiment datasets
+
+# This is used to match the raw observation keys to the keys expected by the policy
+action_features = hw_to_dataset_features(robot.action_features, "action")
+obs_features = hw_to_dataset_features(robot.observation_features, "observation")
+dataset_features = {**action_features, **obs_features}
+
+for _ in range(MAX_EPISODES):
+ for _ in range(MAX_STEPS_PER_EPISODE):
+ obs = robot.get_observation()
+ obs_frame = build_inference_frame(
+ observation=obs, ds_features=dataset_features, device=device, task=task, robot_type=robot_type
+ )
+
+ obs = preprocess(obs_frame)
+
+ action = model.select_action(obs)
+ action = postprocess(action)
+ action = make_robot_action(action, dataset_features)
+ robot.send_action(action)
+
+ print("Episode finished! Starting new episode...")
diff --git a/lerobot/common/datasets/compute_stats.py b/lerobot/common/datasets/compute_stats.py
deleted file mode 100644
index 1149ec83..00000000
--- a/lerobot/common/datasets/compute_stats.py
+++ /dev/null
@@ -1,176 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import numpy as np
-
-from lerobot.common.datasets.utils import load_image_as_numpy
-
-
-def estimate_num_samples(
- dataset_len: int, min_num_samples: int = 100, max_num_samples: int = 10_000, power: float = 0.75
-) -> int:
- """Heuristic to estimate the number of samples based on dataset size.
- The power controls the sample growth relative to dataset size.
- Lower the power for less number of samples.
-
- For default arguments, we have:
- - from 1 to ~500, num_samples=100
- - at 1000, num_samples=177
- - at 2000, num_samples=299
- - at 5000, num_samples=594
- - at 10000, num_samples=1000
- - at 20000, num_samples=1681
- """
- if dataset_len < min_num_samples:
- min_num_samples = dataset_len
- return max(min_num_samples, min(int(dataset_len**power), max_num_samples))
-
-
-def sample_indices(data_len: int) -> list[int]:
- num_samples = estimate_num_samples(data_len)
- return np.round(np.linspace(0, data_len - 1, num_samples)).astype(int).tolist()
-
-
-def auto_downsample_height_width(img: np.ndarray, target_size: int = 150, max_size_threshold: int = 300):
- _, height, width = img.shape
-
- if max(width, height) < max_size_threshold:
- # no downsampling needed
- return img
-
- downsample_factor = int(width / target_size) if width > height else int(height / target_size)
- return img[:, ::downsample_factor, ::downsample_factor]
-
-
-def sample_images(image_paths: list[str]) -> np.ndarray:
- sampled_indices = sample_indices(len(image_paths))
-
- images = None
- for i, idx in enumerate(sampled_indices):
- path = image_paths[idx]
- # we load as uint8 to reduce memory usage
- img = load_image_as_numpy(path, dtype=np.uint8, channel_first=True)
- img = auto_downsample_height_width(img)
-
- if images is None:
- images = np.empty((len(sampled_indices), *img.shape), dtype=np.uint8)
-
- images[i] = img
-
- return images
-
-
-def get_feature_stats(array: np.ndarray, axis: tuple, keepdims: bool) -> dict[str, np.ndarray]:
- return {
- "min": np.min(array, axis=axis, keepdims=keepdims),
- "max": np.max(array, axis=axis, keepdims=keepdims),
- "mean": np.mean(array, axis=axis, keepdims=keepdims),
- "std": np.std(array, axis=axis, keepdims=keepdims),
- "count": np.array([len(array)]),
- }
-
-
-def compute_episode_stats(episode_data: dict[str, list[str] | np.ndarray], features: dict) -> dict:
- ep_stats = {}
- for key, data in episode_data.items():
- if features[key]["dtype"] == "string":
- continue # HACK: we should receive np.arrays of strings
- elif features[key]["dtype"] in ["image", "video"]:
- ep_ft_array = sample_images(data) # data is a list of image paths
- axes_to_reduce = (0, 2, 3) # keep channel dim
- keepdims = True
- else:
- ep_ft_array = data # data is already a np.ndarray
- axes_to_reduce = 0 # compute stats over the first axis
- keepdims = data.ndim == 1 # keep as np.array
-
- ep_stats[key] = get_feature_stats(ep_ft_array, axis=axes_to_reduce, keepdims=keepdims)
-
- # finally, we normalize and remove batch dim for images
- if features[key]["dtype"] in ["image", "video"]:
- ep_stats[key] = {
- k: v if k == "count" else np.squeeze(v / 255.0, axis=0) for k, v in ep_stats[key].items()
- }
-
- return ep_stats
-
-
-def _assert_type_and_shape(stats_list: list[dict[str, dict]]):
- for i in range(len(stats_list)):
- for fkey in stats_list[i]:
- for k, v in stats_list[i][fkey].items():
- if not isinstance(v, np.ndarray):
- raise ValueError(
- f"Stats must be composed of numpy array, but key '{k}' of feature '{fkey}' is of type '{type(v)}' instead."
- )
- if v.ndim == 0:
- raise ValueError("Number of dimensions must be at least 1, and is 0 instead.")
- if k == "count" and v.shape != (1,):
- raise ValueError(f"Shape of 'count' must be (1), but is {v.shape} instead.")
- if "image" in fkey and k != "count" and v.shape != (3, 1, 1):
- raise ValueError(f"Shape of '{k}' must be (3,1,1), but is {v.shape} instead.")
-
-
-def aggregate_feature_stats(stats_ft_list: list[dict[str, dict]]) -> dict[str, dict[str, np.ndarray]]:
- """Aggregates stats for a single feature."""
- means = np.stack([s["mean"] for s in stats_ft_list])
- variances = np.stack([s["std"] ** 2 for s in stats_ft_list])
- counts = np.stack([s["count"] for s in stats_ft_list])
- total_count = counts.sum(axis=0)
-
- # Prepare weighted mean by matching number of dimensions
- while counts.ndim < means.ndim:
- counts = np.expand_dims(counts, axis=-1)
-
- # Compute the weighted mean
- weighted_means = means * counts
- total_mean = weighted_means.sum(axis=0) / total_count
-
- # Compute the variance using the parallel algorithm
- delta_means = means - total_mean
- weighted_variances = (variances + delta_means**2) * counts
- total_variance = weighted_variances.sum(axis=0) / total_count
-
- return {
- "min": np.min(np.stack([s["min"] for s in stats_ft_list]), axis=0),
- "max": np.max(np.stack([s["max"] for s in stats_ft_list]), axis=0),
- "mean": total_mean,
- "std": np.sqrt(total_variance),
- "count": total_count,
- }
-
-
-def aggregate_stats(stats_list: list[dict[str, dict]]) -> dict[str, dict[str, np.ndarray]]:
- """Aggregate stats from multiple compute_stats outputs into a single set of stats.
-
- The final stats will have the union of all data keys from each of the stats dicts.
-
- For instance:
- - new_min = min(min_dataset_0, min_dataset_1, ...)
- - new_max = max(max_dataset_0, max_dataset_1, ...)
- - new_mean = (mean of all data, weighted by counts)
- - new_std = (std of all data)
- """
-
- _assert_type_and_shape(stats_list)
-
- data_keys = {key for stats in stats_list for key in stats}
- aggregated_stats = {key: {} for key in data_keys}
-
- for key in data_keys:
- stats_with_key = [stats[key] for stats in stats_list if key in stats]
- aggregated_stats[key] = aggregate_feature_stats(stats_with_key)
-
- return aggregated_stats
diff --git a/lerobot/common/datasets/utils.py b/lerobot/common/datasets/utils.py
deleted file mode 100644
index 9d8a54db..00000000
--- a/lerobot/common/datasets/utils.py
+++ /dev/null
@@ -1,813 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import contextlib
-import importlib.resources
-import json
-import logging
-from collections.abc import Iterator
-from itertools import accumulate
-from pathlib import Path
-from pprint import pformat
-from types import SimpleNamespace
-from typing import Any
-
-import datasets
-import jsonlines
-import numpy as np
-import packaging.version
-import torch
-from datasets.table import embed_table_storage
-from huggingface_hub import DatasetCard, DatasetCardData, HfApi
-from huggingface_hub.errors import RevisionNotFoundError
-from PIL import Image as PILImage
-from torchvision import transforms
-
-from lerobot.common.datasets.backward_compatibility import (
- V21_MESSAGE,
- BackwardCompatibilityError,
- ForwardCompatibilityError,
-)
-from lerobot.common.robot_devices.robots.utils import Robot
-from lerobot.common.utils.utils import is_valid_numpy_dtype_string
-from lerobot.configs.types import DictLike, FeatureType, PolicyFeature
-
-DEFAULT_CHUNK_SIZE = 1000 # Max number of episodes per chunk
-
-INFO_PATH = "meta/info.json"
-EPISODES_PATH = "meta/episodes.jsonl"
-STATS_PATH = "meta/stats.json"
-EPISODES_STATS_PATH = "meta/episodes_stats.jsonl"
-TASKS_PATH = "meta/tasks.jsonl"
-
-DEFAULT_VIDEO_PATH = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
-DEFAULT_PARQUET_PATH = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
-DEFAULT_IMAGE_PATH = "images/{image_key}/episode_{episode_index:06d}/frame_{frame_index:06d}.png"
-
-DATASET_CARD_TEMPLATE = """
----
-# Metadata will go there
----
-This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
-
-## {}
-
-"""
-
-DEFAULT_FEATURES = {
- "timestamp": {"dtype": "float32", "shape": (1,), "names": None},
- "frame_index": {"dtype": "int64", "shape": (1,), "names": None},
- "episode_index": {"dtype": "int64", "shape": (1,), "names": None},
- "index": {"dtype": "int64", "shape": (1,), "names": None},
- "task_index": {"dtype": "int64", "shape": (1,), "names": None},
-}
-
-
-def flatten_dict(d: dict, parent_key: str = "", sep: str = "/") -> dict:
- """Flatten a nested dictionary structure by collapsing nested keys into one key with a separator.
-
- For example:
- ```
- >>> dct = {"a": {"b": 1, "c": {"d": 2}}, "e": 3}`
- >>> print(flatten_dict(dct))
- {"a/b": 1, "a/c/d": 2, "e": 3}
- """
- items = []
- for k, v in d.items():
- new_key = f"{parent_key}{sep}{k}" if parent_key else k
- if isinstance(v, dict):
- items.extend(flatten_dict(v, new_key, sep=sep).items())
- else:
- items.append((new_key, v))
- return dict(items)
-
-
-def unflatten_dict(d: dict, sep: str = "/") -> dict:
- outdict = {}
- for key, value in d.items():
- parts = key.split(sep)
- d = outdict
- for part in parts[:-1]:
- if part not in d:
- d[part] = {}
- d = d[part]
- d[parts[-1]] = value
- return outdict
-
-
-def get_nested_item(obj: DictLike, flattened_key: str, sep: str = "/") -> Any:
- split_keys = flattened_key.split(sep)
- getter = obj[split_keys[0]]
- if len(split_keys) == 1:
- return getter
-
- for key in split_keys[1:]:
- getter = getter[key]
-
- return getter
-
-
-def serialize_dict(stats: dict[str, torch.Tensor | np.ndarray | dict]) -> dict:
- serialized_dict = {}
- for key, value in flatten_dict(stats).items():
- if isinstance(value, (torch.Tensor, np.ndarray)):
- serialized_dict[key] = value.tolist()
- elif isinstance(value, np.generic):
- serialized_dict[key] = value.item()
- elif isinstance(value, (int, float)):
- serialized_dict[key] = value
- else:
- raise NotImplementedError(f"The value '{value}' of type '{type(value)}' is not supported.")
- return unflatten_dict(serialized_dict)
-
-
-def embed_images(dataset: datasets.Dataset) -> datasets.Dataset:
- # Embed image bytes into the table before saving to parquet
- format = dataset.format
- dataset = dataset.with_format("arrow")
- dataset = dataset.map(embed_table_storage, batched=False)
- dataset = dataset.with_format(**format)
- return dataset
-
-
-def load_json(fpath: Path) -> Any:
- with open(fpath) as f:
- return json.load(f)
-
-
-def write_json(data: dict, fpath: Path) -> None:
- fpath.parent.mkdir(exist_ok=True, parents=True)
- with open(fpath, "w") as f:
- json.dump(data, f, indent=4, ensure_ascii=False)
-
-
-def load_jsonlines(fpath: Path) -> list[Any]:
- with jsonlines.open(fpath, "r") as reader:
- return list(reader)
-
-
-def write_jsonlines(data: dict, fpath: Path) -> None:
- fpath.parent.mkdir(exist_ok=True, parents=True)
- with jsonlines.open(fpath, "w") as writer:
- writer.write_all(data)
-
-
-def append_jsonlines(data: dict, fpath: Path) -> None:
- fpath.parent.mkdir(exist_ok=True, parents=True)
- with jsonlines.open(fpath, "a") as writer:
- writer.write(data)
-
-
-def write_info(info: dict, local_dir: Path):
- write_json(info, local_dir / INFO_PATH)
-
-
-def load_info(local_dir: Path) -> dict:
- info = load_json(local_dir / INFO_PATH)
- for ft in info["features"].values():
- ft["shape"] = tuple(ft["shape"])
- return info
-
-
-def write_stats(stats: dict, local_dir: Path):
- serialized_stats = serialize_dict(stats)
- write_json(serialized_stats, local_dir / STATS_PATH)
-
-
-def cast_stats_to_numpy(stats) -> dict[str, dict[str, np.ndarray]]:
- stats = {key: np.array(value) for key, value in flatten_dict(stats).items()}
- return unflatten_dict(stats)
-
-
-def load_stats(local_dir: Path) -> dict[str, dict[str, np.ndarray]]:
- if not (local_dir / STATS_PATH).exists():
- return None
- stats = load_json(local_dir / STATS_PATH)
- return cast_stats_to_numpy(stats)
-
-
-def write_task(task_index: int, task: dict, local_dir: Path):
- task_dict = {
- "task_index": task_index,
- "task": task,
- }
- append_jsonlines(task_dict, local_dir / TASKS_PATH)
-
-
-def load_tasks(local_dir: Path) -> tuple[dict, dict]:
- tasks = load_jsonlines(local_dir / TASKS_PATH)
- tasks = {item["task_index"]: item["task"] for item in sorted(tasks, key=lambda x: x["task_index"])}
- task_to_task_index = {task: task_index for task_index, task in tasks.items()}
- return tasks, task_to_task_index
-
-
-def write_episode(episode: dict, local_dir: Path):
- append_jsonlines(episode, local_dir / EPISODES_PATH)
-
-
-def load_episodes(local_dir: Path) -> dict:
- episodes = load_jsonlines(local_dir / EPISODES_PATH)
- return {item["episode_index"]: item for item in sorted(episodes, key=lambda x: x["episode_index"])}
-
-
-def write_episode_stats(episode_index: int, episode_stats: dict, local_dir: Path):
- # We wrap episode_stats in a dictionary since `episode_stats["episode_index"]`
- # is a dictionary of stats and not an integer.
- episode_stats = {"episode_index": episode_index, "stats": serialize_dict(episode_stats)}
- append_jsonlines(episode_stats, local_dir / EPISODES_STATS_PATH)
-
-
-def load_episodes_stats(local_dir: Path) -> dict:
- episodes_stats = load_jsonlines(local_dir / EPISODES_STATS_PATH)
- return {
- item["episode_index"]: cast_stats_to_numpy(item["stats"])
- for item in sorted(episodes_stats, key=lambda x: x["episode_index"])
- }
-
-
-def backward_compatible_episodes_stats(
- stats: dict[str, dict[str, np.ndarray]], episodes: list[int]
-) -> dict[str, dict[str, np.ndarray]]:
- return dict.fromkeys(episodes, stats)
-
-
-def load_image_as_numpy(
- fpath: str | Path, dtype: np.dtype = np.float32, channel_first: bool = True
-) -> np.ndarray:
- img = PILImage.open(fpath).convert("RGB")
- img_array = np.array(img, dtype=dtype)
- if channel_first: # (H, W, C) -> (C, H, W)
- img_array = np.transpose(img_array, (2, 0, 1))
- if np.issubdtype(dtype, np.floating):
- img_array /= 255.0
- return img_array
-
-
-def hf_transform_to_torch(items_dict: dict[torch.Tensor | None]):
- """Get a transform function that convert items from Hugging Face dataset (pyarrow)
- to torch tensors. Importantly, images are converted from PIL, which corresponds to
- a channel last representation (h w c) of uint8 type, to a torch image representation
- with channel first (c h w) of float32 type in range [0,1].
- """
- for key in items_dict:
- first_item = items_dict[key][0]
- if isinstance(first_item, PILImage.Image):
- to_tensor = transforms.ToTensor()
- items_dict[key] = [to_tensor(img) for img in items_dict[key]]
- elif first_item is None:
- pass
- else:
- items_dict[key] = [x if isinstance(x, str) else torch.tensor(x) for x in items_dict[key]]
- return items_dict
-
-
-def is_valid_version(version: str) -> bool:
- try:
- packaging.version.parse(version)
- return True
- except packaging.version.InvalidVersion:
- return False
-
-
-def check_version_compatibility(
- repo_id: str,
- version_to_check: str | packaging.version.Version,
- current_version: str | packaging.version.Version,
- enforce_breaking_major: bool = True,
-) -> None:
- v_check = (
- packaging.version.parse(version_to_check)
- if not isinstance(version_to_check, packaging.version.Version)
- else version_to_check
- )
- v_current = (
- packaging.version.parse(current_version)
- if not isinstance(current_version, packaging.version.Version)
- else current_version
- )
- if v_check.major < v_current.major and enforce_breaking_major:
- raise BackwardCompatibilityError(repo_id, v_check)
- elif v_check.minor < v_current.minor:
- logging.warning(V21_MESSAGE.format(repo_id=repo_id, version=v_check))
-
-
-def get_repo_versions(repo_id: str) -> list[packaging.version.Version]:
- """Returns available valid versions (branches and tags) on given repo."""
- api = HfApi()
- repo_refs = api.list_repo_refs(repo_id, repo_type="dataset")
- repo_refs = [b.name for b in repo_refs.branches + repo_refs.tags]
- repo_versions = []
- for ref in repo_refs:
- with contextlib.suppress(packaging.version.InvalidVersion):
- repo_versions.append(packaging.version.parse(ref))
-
- return repo_versions
-
-
-def get_safe_version(repo_id: str, version: str | packaging.version.Version) -> str:
- """
- Returns the version if available on repo or the latest compatible one.
- Otherwise, will throw a `CompatibilityError`.
- """
- target_version = (
- packaging.version.parse(version) if not isinstance(version, packaging.version.Version) else version
- )
- hub_versions = get_repo_versions(repo_id)
-
- if not hub_versions:
- raise RevisionNotFoundError(
- f"""Your dataset must be tagged with a codebase version.
- Assuming _version_ is the codebase_version value in the info.json, you can run this:
- ```python
- from huggingface_hub import HfApi
-
- hub_api = HfApi()
- hub_api.create_tag("{repo_id}", tag="_version_", repo_type="dataset")
- ```
- """
- )
-
- if target_version in hub_versions:
- return f"v{target_version}"
-
- compatibles = [
- v for v in hub_versions if v.major == target_version.major and v.minor <= target_version.minor
- ]
- if compatibles:
- return_version = max(compatibles)
- if return_version < target_version:
- logging.warning(f"Revision {version} for {repo_id} not found, using version v{return_version}")
- return f"v{return_version}"
-
- lower_major = [v for v in hub_versions if v.major < target_version.major]
- if lower_major:
- raise BackwardCompatibilityError(repo_id, max(lower_major))
-
- upper_versions = [v for v in hub_versions if v > target_version]
- assert len(upper_versions) > 0
- raise ForwardCompatibilityError(repo_id, min(upper_versions))
-
-
-def get_hf_features_from_features(features: dict) -> datasets.Features:
- hf_features = {}
- for key, ft in features.items():
- if ft["dtype"] == "video":
- continue
- elif ft["dtype"] == "image":
- hf_features[key] = datasets.Image()
- elif ft["shape"] == (1,):
- hf_features[key] = datasets.Value(dtype=ft["dtype"])
- elif len(ft["shape"]) == 1:
- hf_features[key] = datasets.Sequence(
- length=ft["shape"][0], feature=datasets.Value(dtype=ft["dtype"])
- )
- elif len(ft["shape"]) == 2:
- hf_features[key] = datasets.Array2D(shape=ft["shape"], dtype=ft["dtype"])
- elif len(ft["shape"]) == 3:
- hf_features[key] = datasets.Array3D(shape=ft["shape"], dtype=ft["dtype"])
- elif len(ft["shape"]) == 4:
- hf_features[key] = datasets.Array4D(shape=ft["shape"], dtype=ft["dtype"])
- elif len(ft["shape"]) == 5:
- hf_features[key] = datasets.Array5D(shape=ft["shape"], dtype=ft["dtype"])
- else:
- raise ValueError(f"Corresponding feature is not valid: {ft}")
-
- return datasets.Features(hf_features)
-
-
-def get_features_from_robot(robot: Robot, use_videos: bool = True) -> dict:
- camera_ft = {}
- if robot.cameras:
- camera_ft = {
- key: {"dtype": "video" if use_videos else "image", **ft}
- for key, ft in robot.camera_features.items()
- }
- return {**robot.motor_features, **camera_ft, **DEFAULT_FEATURES}
-
-
-def dataset_to_policy_features(features: dict[str, dict]) -> dict[str, PolicyFeature]:
- # TODO(aliberts): Implement "type" in dataset features and simplify this
- policy_features = {}
- for key, ft in features.items():
- shape = ft["shape"]
- if ft["dtype"] in ["image", "video"]:
- type = FeatureType.VISUAL
- if len(shape) != 3:
- raise ValueError(f"Number of dimensions of {key} != 3 (shape={shape})")
-
- names = ft["names"]
- # Backward compatibility for "channel" which is an error introduced in LeRobotDataset v2.0 for ported datasets.
- if names[2] in ["channel", "channels"]: # (h, w, c) -> (c, h, w)
- shape = (shape[2], shape[0], shape[1])
- elif key == "observation.environment_state":
- type = FeatureType.ENV
- elif key.startswith("observation"):
- type = FeatureType.STATE
- elif key == "action":
- type = FeatureType.ACTION
- else:
- continue
-
- policy_features[key] = PolicyFeature(
- type=type,
- shape=shape,
- )
-
- return policy_features
-
-
-def create_empty_dataset_info(
- codebase_version: str,
- fps: int,
- robot_type: str,
- features: dict,
- use_videos: bool,
-) -> dict:
- return {
- "codebase_version": codebase_version,
- "robot_type": robot_type,
- "total_episodes": 0,
- "total_frames": 0,
- "total_tasks": 0,
- "total_videos": 0,
- "total_chunks": 0,
- "chunks_size": DEFAULT_CHUNK_SIZE,
- "fps": fps,
- "splits": {},
- "data_path": DEFAULT_PARQUET_PATH,
- "video_path": DEFAULT_VIDEO_PATH if use_videos else None,
- "features": features,
- }
-
-
-def get_episode_data_index(
- episode_dicts: dict[dict], episodes: list[int] | None = None
-) -> dict[str, torch.Tensor]:
- episode_lengths = {ep_idx: ep_dict["length"] for ep_idx, ep_dict in episode_dicts.items()}
- if episodes is not None:
- episode_lengths = {ep_idx: episode_lengths[ep_idx] for ep_idx in episodes}
-
- cumulative_lengths = list(accumulate(episode_lengths.values()))
- return {
- "from": torch.LongTensor([0] + cumulative_lengths[:-1]),
- "to": torch.LongTensor(cumulative_lengths),
- }
-
-
-def check_timestamps_sync(
- timestamps: np.ndarray,
- episode_indices: np.ndarray,
- episode_data_index: dict[str, np.ndarray],
- fps: int,
- tolerance_s: float,
- raise_value_error: bool = True,
-) -> bool:
- """
- This check is to make sure that each timestamp is separated from the next by (1/fps) +/- tolerance
- to account for possible numerical error.
-
- Args:
- timestamps (np.ndarray): Array of timestamps in seconds.
- episode_indices (np.ndarray): Array indicating the episode index for each timestamp.
- episode_data_index (dict[str, np.ndarray]): A dictionary that includes 'to',
- which identifies indices for the end of each episode.
- fps (int): Frames per second. Used to check the expected difference between consecutive timestamps.
- tolerance_s (float): Allowed deviation from the expected (1/fps) difference.
- raise_value_error (bool): Whether to raise a ValueError if the check fails.
-
- Returns:
- bool: True if all checked timestamp differences lie within tolerance, False otherwise.
-
- Raises:
- ValueError: If the check fails and `raise_value_error` is True.
- """
- if timestamps.shape != episode_indices.shape:
- raise ValueError(
- "timestamps and episode_indices should have the same shape. "
- f"Found {timestamps.shape=} and {episode_indices.shape=}."
- )
-
- # Consecutive differences
- diffs = np.diff(timestamps)
- within_tolerance = np.abs(diffs - (1.0 / fps)) <= tolerance_s
-
- # Mask to ignore differences at the boundaries between episodes
- mask = np.ones(len(diffs), dtype=bool)
- ignored_diffs = episode_data_index["to"][:-1] - 1 # indices at the end of each episode
- mask[ignored_diffs] = False
- filtered_within_tolerance = within_tolerance[mask]
-
- # Check if all remaining diffs are within tolerance
- if not np.all(filtered_within_tolerance):
- # Track original indices before masking
- original_indices = np.arange(len(diffs))
- filtered_indices = original_indices[mask]
- outside_tolerance_filtered_indices = np.nonzero(~filtered_within_tolerance)[0]
- outside_tolerance_indices = filtered_indices[outside_tolerance_filtered_indices]
-
- outside_tolerances = []
- for idx in outside_tolerance_indices:
- entry = {
- "timestamps": [timestamps[idx], timestamps[idx + 1]],
- "diff": diffs[idx],
- "episode_index": episode_indices[idx].item()
- if hasattr(episode_indices[idx], "item")
- else episode_indices[idx],
- }
- outside_tolerances.append(entry)
-
- if raise_value_error:
- raise ValueError(
- f"""One or several timestamps unexpectedly violate the tolerance inside episode range.
- This might be due to synchronization issues during data collection.
- \n{pformat(outside_tolerances)}"""
- )
- return False
-
- return True
-
-
-def check_delta_timestamps(
- delta_timestamps: dict[str, list[float]], fps: int, tolerance_s: float, raise_value_error: bool = True
-) -> bool:
- """This will check if all the values in delta_timestamps are multiples of 1/fps +/- tolerance.
- This is to ensure that these delta_timestamps added to any timestamp from a dataset will themselves be
- actual timestamps from the dataset.
- """
- outside_tolerance = {}
- for key, delta_ts in delta_timestamps.items():
- within_tolerance = [abs(ts * fps - round(ts * fps)) / fps <= tolerance_s for ts in delta_ts]
- if not all(within_tolerance):
- outside_tolerance[key] = [
- ts for ts, is_within in zip(delta_ts, within_tolerance, strict=True) if not is_within
- ]
-
- if len(outside_tolerance) > 0:
- if raise_value_error:
- raise ValueError(
- f"""
- The following delta_timestamps are found outside of tolerance range.
- Please make sure they are multiples of 1/{fps} +/- tolerance and adjust
- their values accordingly.
- \n{pformat(outside_tolerance)}
- """
- )
- return False
-
- return True
-
-
-def get_delta_indices(delta_timestamps: dict[str, list[float]], fps: int) -> dict[str, list[int]]:
- delta_indices = {}
- for key, delta_ts in delta_timestamps.items():
- delta_indices[key] = [round(d * fps) for d in delta_ts]
-
- return delta_indices
-
-
-def cycle(iterable):
- """The equivalent of itertools.cycle, but safe for Pytorch dataloaders.
-
- See https://github.com/pytorch/pytorch/issues/23900 for information on why itertools.cycle is not safe.
- """
- iterator = iter(iterable)
- while True:
- try:
- yield next(iterator)
- except StopIteration:
- iterator = iter(iterable)
-
-
-def create_branch(repo_id, *, branch: str, repo_type: str | None = None) -> None:
- """Create a branch on a existing Hugging Face repo. Delete the branch if it already
- exists before creating it.
- """
- api = HfApi()
-
- branches = api.list_repo_refs(repo_id, repo_type=repo_type).branches
- refs = [branch.ref for branch in branches]
- ref = f"refs/heads/{branch}"
- if ref in refs:
- api.delete_branch(repo_id, repo_type=repo_type, branch=branch)
-
- api.create_branch(repo_id, repo_type=repo_type, branch=branch)
-
-
-def create_lerobot_dataset_card(
- tags: list | None = None,
- dataset_info: dict | None = None,
- **kwargs,
-) -> DatasetCard:
- """
- Keyword arguments will be used to replace values in ./lerobot/common/datasets/card_template.md.
- Note: If specified, license must be one of https://huggingface.co/docs/hub/repositories-licenses.
- """
- card_tags = ["LeRobot"]
-
- if tags:
- card_tags += tags
- if dataset_info:
- dataset_structure = "[meta/info.json](meta/info.json):\n"
- dataset_structure += f"```json\n{json.dumps(dataset_info, indent=4)}\n```\n"
- kwargs = {**kwargs, "dataset_structure": dataset_structure}
- card_data = DatasetCardData(
- license=kwargs.get("license"),
- tags=card_tags,
- task_categories=["robotics"],
- configs=[
- {
- "config_name": "default",
- "data_files": "data/*/*.parquet",
- }
- ],
- )
-
- card_template = (importlib.resources.files("lerobot.common.datasets") / "card_template.md").read_text()
-
- return DatasetCard.from_template(
- card_data=card_data,
- template_str=card_template,
- **kwargs,
- )
-
-
-class IterableNamespace(SimpleNamespace):
- """
- A namespace object that supports both dictionary-like iteration and dot notation access.
- Automatically converts nested dictionaries into IterableNamespaces.
-
- This class extends SimpleNamespace to provide:
- - Dictionary-style iteration over keys
- - Access to items via both dot notation (obj.key) and brackets (obj["key"])
- - Dictionary-like methods: items(), keys(), values()
- - Recursive conversion of nested dictionaries
-
- Args:
- dictionary: Optional dictionary to initialize the namespace
- **kwargs: Additional keyword arguments passed to SimpleNamespace
-
- Examples:
- >>> data = {"name": "Alice", "details": {"age": 25}}
- >>> ns = IterableNamespace(data)
- >>> ns.name
- 'Alice'
- >>> ns.details.age
- 25
- >>> list(ns.keys())
- ['name', 'details']
- >>> for key, value in ns.items():
- ... print(f"{key}: {value}")
- name: Alice
- details: IterableNamespace(age=25)
- """
-
- def __init__(self, dictionary: dict[str, Any] = None, **kwargs):
- super().__init__(**kwargs)
- if dictionary is not None:
- for key, value in dictionary.items():
- if isinstance(value, dict):
- setattr(self, key, IterableNamespace(value))
- else:
- setattr(self, key, value)
-
- def __iter__(self) -> Iterator[str]:
- return iter(vars(self))
-
- def __getitem__(self, key: str) -> Any:
- return vars(self)[key]
-
- def items(self):
- return vars(self).items()
-
- def values(self):
- return vars(self).values()
-
- def keys(self):
- return vars(self).keys()
-
-
-def validate_frame(frame: dict, features: dict):
- optional_features = {"timestamp"}
- expected_features = (set(features) - set(DEFAULT_FEATURES.keys())) | {"task"}
- actual_features = set(frame.keys())
-
- error_message = validate_features_presence(actual_features, expected_features, optional_features)
-
- if "task" in frame:
- error_message += validate_feature_string("task", frame["task"])
-
- common_features = actual_features & (expected_features | optional_features)
- for name in common_features - {"task"}:
- error_message += validate_feature_dtype_and_shape(name, features[name], frame[name])
-
- if error_message:
- raise ValueError(error_message)
-
-
-def validate_features_presence(
- actual_features: set[str], expected_features: set[str], optional_features: set[str]
-):
- error_message = ""
- missing_features = expected_features - actual_features
- extra_features = actual_features - (expected_features | optional_features)
-
- if missing_features or extra_features:
- error_message += "Feature mismatch in `frame` dictionary:\n"
- if missing_features:
- error_message += f"Missing features: {missing_features}\n"
- if extra_features:
- error_message += f"Extra features: {extra_features}\n"
-
- return error_message
-
-
-def validate_feature_dtype_and_shape(name: str, feature: dict, value: np.ndarray | PILImage.Image | str):
- expected_dtype = feature["dtype"]
- expected_shape = feature["shape"]
- if is_valid_numpy_dtype_string(expected_dtype):
- return validate_feature_numpy_array(name, expected_dtype, expected_shape, value)
- elif expected_dtype in ["image", "video"]:
- return validate_feature_image_or_video(name, expected_shape, value)
- elif expected_dtype == "string":
- return validate_feature_string(name, value)
- else:
- raise NotImplementedError(f"The feature dtype '{expected_dtype}' is not implemented yet.")
-
-
-def validate_feature_numpy_array(
- name: str, expected_dtype: str, expected_shape: list[int], value: np.ndarray
-):
- error_message = ""
- if isinstance(value, np.ndarray):
- actual_dtype = value.dtype
- actual_shape = value.shape
-
- if actual_dtype != np.dtype(expected_dtype):
- error_message += f"The feature '{name}' of dtype '{actual_dtype}' is not of the expected dtype '{expected_dtype}'.\n"
-
- if actual_shape != expected_shape:
- error_message += f"The feature '{name}' of shape '{actual_shape}' does not have the expected shape '{expected_shape}'.\n"
- else:
- error_message += f"The feature '{name}' is not a 'np.ndarray'. Expected type is '{expected_dtype}', but type '{type(value)}' provided instead.\n"
-
- return error_message
-
-
-def validate_feature_image_or_video(name: str, expected_shape: list[str], value: np.ndarray | PILImage.Image):
- # Note: The check of pixels range ([0,1] for float and [0,255] for uint8) is done by the image writer threads.
- error_message = ""
- if isinstance(value, np.ndarray):
- actual_shape = value.shape
- c, h, w = expected_shape
- if len(actual_shape) != 3 or (actual_shape != (c, h, w) and actual_shape != (h, w, c)):
- error_message += f"The feature '{name}' of shape '{actual_shape}' does not have the expected shape '{(c, h, w)}' or '{(h, w, c)}'.\n"
- elif isinstance(value, PILImage.Image):
- pass
- else:
- error_message += f"The feature '{name}' is expected to be of type 'PIL.Image' or 'np.ndarray' channel first or channel last, but type '{type(value)}' provided instead.\n"
-
- return error_message
-
-
-def validate_feature_string(name: str, value: str):
- if not isinstance(value, str):
- return f"The feature '{name}' is expected to be of type 'str', but type '{type(value)}' provided instead.\n"
- return ""
-
-
-def validate_episode_buffer(episode_buffer: dict, total_episodes: int, features: dict):
- if "size" not in episode_buffer:
- raise ValueError("size key not found in episode_buffer")
-
- if "task" not in episode_buffer:
- raise ValueError("task key not found in episode_buffer")
-
- if episode_buffer["episode_index"] != total_episodes:
- # TODO(aliberts): Add option to use existing episode_index
- raise NotImplementedError(
- "You might have manually provided the episode_buffer with an episode_index that doesn't "
- "match the total number of episodes already in the dataset. This is not supported for now."
- )
-
- if episode_buffer["size"] == 0:
- raise ValueError("You must add one or several frames with `add_frame` before calling `add_episode`.")
-
- buffer_keys = set(episode_buffer.keys()) - {"task", "size"}
-    if buffer_keys != set(features):
-        raise ValueError(
-            f"Features from `episode_buffer` don't match the ones in `features`.\n"
-            f"In episode_buffer not in features: {buffer_keys - set(features)}\n"
-            f"In features not in episode_buffer: {set(features) - buffer_keys}"
- )
diff --git a/lerobot/common/datasets/v2/batch_convert_dataset_v1_to_v2.py b/lerobot/common/datasets/v2/batch_convert_dataset_v1_to_v2.py
deleted file mode 100644
index 99ab2cbf..00000000
--- a/lerobot/common/datasets/v2/batch_convert_dataset_v1_to_v2.py
+++ /dev/null
@@ -1,884 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script is for internal use to convert all datasets under the 'lerobot' hub user account to v2.
-
-Note: Since the original Aloha datasets don't use shadow motors, you need to comment those out in
-lerobot/configs/robot/aloha.yaml before running this script.
-"""
-
-import traceback
-from pathlib import Path
-from textwrap import dedent
-
-from lerobot import available_datasets
-from lerobot.common.datasets.v2.convert_dataset_v1_to_v2 import convert_dataset
-from lerobot.common.robot_devices.robots.configs import AlohaRobotConfig
-
-LOCAL_DIR = Path("data/")
-
-# spellchecker:off
-ALOHA_MOBILE_INFO = {
- "robot_config": AlohaRobotConfig(),
- "license": "mit",
- "url": "https://mobile-aloha.github.io/",
- "paper": "https://arxiv.org/abs/2401.02117",
- "citation_bibtex": dedent(r"""
- @inproceedings{fu2024mobile,
- author = {Fu, Zipeng and Zhao, Tony Z. and Finn, Chelsea},
- title = {Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation},
- booktitle = {arXiv},
- year = {2024},
- }""").lstrip(),
-}
-ALOHA_STATIC_INFO = {
- "robot_config": AlohaRobotConfig(),
- "license": "mit",
- "url": "https://tonyzhaozh.github.io/aloha/",
- "paper": "https://arxiv.org/abs/2304.13705",
- "citation_bibtex": dedent(r"""
- @article{Zhao2023LearningFB,
- title={Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware},
- author={Tony Zhao and Vikash Kumar and Sergey Levine and Chelsea Finn},
- journal={RSS},
- year={2023},
- volume={abs/2304.13705},
- url={https://arxiv.org/abs/2304.13705}
- }""").lstrip(),
-}
-PUSHT_INFO = {
- "license": "mit",
- "url": "https://diffusion-policy.cs.columbia.edu/",
- "paper": "https://arxiv.org/abs/2303.04137v5",
- "citation_bibtex": dedent(r"""
- @article{chi2024diffusionpolicy,
- author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
- title ={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
- journal = {The International Journal of Robotics Research},
- year = {2024},
- }""").lstrip(),
-}
-XARM_INFO = {
- "license": "mit",
- "url": "https://www.nicklashansen.com/td-mpc/",
- "paper": "https://arxiv.org/abs/2203.04955",
- "citation_bibtex": dedent(r"""
- @inproceedings{Hansen2022tdmpc,
- title={Temporal Difference Learning for Model Predictive Control},
- author={Nicklas Hansen and Xiaolong Wang and Hao Su},
- booktitle={ICML},
- year={2022}
- }
- """),
-}
-UNITREEH_INFO = {
- "license": "apache-2.0",
-}
-
-DATASETS = {
- "aloha_mobile_cabinet": {
- "single_task": "Open the top cabinet, store the pot inside it then close the cabinet.",
- **ALOHA_MOBILE_INFO,
- },
- "aloha_mobile_chair": {
- "single_task": "Push the chairs in front of the desk to place them against it.",
- **ALOHA_MOBILE_INFO,
- },
- "aloha_mobile_elevator": {
- "single_task": "Take the elevator to the 1st floor.",
- **ALOHA_MOBILE_INFO,
- },
- "aloha_mobile_shrimp": {
- "single_task": "Sauté the raw shrimp on both sides, then serve it in the bowl.",
- **ALOHA_MOBILE_INFO,
- },
- "aloha_mobile_wash_pan": {
- "single_task": "Pick up the pan, rinse it in the sink and then place it in the drying rack.",
- **ALOHA_MOBILE_INFO,
- },
- "aloha_mobile_wipe_wine": {
- "single_task": "Pick up the wet cloth on the faucet and use it to clean the spilled wine on the table and underneath the glass.",
- **ALOHA_MOBILE_INFO,
- },
- "aloha_static_battery": {
- "single_task": "Place the battery into the slot of the remote controller.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_candy": {"single_task": "Pick up the candy and unwrap it.", **ALOHA_STATIC_INFO},
- "aloha_static_coffee": {
- "single_task": "Place the coffee capsule inside the capsule container, then place the cup onto the center of the cup tray, then push the 'Hot Water' and 'Travel Mug' buttons.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_coffee_new": {
- "single_task": "Place the coffee capsule inside the capsule container, then place the cup onto the center of the cup tray.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_cups_open": {
- "single_task": "Pick up the plastic cup and open its lid.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_fork_pick_up": {
- "single_task": "Pick up the fork and place it on the plate.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_pingpong_test": {
- "single_task": "Transfer one of the two balls in the right glass into the left glass, then transfer it back to the right glass.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_pro_pencil": {
- "single_task": "Pick up the pencil with the right arm, hand it over to the left arm then place it back onto the table.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_screw_driver": {
- "single_task": "Pick up the screwdriver with the right arm, hand it over to the left arm then place it into the cup.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_tape": {
- "single_task": "Cut a small piece of tape from the tape dispenser then place it on the cardboard box's edge.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_thread_velcro": {
- "single_task": "Pick up the velcro cable tie with the left arm, then insert the end of the velcro tie into the other end's loop with the right arm.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_towel": {
- "single_task": "Pick up a piece of paper towel and place it on the spilled liquid.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_vinh_cup": {
- "single_task": "Pick up the plastic cup with the right arm, then pop its lid open with the left arm.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_vinh_cup_left": {
- "single_task": "Pick up the plastic cup with the left arm, then pop its lid open with the right arm.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_static_ziploc_slide": {"single_task": "Slide open the ziploc bag.", **ALOHA_STATIC_INFO},
- "aloha_sim_insertion_scripted": {"single_task": "Insert the peg into the socket.", **ALOHA_STATIC_INFO},
- "aloha_sim_insertion_scripted_image": {
- "single_task": "Insert the peg into the socket.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_sim_insertion_human": {"single_task": "Insert the peg into the socket.", **ALOHA_STATIC_INFO},
- "aloha_sim_insertion_human_image": {
- "single_task": "Insert the peg into the socket.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_sim_transfer_cube_scripted": {
- "single_task": "Pick up the cube with the right arm and transfer it to the left arm.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_sim_transfer_cube_scripted_image": {
- "single_task": "Pick up the cube with the right arm and transfer it to the left arm.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_sim_transfer_cube_human": {
- "single_task": "Pick up the cube with the right arm and transfer it to the left arm.",
- **ALOHA_STATIC_INFO,
- },
- "aloha_sim_transfer_cube_human_image": {
- "single_task": "Pick up the cube with the right arm and transfer it to the left arm.",
- **ALOHA_STATIC_INFO,
- },
- "pusht": {"single_task": "Push the T-shaped block onto the T-shaped target.", **PUSHT_INFO},
- "pusht_image": {"single_task": "Push the T-shaped block onto the T-shaped target.", **PUSHT_INFO},
- "unitreeh1_fold_clothes": {"single_task": "Fold the sweatshirt.", **UNITREEH_INFO},
- "unitreeh1_rearrange_objects": {"single_task": "Put the object into the bin.", **UNITREEH_INFO},
- "unitreeh1_two_robot_greeting": {
- "single_task": "Greet the other robot with a high five.",
- **UNITREEH_INFO,
- },
- "unitreeh1_warehouse": {
- "single_task": "Grab the spray paint on the shelf and place it in the bin on top of the robot dog.",
- **UNITREEH_INFO,
- },
- "xarm_lift_medium": {"single_task": "Pick up the cube and lift it.", **XARM_INFO},
- "xarm_lift_medium_image": {"single_task": "Pick up the cube and lift it.", **XARM_INFO},
- "xarm_lift_medium_replay": {"single_task": "Pick up the cube and lift it.", **XARM_INFO},
- "xarm_lift_medium_replay_image": {"single_task": "Pick up the cube and lift it.", **XARM_INFO},
- "xarm_push_medium": {"single_task": "Push the cube onto the target.", **XARM_INFO},
- "xarm_push_medium_image": {"single_task": "Push the cube onto the target.", **XARM_INFO},
- "xarm_push_medium_replay": {"single_task": "Push the cube onto the target.", **XARM_INFO},
- "xarm_push_medium_replay_image": {"single_task": "Push the cube onto the target.", **XARM_INFO},
- "umi_cup_in_the_wild": {
- "single_task": "Put the cup on the plate.",
- "license": "apache-2.0",
- },
- "asu_table_top": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "paper": "https://link.springer.com/article/10.1007/s10514-023-10129-1",
- "citation_bibtex": dedent(r"""
- @inproceedings{zhou2023modularity,
- title={Modularity through Attention: Efficient Training and Transfer of Language-Conditioned Policies for Robot Manipulation},
- author={Zhou, Yifan and Sonawani, Shubham and Phielipp, Mariano and Stepputtis, Simon and Amor, Heni},
- booktitle={Conference on Robot Learning},
- pages={1684--1695},
- year={2023},
- organization={PMLR}
- }
- @article{zhou2023learning,
- title={Learning modular language-conditioned robot policies through attention},
- author={Zhou, Yifan and Sonawani, Shubham and Phielipp, Mariano and Ben Amor, Heni and Stepputtis, Simon},
- journal={Autonomous Robots},
- pages={1--21},
- year={2023},
- publisher={Springer}
- }""").lstrip(),
- },
- "austin_buds_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://ut-austin-rpl.github.io/BUDS-website/",
- "paper": "https://arxiv.org/abs/2109.13841",
- "citation_bibtex": dedent(r"""
- @article{zhu2022bottom,
- title={Bottom-Up Skill Discovery From Unsegmented Demonstrations for Long-Horizon Robot Manipulation},
- author={Zhu, Yifeng and Stone, Peter and Zhu, Yuke},
- journal={IEEE Robotics and Automation Letters},
- volume={7},
- number={2},
- pages={4126--4133},
- year={2022},
- publisher={IEEE}
- }""").lstrip(),
- },
- "austin_sailor_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://ut-austin-rpl.github.io/sailor/",
- "paper": "https://arxiv.org/abs/2210.11435",
- "citation_bibtex": dedent(r"""
- @inproceedings{nasiriany2022sailor,
- title={Learning and Retrieval from Prior Data for Skill-based Imitation Learning},
- author={Soroush Nasiriany and Tian Gao and Ajay Mandlekar and Yuke Zhu},
- booktitle={Conference on Robot Learning (CoRL)},
- year={2022}
- }""").lstrip(),
- },
- "austin_sirius_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://ut-austin-rpl.github.io/sirius/",
- "paper": "https://arxiv.org/abs/2211.08416",
- "citation_bibtex": dedent(r"""
- @inproceedings{liu2022robot,
- title = {Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment},
- author = {Huihan Liu and Soroush Nasiriany and Lance Zhang and Zhiyao Bao and Yuke Zhu},
- booktitle = {Robotics: Science and Systems (RSS)},
- year = {2023}
- }""").lstrip(),
- },
- "berkeley_autolab_ur5": {
- "tasks_col": "language_instruction",
- "license": "cc-by-4.0",
- "url": "https://sites.google.com/view/berkeley-ur5/home",
- "citation_bibtex": dedent(r"""
- @misc{BerkeleyUR5Website,
- title = {Berkeley {UR5} Demonstration Dataset},
- author = {Lawrence Yunliang Chen and Simeon Adebola and Ken Goldberg},
- howpublished = {https://sites.google.com/view/berkeley-ur5/home},
- }""").lstrip(),
- },
- "berkeley_cable_routing": {
- "tasks_col": "language_instruction",
- "license": "cc-by-4.0",
- "url": "https://sites.google.com/view/cablerouting/home",
- "paper": "https://arxiv.org/abs/2307.08927",
- "citation_bibtex": dedent(r"""
- @article{luo2023multistage,
- author = {Jianlan Luo and Charles Xu and Xinyang Geng and Gilbert Feng and Kuan Fang and Liam Tan and Stefan Schaal and Sergey Levine},
- title = {Multi-Stage Cable Routing through Hierarchical Imitation Learning},
- journal = {arXiv pre-print},
- year = {2023},
- url = {https://arxiv.org/abs/2307.08927},
- }""").lstrip(),
- },
- "berkeley_fanuc_manipulation": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://sites.google.com/berkeley.edu/fanuc-manipulation",
- "citation_bibtex": dedent(r"""
- @article{fanuc_manipulation2023,
- title={Fanuc Manipulation: A Dataset for Learning-based Manipulation with FANUC Mate 200iD Robot},
- author={Zhu, Xinghao and Tian, Ran and Xu, Chenfeng and Ding, Mingyu and Zhan, Wei and Tomizuka, Masayoshi},
- year={2023},
- }""").lstrip(),
- },
- "berkeley_gnm_cory_hall": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "paper": "https://arxiv.org/abs/1709.10489",
- "citation_bibtex": dedent(r"""
- @inproceedings{kahn2018self,
- title={Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation},
- author={Kahn, Gregory and Villaflor, Adam and Ding, Bosen and Abbeel, Pieter and Levine, Sergey},
- booktitle={2018 IEEE international conference on robotics and automation (ICRA)},
- pages={5129--5136},
- year={2018},
- organization={IEEE}
- }""").lstrip(),
- },
- "berkeley_gnm_recon": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://sites.google.com/view/recon-robot",
- "paper": "https://arxiv.org/abs/2104.05859",
- "citation_bibtex": dedent(r"""
- @inproceedings{shah2021rapid,
- title={Rapid Exploration for Open-World Navigation with Latent Goal Models},
- author={Dhruv Shah and Benjamin Eysenbach and Nicholas Rhinehart and Sergey Levine},
- booktitle={5th Annual Conference on Robot Learning },
- year={2021},
- url={https://openreview.net/forum?id=d_SWJhyKfVw}
- }""").lstrip(),
- },
- "berkeley_gnm_sac_son": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://sites.google.com/view/SACSoN-review",
- "paper": "https://arxiv.org/abs/2306.01874",
- "citation_bibtex": dedent(r"""
- @article{hirose2023sacson,
- title={SACSoN: Scalable Autonomous Data Collection for Social Navigation},
- author={Hirose, Noriaki and Shah, Dhruv and Sridhar, Ajay and Levine, Sergey},
- journal={arXiv preprint arXiv:2306.01874},
- year={2023}
- }""").lstrip(),
- },
- "berkeley_mvp": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "paper": "https://arxiv.org/abs/2203.06173",
- "citation_bibtex": dedent(r"""
- @InProceedings{Radosavovic2022,
- title = {Real-World Robot Learning with Masked Visual Pre-training},
- author = {Ilija Radosavovic and Tete Xiao and Stephen James and Pieter Abbeel and Jitendra Malik and Trevor Darrell},
- booktitle = {CoRL},
- year = {2022}
- }""").lstrip(),
- },
- "berkeley_rpt": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "paper": "https://arxiv.org/abs/2306.10007",
- "citation_bibtex": dedent(r"""
- @article{Radosavovic2023,
- title={Robot Learning with Sensorimotor Pre-training},
- author={Ilija Radosavovic and Baifeng Shi and Letian Fu and Ken Goldberg and Trevor Darrell and Jitendra Malik},
- year={2023},
- journal={arXiv:2306.10007}
- }""").lstrip(),
- },
- "cmu_franka_exploration_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://human-world-model.github.io/",
- "paper": "https://arxiv.org/abs/2308.10901",
- "citation_bibtex": dedent(r"""
- @inproceedings{mendonca2023structured,
- title={Structured World Models from Human Videos},
- author={Mendonca, Russell and Bahl, Shikhar and Pathak, Deepak},
- journal={RSS},
- year={2023}
- }""").lstrip(),
- },
- "cmu_play_fusion": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://play-fusion.github.io/",
- "paper": "https://arxiv.org/abs/2312.04549",
- "citation_bibtex": dedent(r"""
- @inproceedings{chen2023playfusion,
- title={PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play},
- author={Chen, Lili and Bahl, Shikhar and Pathak, Deepak},
- booktitle={CoRL},
- year={2023}
- }""").lstrip(),
- },
- "cmu_stretch": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://robo-affordances.github.io/",
- "paper": "https://arxiv.org/abs/2304.08488",
- "citation_bibtex": dedent(r"""
- @inproceedings{bahl2023affordances,
- title={Affordances from Human Videos as a Versatile Representation for Robotics},
- author={Bahl, Shikhar and Mendonca, Russell and Chen, Lili and Jain, Unnat and Pathak, Deepak},
- booktitle={CVPR},
- year={2023}
- }
- @article{mendonca2023structured,
- title={Structured World Models from Human Videos},
- author={Mendonca, Russell and Bahl, Shikhar and Pathak, Deepak},
- journal={CoRL},
- year={2023}
- }""").lstrip(),
- },
- "columbia_cairlab_pusht_real": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://diffusion-policy.cs.columbia.edu/",
- "paper": "https://arxiv.org/abs/2303.04137v5",
- "citation_bibtex": dedent(r"""
- @inproceedings{chi2023diffusionpolicy,
- title={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
- author={Chi, Cheng and Feng, Siyuan and Du, Yilun and Xu, Zhenjia and Cousineau, Eric and Burchfiel, Benjamin and Song, Shuran},
- booktitle={Proceedings of Robotics: Science and Systems (RSS)},
- year={2023}
- }""").lstrip(),
- },
- "conq_hose_manipulation": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://sites.google.com/view/conq-hose-manipulation-dataset/home",
- "citation_bibtex": dedent(r"""
- @misc{ConqHoseManipData,
- author={Peter Mitrano and Dmitry Berenson},
- title={Conq Hose Manipulation Dataset, v1.15.0},
- year={2024},
- howpublished={https://sites.google.com/view/conq-hose-manipulation-dataset}
- }""").lstrip(),
- },
- "dlr_edan_shared_control": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "paper": "https://ieeexplore.ieee.org/document/9341156",
- "citation_bibtex": dedent(r"""
- @inproceedings{vogel_edan_2020,
- title = {EDAN - an EMG-Controlled Daily Assistant to Help People with Physical Disabilities},
- language = {en},
- booktitle = {2020 {IEEE}/{RSJ} {International} {Conference} on {Intelligent} {Robots} and {Systems} ({IROS})},
- author = {Vogel, Jörn and Hagengruber, Annette and Iskandar, Maged and Quere, Gabriel and Leipscher, Ulrike and Bustamante, Samuel and Dietrich, Alexander and Hoeppner, Hannes and Leidner, Daniel and Albu-Schäffer, Alin},
- year = {2020}
- }
- @inproceedings{quere_shared_2020,
- address = {Paris, France},
- title = {Shared {Control} {Templates} for {Assistive} {Robotics}},
- language = {en},
- booktitle = {2020 {IEEE} {International} {Conference} on {Robotics} and {Automation} ({ICRA})},
- author = {Quere, Gabriel and Hagengruber, Annette and Iskandar, Maged and Bustamante, Samuel and Leidner, Daniel and Stulp, Freek and Vogel, Joern},
- year = {2020},
- pages = {7},
- }""").lstrip(),
- },
- "dlr_sara_grid_clamp": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "paper": "https://www.researchsquare.com/article/rs-3289569/v1",
- "citation_bibtex": dedent(r"""
- @article{padalkar2023guided,
- title={A guided reinforcement learning approach using shared control templates for learning manipulation skills in the real world},
- author={Padalkar, Abhishek and Quere, Gabriel and Raffin, Antonin and Silv{\'e}rio, Jo{\~a}o and Stulp, Freek},
- journal={Research square preprint rs-3289569/v1},
- year={2023}
- }""").lstrip(),
- },
- "dlr_sara_pour": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "paper": "https://elib.dlr.de/193739/1/padalkar2023rlsct.pdf",
- "citation_bibtex": dedent(r"""
- @inproceedings{padalkar2023guiding,
- title={Guiding Reinforcement Learning with Shared Control Templates},
- author={Padalkar, Abhishek and Quere, Gabriel and Steinmetz, Franz and Raffin, Antonin and Nieuwenhuisen, Matthias and Silv{\'e}rio, Jo{\~a}o and Stulp, Freek},
- booktitle={40th IEEE International Conference on Robotics and Automation, ICRA 2023},
- year={2023},
- organization={IEEE}
- }""").lstrip(),
- },
- "droid_100": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://droid-dataset.github.io/",
- "paper": "https://arxiv.org/abs/2403.12945",
- "citation_bibtex": dedent(r"""
- @article{khazatsky2024droid,
- title = {DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset},
- author = {Alexander Khazatsky and Karl Pertsch and Suraj Nair and Ashwin Balakrishna and Sudeep Dasari and Siddharth Karamcheti and Soroush Nasiriany and Mohan Kumar Srirama and Lawrence Yunliang Chen and Kirsty Ellis and Peter David Fagan and Joey Hejna and Masha Itkina and Marion Lepert and Yecheng Jason Ma and Patrick Tree Miller and Jimmy Wu and Suneel Belkhale and Shivin Dass and Huy Ha and Arhan Jain and Abraham Lee and Youngwoon Lee and Marius Memmel and Sungjae Park and Ilija Radosavovic and Kaiyuan Wang and Albert Zhan and Kevin Black and Cheng Chi and Kyle Beltran Hatch and Shan Lin and Jingpei Lu and Jean Mercat and Abdul Rehman and Pannag R Sanketi and Archit Sharma and Cody Simpson and Quan Vuong and Homer Rich Walke and Blake Wulfe and Ted Xiao and Jonathan Heewon Yang and Arefeh Yavary and Tony Z. Zhao and Christopher Agia and Rohan Baijal and Mateo Guaman Castro and Daphne Chen and Qiuyu Chen and Trinity Chung and Jaimyn Drake and Ethan Paul Foster and Jensen Gao and David Antonio Herrera and Minho Heo and Kyle Hsu and Jiaheng Hu and Donovon Jackson and Charlotte Le and Yunshuang Li and Kevin Lin and Roy Lin and Zehan Ma and Abhiram Maddukuri and Suvir Mirchandani and Daniel Morton and Tony Nguyen and Abigail O'Neill and Rosario Scalise and Derick Seale and Victor Son and Stephen Tian and Emi Tran and Andrew E. Wang and Yilin Wu and Annie Xie and Jingyun Yang and Patrick Yin and Yunchu Zhang and Osbert Bastani and Glen Berseth and Jeannette Bohg and Ken Goldberg and Abhinav Gupta and Abhishek Gupta and Dinesh Jayaraman and Joseph J Lim and Jitendra Malik and Roberto Martín-Martín and Subramanian Ramamoorthy and Dorsa Sadigh and Shuran Song and Jiajun Wu and Michael C. Yip and Yuke Zhu and Thomas Kollar and Sergey Levine and Chelsea Finn},
- year = {2024},
- }""").lstrip(),
- },
- "fmb": {
- "tasks_col": "language_instruction",
- "license": "cc-by-4.0",
- "url": "https://functional-manipulation-benchmark.github.io/",
- "paper": "https://arxiv.org/abs/2401.08553",
- "citation_bibtex": dedent(r"""
- @article{luo2024fmb,
- title={FMB: a Functional Manipulation Benchmark for Generalizable Robotic Learning},
- author={Luo, Jianlan and Xu, Charles and Liu, Fangchen and Tan, Liam and Lin, Zipeng and Wu, Jeffrey and Abbeel, Pieter and Levine, Sergey},
- journal={arXiv preprint arXiv:2401.08553},
- year={2024}
- }""").lstrip(),
- },
- "iamlab_cmu_pickup_insert": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://openreview.net/forum?id=WuBv9-IGDUA",
- "paper": "https://arxiv.org/abs/2401.14502",
- "citation_bibtex": dedent(r"""
- @inproceedings{saxena2023multiresolution,
- title={Multi-Resolution Sensing for Real-Time Control with Vision-Language Models},
- author={Saumya Saxena and Mohit Sharma and Oliver Kroemer},
- booktitle={7th Annual Conference on Robot Learning},
- year={2023},
- url={https://openreview.net/forum?id=WuBv9-IGDUA}
- }""").lstrip(),
- },
- "imperialcollege_sawyer_wrist_cam": {
- "tasks_col": "language_instruction",
- "license": "mit",
- },
- "jaco_play": {
- "tasks_col": "language_instruction",
- "license": "cc-by-4.0",
- "url": "https://github.com/clvrai/clvr_jaco_play_dataset",
- "citation_bibtex": dedent(r"""
- @software{dass2023jacoplay,
- author = {Dass, Shivin and Yapeter, Jullian and Zhang, Jesse and Zhang, Jiahui
- and Pertsch, Karl and Nikolaidis, Stefanos and Lim, Joseph J.},
- title = {CLVR Jaco Play Dataset},
- url = {https://github.com/clvrai/clvr_jaco_play_dataset},
- version = {1.0.0},
- year = {2023}
- }""").lstrip(),
- },
- "kaist_nonprehensile": {
- "tasks_col": "language_instruction",
- "license": "cc-by-4.0",
- "url": "https://github.com/JaeHyung-Kim/rlds_dataset_builder",
- "citation_bibtex": dedent(r"""
- @article{kimpre,
- title={Pre-and post-contact policy decomposition for non-prehensile manipulation with zero-shot sim-to-real transfer},
- author={Kim, Minchan and Han, Junhyek and Kim, Jaehyung and Kim, Beomjoon},
- booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
- year={2023},
- organization={IEEE}
- }""").lstrip(),
- },
- "nyu_door_opening_surprising_effectiveness": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://jyopari.github.io/VINN/",
- "paper": "https://arxiv.org/abs/2112.01511",
- "citation_bibtex": dedent(r"""
- @misc{pari2021surprising,
- title={The Surprising Effectiveness of Representation Learning for Visual Imitation},
- author={Jyothish Pari and Nur Muhammad Shafiullah and Sridhar Pandian Arunachalam and Lerrel Pinto},
- year={2021},
- eprint={2112.01511},
- archivePrefix={arXiv},
- primaryClass={cs.RO}
- }""").lstrip(),
- },
- "nyu_franka_play_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://play-to-policy.github.io/",
- "paper": "https://arxiv.org/abs/2210.10047",
- "citation_bibtex": dedent(r"""
- @article{cui2022play,
- title = {From Play to Policy: Conditional Behavior Generation from Uncurated Robot Data},
- author = {Cui, Zichen Jeff and Wang, Yibin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
- journal = {arXiv preprint arXiv:2210.10047},
- year = {2022}
- }""").lstrip(),
- },
- "nyu_rot_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://rot-robot.github.io/",
- "paper": "https://arxiv.org/abs/2206.15469",
- "citation_bibtex": dedent(r"""
- @inproceedings{haldar2023watch,
- title={Watch and match: Supercharging imitation with regularized optimal transport},
- author={Haldar, Siddhant and Mathur, Vaibhav and Yarats, Denis and Pinto, Lerrel},
- booktitle={Conference on Robot Learning},
- pages={32--43},
- year={2023},
- organization={PMLR}
- }""").lstrip(),
- },
- "roboturk": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://roboturk.stanford.edu/dataset_real.html",
- "paper": "PAPER",
- "citation_bibtex": dedent(r"""
- @inproceedings{mandlekar2019scaling,
- title={Scaling robot supervision to hundreds of hours with roboturk: Robotic manipulation dataset through human reasoning and dexterity},
- author={Mandlekar, Ajay and Booher, Jonathan and Spero, Max and Tung, Albert and Gupta, Anchit and Zhu, Yuke and Garg, Animesh and Savarese, Silvio and Fei-Fei, Li},
- booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
- pages={1048--1055},
- year={2019},
- organization={IEEE}
- }""").lstrip(),
- },
- "stanford_hydra_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://sites.google.com/view/hydra-il-2023",
- "paper": "https://arxiv.org/abs/2306.17237",
- "citation_bibtex": dedent(r"""
- @article{belkhale2023hydra,
- title={HYDRA: Hybrid Robot Actions for Imitation Learning},
- author={Belkhale, Suneel and Cui, Yuchen and Sadigh, Dorsa},
- journal={arxiv},
- year={2023}
- }""").lstrip(),
- },
- "stanford_kuka_multimodal_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://sites.google.com/view/visionandtouch",
- "paper": "https://arxiv.org/abs/1810.10191",
- "citation_bibtex": dedent(r"""
- @inproceedings{lee2019icra,
- title={Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks},
- author={Lee, Michelle A and Zhu, Yuke and Srinivasan, Krishnan and Shah, Parth and Savarese, Silvio and Fei-Fei, Li and Garg, Animesh and Bohg, Jeannette},
- booktitle={2019 IEEE International Conference on Robotics and Automation (ICRA)},
- year={2019},
- url={https://arxiv.org/abs/1810.10191}
- }""").lstrip(),
- },
- "stanford_robocook": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://hshi74.github.io/robocook/",
- "paper": "https://arxiv.org/abs/2306.14447",
- "citation_bibtex": dedent(r"""
- @article{shi2023robocook,
- title={RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools},
- author={Shi, Haochen and Xu, Huazhe and Clarke, Samuel and Li, Yunzhu and Wu, Jiajun},
- journal={arXiv preprint arXiv:2306.14447},
- year={2023}
- }""").lstrip(),
- },
- "taco_play": {
- "tasks_col": "language_instruction",
- "license": "cc-by-4.0",
- "url": "https://www.kaggle.com/datasets/oiermees/taco-robot",
- "paper": "https://arxiv.org/abs/2209.08959, https://arxiv.org/abs/2210.01911",
- "citation_bibtex": dedent(r"""
- @inproceedings{rosete2022tacorl,
- author = {Erick Rosete-Beas and Oier Mees and Gabriel Kalweit and Joschka Boedecker and Wolfram Burgard},
- title = {Latent Plans for Task Agnostic Offline Reinforcement Learning},
- journal = {Proceedings of the 6th Conference on Robot Learning (CoRL)},
- year = {2022}
- }
- @inproceedings{mees23hulc2,
- title={Grounding Language with Visual Affordances over Unstructured Data},
- author={Oier Mees and Jessica Borja-Diaz and Wolfram Burgard},
- booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
- year={2023},
- address = {London, UK}
- }""").lstrip(),
- },
- "tokyo_u_lsmo": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "URL",
- "paper": "https://arxiv.org/abs/2107.05842",
- "citation_bibtex": dedent(r"""
- @Article{Osa22,
- author = {Takayuki Osa},
- journal = {The International Journal of Robotics Research},
- title = {Motion Planning by Learning the Solution Manifold in Trajectory Optimization},
- year = {2022},
- number = {3},
- pages = {291--311},
- volume = {41},
- }""").lstrip(),
- },
- "toto": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://toto-benchmark.org/",
- "paper": "https://arxiv.org/abs/2306.00942",
- "citation_bibtex": dedent(r"""
- @inproceedings{zhou2023train,
- author={Zhou, Gaoyue and Dean, Victoria and Srirama, Mohan Kumar and Rajeswaran, Aravind and Pari, Jyothish and Hatch, Kyle and Jain, Aryan and Yu, Tianhe and Abbeel, Pieter and Pinto, Lerrel and Finn, Chelsea and Gupta, Abhinav},
- booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
- title={Train Offline, Test Online: A Real Robot Learning Benchmark},
- year={2023},
- }""").lstrip(),
- },
- "ucsd_kitchen_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "citation_bibtex": dedent(r"""
- @ARTICLE{ucsd_kitchens,
- author = {Ge Yan, Kris Wu, and Xiaolong Wang},
- title = {{ucsd kitchens Dataset}},
- year = {2023},
- month = {August}
- }""").lstrip(),
- },
- "ucsd_pick_and_place_dataset": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://owmcorl.github.io/#",
- "paper": "https://arxiv.org/abs/2310.16029",
- "citation_bibtex": dedent(r"""
- @preprint{Feng2023Finetuning,
- title={Finetuning Offline World Models in the Real World},
- author={Yunhai Feng, Nicklas Hansen, Ziyan Xiong, Chandramouli Rajagopalan, Xiaolong Wang},
- year={2023}
- }""").lstrip(),
- },
- "uiuc_d3field": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://robopil.github.io/d3fields/",
- "paper": "https://arxiv.org/abs/2309.16118",
- "citation_bibtex": dedent(r"""
- @article{wang2023d3field,
- title={D^3Field: Dynamic 3D Descriptor Fields for Generalizable Robotic Manipulation},
- author={Wang, Yixuan and Li, Zhuoran and Zhang, Mingtong and Driggs-Campbell, Katherine and Wu, Jiajun and Fei-Fei, Li and Li, Yunzhu},
- journal={arXiv preprint arXiv:},
- year={2023},
- }""").lstrip(),
- },
- "usc_cloth_sim": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://uscresl.github.io/dmfd/",
- "paper": "https://arxiv.org/abs/2207.10148",
- "citation_bibtex": dedent(r"""
- @article{salhotra2022dmfd,
- author={Salhotra, Gautam and Liu, I-Chun Arthur and Dominguez-Kuhne, Marcus and Sukhatme, Gaurav S.},
- journal={IEEE Robotics and Automation Letters},
- title={Learning Deformable Object Manipulation From Expert Demonstrations},
- year={2022},
- volume={7},
- number={4},
- pages={8775-8782},
- doi={10.1109/LRA.2022.3187843}
- }""").lstrip(),
- },
- "utaustin_mutex": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://ut-austin-rpl.github.io/MUTEX/",
- "paper": "https://arxiv.org/abs/2309.14320",
- "citation_bibtex": dedent(r"""
- @inproceedings{shah2023mutex,
- title={{MUTEX}: Learning Unified Policies from Multimodal Task Specifications},
- author={Rutav Shah and Roberto Mart{\'\i}n-Mart{\'\i}n and Yuke Zhu},
- booktitle={7th Annual Conference on Robot Learning},
- year={2023},
- url={https://openreview.net/forum?id=PwqiqaaEzJ}
- }""").lstrip(),
- },
- "utokyo_pr2_opening_fridge": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "citation_bibtex": dedent(r"""
- @misc{oh2023pr2utokyodatasets,
- author={Jihoon Oh and Naoaki Kanazawa and Kento Kawaharazuka},
- title={X-Embodiment U-Tokyo PR2 Datasets},
- year={2023},
- url={https://github.com/ojh6404/rlds_dataset_builder},
- }""").lstrip(),
- },
- "utokyo_pr2_tabletop_manipulation": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "citation_bibtex": dedent(r"""
- @misc{oh2023pr2utokyodatasets,
- author={Jihoon Oh and Naoaki Kanazawa and Kento Kawaharazuka},
- title={X-Embodiment U-Tokyo PR2 Datasets},
- year={2023},
- url={https://github.com/ojh6404/rlds_dataset_builder},
- }""").lstrip(),
- },
- "utokyo_saytap": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://saytap.github.io/",
- "paper": "https://arxiv.org/abs/2306.07580",
- "citation_bibtex": dedent(r"""
- @article{saytap2023,
- author = {Yujin Tang and Wenhao Yu and Jie Tan and Heiga Zen and Aleksandra Faust and
- Tatsuya Harada},
- title = {SayTap: Language to Quadrupedal Locomotion},
- eprint = {arXiv:2306.07580},
- url = {https://saytap.github.io},
- note = {https://saytap.github.io},
- year = {2023}
- }""").lstrip(),
- },
- "utokyo_xarm_bimanual": {
- "tasks_col": "language_instruction",
- "license": "cc-by-4.0",
- "citation_bibtex": dedent(r"""
- @misc{matsushima2023weblab,
- title={Weblab xArm Dataset},
- author={Tatsuya Matsushima and Hiroki Furuta and Yusuke Iwasawa and Yutaka Matsuo},
- year={2023},
- }""").lstrip(),
- },
- "utokyo_xarm_pick_and_place": {
- "tasks_col": "language_instruction",
- "license": "cc-by-4.0",
- "citation_bibtex": dedent(r"""
- @misc{matsushima2023weblab,
- title={Weblab xArm Dataset},
- author={Tatsuya Matsushima and Hiroki Furuta and Yusuke Iwasawa and Yutaka Matsuo},
- year={2023},
- }""").lstrip(),
- },
- "viola": {
- "tasks_col": "language_instruction",
- "license": "mit",
- "url": "https://ut-austin-rpl.github.io/VIOLA/",
- "paper": "https://arxiv.org/abs/2210.11339",
- "citation_bibtex": dedent(r"""
- @article{zhu2022viola,
- title={VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors},
- author={Zhu, Yifeng and Joshi, Abhishek and Stone, Peter and Zhu, Yuke},
- journal={6th Annual Conference on Robot Learning (CoRL)},
- year={2022}
- }""").lstrip(),
- },
-}
-# spellchecker:on
-
-
-def batch_convert():
- status = {}
- logfile = LOCAL_DIR / "conversion_log.txt"
- assert set(DATASETS) == {id_.split("/")[1] for id_ in available_datasets}
- for num, (name, kwargs) in enumerate(DATASETS.items()):
- repo_id = f"lerobot/{name}"
- print(f"\nConverting {repo_id} ({num}/{len(DATASETS)})")
- print("---------------------------------------------------------")
- try:
- convert_dataset(repo_id, LOCAL_DIR, **kwargs)
- status = f"{repo_id}: success."
- with open(logfile, "a") as file:
- file.write(status + "\n")
- except Exception:
- status = f"{repo_id}: failed\n {traceback.format_exc()}"
- with open(logfile, "a") as file:
- file.write(status + "\n")
- continue
-
-
-if __name__ == "__main__":
- batch_convert()
diff --git a/lerobot/common/datasets/v2/convert_dataset_v1_to_v2.py b/lerobot/common/datasets/v2/convert_dataset_v1_to_v2.py
deleted file mode 100644
index 024576d7..00000000
--- a/lerobot/common/datasets/v2/convert_dataset_v1_to_v2.py
+++ /dev/null
@@ -1,664 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script will help you convert any LeRobot dataset already pushed to the hub from codebase version 1.6 to
-2.0. You will be required to provide the 'tasks', which is a short but accurate description in plain English
-for each of the tasks performed in the dataset. This will allow you to easily train models with task-conditioning.
-
-We support 3 different scenarios for these tasks (see instructions below):
- 1. Single task dataset: all episodes of your dataset have the same single task.
- 2. Single task episodes: the episodes of your dataset each contain a single task but they can differ from
- one episode to the next.
- 3. Multi task episodes: episodes of your dataset may each contain several different tasks.
-
-
-You can also provide a robot config .yaml file (optional) to this script via the '--robot-config'
-option so that it writes information about the robot (robot type, motor names) this dataset was
-recorded with. For now, only Aloha/Koch type robots are supported with this option.
-
-
-# 1. Single task dataset
-If your dataset contains a single task, you can simply provide it directly via the CLI with the
-'--single-task' option.
-
-Examples:
-
-```bash
-python lerobot/common/datasets/v2/convert_dataset_v1_to_v2.py \
- --repo-id lerobot/aloha_sim_insertion_human_image \
- --single-task "Insert the peg into the socket." \
- --robot-config lerobot/configs/robot/aloha.yaml \
- --local-dir data
-```
-
-```bash
-python lerobot/common/datasets/v2/convert_dataset_v1_to_v2.py \
- --repo-id aliberts/koch_tutorial \
- --single-task "Pick the Lego block and drop it in the box on the right." \
- --robot-config lerobot/configs/robot/koch.yaml \
- --local-dir data
-```
-
-
-# 2. Single task episodes
-If your dataset contains multiple tasks (a single task per episode), you have two options for providing the tasks to this script:
-
-- If your dataset already contains a language instruction column in its parquet file, you can simply provide
- this column's name with the '--tasks-col' arg.
-
- Example:
-
- ```bash
- python lerobot/common/datasets/v2/convert_dataset_v1_to_v2.py \
- --repo-id lerobot/stanford_kuka_multimodal_dataset \
- --tasks-col "language_instruction" \
- --local-dir data
- ```
-
-- If your dataset doesn't contain a language instruction, you should provide the path to a .json file with the
- '--tasks-path' arg. This file should have the following structure where keys correspond to each
- episode_index in the dataset, and values are the language instruction for that episode.
-
- Example:
-
- ```json
- {
- "0": "Do something",
- "1": "Do something else",
- "2": "Do something",
- "3": "Go there",
- ...
- }
- ```
-
-# 3. Multi task episodes
-If you have multiple tasks per episode, your dataset should contain a language instruction column in its
-parquet file, and you must provide this column's name with the '--tasks-col' arg.
-
-Example:
-
-```bash
-python lerobot/common/datasets/v2/convert_dataset_v1_to_v2.py \
- --repo-id lerobot/stanford_kuka_multimodal_dataset \
- --tasks-col "language_instruction" \
- --local-dir data
-```
-"""
-
-import argparse
-import contextlib
-import filecmp
-import json
-import logging
-import math
-import shutil
-import subprocess
-import tempfile
-from pathlib import Path
-
-import datasets
-import pyarrow.compute as pc
-import pyarrow.parquet as pq
-import torch
-from datasets import Dataset
-from huggingface_hub import HfApi
-from huggingface_hub.errors import EntryNotFoundError, HfHubHTTPError
-from safetensors.torch import load_file
-
-from lerobot.common.datasets.utils import (
- DEFAULT_CHUNK_SIZE,
- DEFAULT_PARQUET_PATH,
- DEFAULT_VIDEO_PATH,
- EPISODES_PATH,
- INFO_PATH,
- STATS_PATH,
- TASKS_PATH,
- create_branch,
- create_lerobot_dataset_card,
- flatten_dict,
- get_safe_version,
- load_json,
- unflatten_dict,
- write_json,
- write_jsonlines,
-)
-from lerobot.common.datasets.video_utils import (
- VideoFrame, # noqa: F401
- get_image_pixel_channels,
- get_video_info,
-)
-from lerobot.common.robot_devices.robots.configs import RobotConfig
-from lerobot.common.robot_devices.robots.utils import make_robot_config
-
-V16 = "v1.6"
-V20 = "v2.0"
-
-GITATTRIBUTES_REF = "aliberts/gitattributes_reference"
-V1_VIDEO_FILE = "{video_key}_episode_{episode_index:06d}.mp4"
-V1_INFO_PATH = "meta_data/info.json"
-V1_STATS_PATH = "meta_data/stats.safetensors"
-
-
-def parse_robot_config(robot_cfg: RobotConfig) -> dict:
- if robot_cfg.type in ["aloha", "koch"]:
- state_names = [
- f"{arm}_{motor}" if len(robot_cfg.follower_arms) > 1 else motor
- for arm in robot_cfg.follower_arms
- for motor in robot_cfg.follower_arms[arm].motors
- ]
- action_names = [
- # f"{arm}_{motor}" for arm in ["left", "right"] for motor in robot_cfg["leader_arms"][arm]["motors"]
- f"{arm}_{motor}" if len(robot_cfg.leader_arms) > 1 else motor
- for arm in robot_cfg.leader_arms
- for motor in robot_cfg.leader_arms[arm].motors
- ]
- # elif robot_cfg["robot_type"] == "stretch3": TODO
- else:
- raise NotImplementedError(
- "Please provide robot_config={'robot_type': ..., 'names': ...} directly to convert_dataset()."
- )
-
- return {
- "robot_type": robot_cfg.type,
- "names": {
- "observation.state": state_names,
- "observation.effort": state_names,
- "action": action_names,
- },
- }
-
-
-def convert_stats_to_json(v1_dir: Path, v2_dir: Path) -> None:
- safetensor_path = v1_dir / V1_STATS_PATH
- stats = load_file(safetensor_path)
- serialized_stats = {key: value.tolist() for key, value in stats.items()}
- serialized_stats = unflatten_dict(serialized_stats)
-
- json_path = v2_dir / STATS_PATH
- json_path.parent.mkdir(exist_ok=True, parents=True)
- with open(json_path, "w") as f:
- json.dump(serialized_stats, f, indent=4)
-
- # Sanity check
- with open(json_path) as f:
- stats_json = json.load(f)
-
- stats_json = flatten_dict(stats_json)
- stats_json = {key: torch.tensor(value) for key, value in stats_json.items()}
- for key in stats:
- torch.testing.assert_close(stats_json[key], stats[key])
-
-
-def get_features_from_hf_dataset(
- dataset: Dataset, robot_config: RobotConfig | None = None
-) -> dict[str, list]:
-    robot_config = parse_robot_config(robot_config) if robot_config is not None else None
- features = {}
- for key, ft in dataset.features.items():
- if isinstance(ft, datasets.Value):
- dtype = ft.dtype
- shape = (1,)
- names = None
- if isinstance(ft, datasets.Sequence):
- assert isinstance(ft.feature, datasets.Value)
- dtype = ft.feature.dtype
- shape = (ft.length,)
- motor_names = (
- robot_config["names"][key] if robot_config else [f"motor_{i}" for i in range(ft.length)]
- )
- assert len(motor_names) == shape[0]
- names = {"motors": motor_names}
- elif isinstance(ft, datasets.Image):
- dtype = "image"
- image = dataset[0][key] # Assuming first row
- channels = get_image_pixel_channels(image)
- shape = (image.height, image.width, channels)
- names = ["height", "width", "channels"]
- elif ft._type == "VideoFrame":
- dtype = "video"
- shape = None # Add shape later
- names = ["height", "width", "channels"]
-
- features[key] = {
- "dtype": dtype,
- "shape": shape,
- "names": names,
- }
-
- return features
-
-
-def add_task_index_by_episodes(dataset: Dataset, tasks_by_episodes: dict) -> tuple[Dataset, list[str]]:
- df = dataset.to_pandas()
- tasks = list(set(tasks_by_episodes.values()))
- tasks_to_task_index = {task: task_idx for task_idx, task in enumerate(tasks)}
- episodes_to_task_index = {ep_idx: tasks_to_task_index[task] for ep_idx, task in tasks_by_episodes.items()}
- df["task_index"] = df["episode_index"].map(episodes_to_task_index).astype(int)
-
- features = dataset.features
- features["task_index"] = datasets.Value(dtype="int64")
- dataset = Dataset.from_pandas(df, features=features, split="train")
- return dataset, tasks
-
-
-def add_task_index_from_tasks_col(
- dataset: Dataset, tasks_col: str
-) -> tuple[Dataset, dict[str, list[str]], list[str]]:
- df = dataset.to_pandas()
-
- # HACK: This is to clean some of the instructions in our version of Open X datasets
- prefix_to_clean = "tf.Tensor(b'"
- suffix_to_clean = "', shape=(), dtype=string)"
- df[tasks_col] = df[tasks_col].str.removeprefix(prefix_to_clean).str.removesuffix(suffix_to_clean)
-
- # Create task_index col
- tasks_by_episode = df.groupby("episode_index")[tasks_col].unique().apply(lambda x: x.tolist()).to_dict()
- tasks = df[tasks_col].unique().tolist()
- tasks_to_task_index = {task: idx for idx, task in enumerate(tasks)}
- df["task_index"] = df[tasks_col].map(tasks_to_task_index).astype(int)
-
- # Build the dataset back from df
- features = dataset.features
- features["task_index"] = datasets.Value(dtype="int64")
- dataset = Dataset.from_pandas(df, features=features, split="train")
- dataset = dataset.remove_columns(tasks_col)
-
- return dataset, tasks, tasks_by_episode
-
-
-def split_parquet_by_episodes(
- dataset: Dataset,
- total_episodes: int,
- total_chunks: int,
- output_dir: Path,
-) -> list:
- table = dataset.data.table
- episode_lengths = []
- for ep_chunk in range(total_chunks):
- ep_chunk_start = DEFAULT_CHUNK_SIZE * ep_chunk
- ep_chunk_end = min(DEFAULT_CHUNK_SIZE * (ep_chunk + 1), total_episodes)
- chunk_dir = "/".join(DEFAULT_PARQUET_PATH.split("/")[:-1]).format(episode_chunk=ep_chunk)
- (output_dir / chunk_dir).mkdir(parents=True, exist_ok=True)
- for ep_idx in range(ep_chunk_start, ep_chunk_end):
- ep_table = table.filter(pc.equal(table["episode_index"], ep_idx))
- episode_lengths.insert(ep_idx, len(ep_table))
- output_file = output_dir / DEFAULT_PARQUET_PATH.format(
- episode_chunk=ep_chunk, episode_index=ep_idx
- )
- pq.write_table(ep_table, output_file)
-
- return episode_lengths
-
-
-def move_videos(
- repo_id: str,
- video_keys: list[str],
- total_episodes: int,
- total_chunks: int,
- work_dir: Path,
- clean_gittatributes: Path,
- branch: str = "main",
-) -> None:
- """
-    HACK: Since HfApi() doesn't provide a way to move files directly in a repo, this function runs git
-    commands to fetch the git lfs references of the video files and move them into subdirectories without
-    having to actually download the files themselves.
- """
- _lfs_clone(repo_id, work_dir, branch)
-
- videos_moved = False
- video_files = [str(f.relative_to(work_dir)) for f in work_dir.glob("videos*/*.mp4")]
- if len(video_files) == 0:
- video_files = [str(f.relative_to(work_dir)) for f in work_dir.glob("videos*/*/*/*.mp4")]
- videos_moved = True # Videos have already been moved
-
- assert len(video_files) == total_episodes * len(video_keys)
-
- lfs_untracked_videos = _get_lfs_untracked_videos(work_dir, video_files)
-
- current_gittatributes = work_dir / ".gitattributes"
- if not filecmp.cmp(current_gittatributes, clean_gittatributes, shallow=False):
- fix_gitattributes(work_dir, current_gittatributes, clean_gittatributes)
-
- if lfs_untracked_videos:
- fix_lfs_video_files_tracking(work_dir, video_files)
-
- if videos_moved:
- return
-
- video_dirs = sorted(work_dir.glob("videos*/"))
- for ep_chunk in range(total_chunks):
- ep_chunk_start = DEFAULT_CHUNK_SIZE * ep_chunk
- ep_chunk_end = min(DEFAULT_CHUNK_SIZE * (ep_chunk + 1), total_episodes)
- for vid_key in video_keys:
- chunk_dir = "/".join(DEFAULT_VIDEO_PATH.split("/")[:-1]).format(
- episode_chunk=ep_chunk, video_key=vid_key
- )
- (work_dir / chunk_dir).mkdir(parents=True, exist_ok=True)
-
- for ep_idx in range(ep_chunk_start, ep_chunk_end):
- target_path = DEFAULT_VIDEO_PATH.format(
- episode_chunk=ep_chunk, video_key=vid_key, episode_index=ep_idx
- )
- video_file = V1_VIDEO_FILE.format(video_key=vid_key, episode_index=ep_idx)
- if len(video_dirs) == 1:
- video_path = video_dirs[0] / video_file
- else:
- for dir in video_dirs:
- if (dir / video_file).is_file():
- video_path = dir / video_file
- break
-
- video_path.rename(work_dir / target_path)
-
- commit_message = "Move video files into chunk subdirectories"
- subprocess.run(["git", "add", "."], cwd=work_dir, check=True)
- subprocess.run(["git", "commit", "-m", commit_message], cwd=work_dir, check=True)
- subprocess.run(["git", "push"], cwd=work_dir, check=True)
-
-
-def fix_lfs_video_files_tracking(work_dir: Path, lfs_untracked_videos: list[str]) -> None:
- """
- HACK: This function fixes the tracking by git lfs which was not properly set on some repos. In that case,
- there's no other option than to download the actual files and reupload them with lfs tracking.
- """
- for i in range(0, len(lfs_untracked_videos), 100):
- files = lfs_untracked_videos[i : i + 100]
- try:
- subprocess.run(["git", "rm", "--cached", *files], cwd=work_dir, capture_output=True, check=True)
- except subprocess.CalledProcessError as e:
- print("git rm --cached ERROR:")
- print(e.stderr)
- subprocess.run(["git", "add", *files], cwd=work_dir, check=True)
-
- commit_message = "Track video files with git lfs"
- subprocess.run(["git", "commit", "-m", commit_message], cwd=work_dir, check=True)
- subprocess.run(["git", "push"], cwd=work_dir, check=True)
-
-
-def fix_gitattributes(work_dir: Path, current_gittatributes: Path, clean_gittatributes: Path) -> None:
- shutil.copyfile(clean_gittatributes, current_gittatributes)
- subprocess.run(["git", "add", ".gitattributes"], cwd=work_dir, check=True)
- subprocess.run(["git", "commit", "-m", "Fix .gitattributes"], cwd=work_dir, check=True)
- subprocess.run(["git", "push"], cwd=work_dir, check=True)
-
-
-def _lfs_clone(repo_id: str, work_dir: Path, branch: str) -> None:
- subprocess.run(["git", "lfs", "install"], cwd=work_dir, check=True)
- repo_url = f"https://huggingface.co/datasets/{repo_id}"
- env = {"GIT_LFS_SKIP_SMUDGE": "1"} # Prevent downloading LFS files
- subprocess.run(
- ["git", "clone", "--branch", branch, "--single-branch", "--depth", "1", repo_url, str(work_dir)],
- check=True,
- env=env,
- )
-
-
-def _get_lfs_untracked_videos(work_dir: Path, video_files: list[str]) -> list[str]:
- lfs_tracked_files = subprocess.run(
- ["git", "lfs", "ls-files", "-n"], cwd=work_dir, capture_output=True, text=True, check=True
- )
- lfs_tracked_files = set(lfs_tracked_files.stdout.splitlines())
- return [f for f in video_files if f not in lfs_tracked_files]
-
-
-def get_videos_info(repo_id: str, local_dir: Path, video_keys: list[str], branch: str) -> dict:
- # Assumes first episode
- video_files = [
- DEFAULT_VIDEO_PATH.format(episode_chunk=0, video_key=vid_key, episode_index=0)
- for vid_key in video_keys
- ]
- hub_api = HfApi()
- hub_api.snapshot_download(
- repo_id=repo_id, repo_type="dataset", local_dir=local_dir, revision=branch, allow_patterns=video_files
- )
- videos_info_dict = {}
- for vid_key, vid_path in zip(video_keys, video_files, strict=True):
- videos_info_dict[vid_key] = get_video_info(local_dir / vid_path)
-
- return videos_info_dict
-
-
-def convert_dataset(
- repo_id: str,
- local_dir: Path,
- single_task: str | None = None,
- tasks_path: Path | None = None,
- tasks_col: Path | None = None,
- robot_config: RobotConfig | None = None,
- test_branch: str | None = None,
- **card_kwargs,
-):
- v1 = get_safe_version(repo_id, V16)
- v1x_dir = local_dir / V16 / repo_id
- v20_dir = local_dir / V20 / repo_id
- v1x_dir.mkdir(parents=True, exist_ok=True)
- v20_dir.mkdir(parents=True, exist_ok=True)
-
- hub_api = HfApi()
- hub_api.snapshot_download(
- repo_id=repo_id, repo_type="dataset", revision=v1, local_dir=v1x_dir, ignore_patterns="videos*/"
- )
- branch = "main"
- if test_branch:
- branch = test_branch
- create_branch(repo_id=repo_id, branch=test_branch, repo_type="dataset")
-
- metadata_v1 = load_json(v1x_dir / V1_INFO_PATH)
- dataset = datasets.load_dataset("parquet", data_dir=v1x_dir / "data", split="train")
- features = get_features_from_hf_dataset(dataset, robot_config)
- video_keys = [key for key, ft in features.items() if ft["dtype"] == "video"]
-
- if single_task and "language_instruction" in dataset.column_names:
- logging.warning(
- "'single_task' provided but 'language_instruction' tasks_col found. Using 'language_instruction'.",
- )
- single_task = None
- tasks_col = "language_instruction"
-
- # Episodes & chunks
- episode_indices = sorted(dataset.unique("episode_index"))
- total_episodes = len(episode_indices)
- assert episode_indices == list(range(total_episodes))
- total_videos = total_episodes * len(video_keys)
- total_chunks = total_episodes // DEFAULT_CHUNK_SIZE
- if total_episodes % DEFAULT_CHUNK_SIZE != 0:
- total_chunks += 1
-
- # Tasks
- if single_task:
- tasks_by_episodes = dict.fromkeys(episode_indices, single_task)
- dataset, tasks = add_task_index_by_episodes(dataset, tasks_by_episodes)
- tasks_by_episodes = {ep_idx: [task] for ep_idx, task in tasks_by_episodes.items()}
- elif tasks_path:
- tasks_by_episodes = load_json(tasks_path)
- tasks_by_episodes = {int(ep_idx): task for ep_idx, task in tasks_by_episodes.items()}
- dataset, tasks = add_task_index_by_episodes(dataset, tasks_by_episodes)
- tasks_by_episodes = {ep_idx: [task] for ep_idx, task in tasks_by_episodes.items()}
- elif tasks_col:
- dataset, tasks, tasks_by_episodes = add_task_index_from_tasks_col(dataset, tasks_col)
- else:
-        raise ValueError("One of 'single_task', 'tasks_path' or 'tasks_col' must be provided.")
-
- assert set(tasks) == {task for ep_tasks in tasks_by_episodes.values() for task in ep_tasks}
- tasks = [{"task_index": task_idx, "task": task} for task_idx, task in enumerate(tasks)]
- write_jsonlines(tasks, v20_dir / TASKS_PATH)
- features["task_index"] = {
- "dtype": "int64",
- "shape": (1,),
- "names": None,
- }
-
- # Videos
- if video_keys:
- assert metadata_v1.get("video", False)
- dataset = dataset.remove_columns(video_keys)
- clean_gitattr = Path(
- hub_api.hf_hub_download(
- repo_id=GITATTRIBUTES_REF, repo_type="dataset", local_dir=local_dir, filename=".gitattributes"
- )
- ).absolute()
- with tempfile.TemporaryDirectory() as tmp_video_dir:
- move_videos(
- repo_id, video_keys, total_episodes, total_chunks, Path(tmp_video_dir), clean_gitattr, branch
- )
- videos_info = get_videos_info(repo_id, v1x_dir, video_keys=video_keys, branch=branch)
- for key in video_keys:
- features[key]["shape"] = (
- videos_info[key].pop("video.height"),
- videos_info[key].pop("video.width"),
- videos_info[key].pop("video.channels"),
- )
- features[key]["video_info"] = videos_info[key]
- assert math.isclose(videos_info[key]["video.fps"], metadata_v1["fps"], rel_tol=1e-3)
- if "encoding" in metadata_v1:
- assert videos_info[key]["video.pix_fmt"] == metadata_v1["encoding"]["pix_fmt"]
- else:
- assert metadata_v1.get("video", 0) == 0
- videos_info = None
-
- # Split data into 1 parquet file by episode
- episode_lengths = split_parquet_by_episodes(dataset, total_episodes, total_chunks, v20_dir)
-
- if robot_config is not None:
- robot_type = robot_config.type
- repo_tags = [robot_type]
- else:
- robot_type = "unknown"
- repo_tags = None
-
- # Episodes
- episodes = [
- {"episode_index": ep_idx, "tasks": tasks_by_episodes[ep_idx], "length": episode_lengths[ep_idx]}
- for ep_idx in episode_indices
- ]
- write_jsonlines(episodes, v20_dir / EPISODES_PATH)
-
- # Assemble metadata v2.0
- metadata_v2_0 = {
- "codebase_version": V20,
- "robot_type": robot_type,
- "total_episodes": total_episodes,
- "total_frames": len(dataset),
- "total_tasks": len(tasks),
- "total_videos": total_videos,
- "total_chunks": total_chunks,
- "chunks_size": DEFAULT_CHUNK_SIZE,
- "fps": metadata_v1["fps"],
- "splits": {"train": f"0:{total_episodes}"},
- "data_path": DEFAULT_PARQUET_PATH,
- "video_path": DEFAULT_VIDEO_PATH if video_keys else None,
- "features": features,
- }
- write_json(metadata_v2_0, v20_dir / INFO_PATH)
- convert_stats_to_json(v1x_dir, v20_dir)
- card = create_lerobot_dataset_card(tags=repo_tags, dataset_info=metadata_v2_0, **card_kwargs)
-
- with contextlib.suppress(EntryNotFoundError, HfHubHTTPError):
- hub_api.delete_folder(repo_id=repo_id, path_in_repo="data", repo_type="dataset", revision=branch)
-
- with contextlib.suppress(EntryNotFoundError, HfHubHTTPError):
- hub_api.delete_folder(repo_id=repo_id, path_in_repo="meta_data", repo_type="dataset", revision=branch)
-
- with contextlib.suppress(EntryNotFoundError, HfHubHTTPError):
- hub_api.delete_folder(repo_id=repo_id, path_in_repo="meta", repo_type="dataset", revision=branch)
-
- hub_api.upload_folder(
- repo_id=repo_id,
- path_in_repo="data",
- folder_path=v20_dir / "data",
- repo_type="dataset",
- revision=branch,
- )
- hub_api.upload_folder(
- repo_id=repo_id,
- path_in_repo="meta",
- folder_path=v20_dir / "meta",
- repo_type="dataset",
- revision=branch,
- )
-
- card.push_to_hub(repo_id=repo_id, repo_type="dataset", revision=branch)
-
- if not test_branch:
- create_branch(repo_id=repo_id, branch=V20, repo_type="dataset")
-
-
-def main():
- parser = argparse.ArgumentParser()
- task_args = parser.add_mutually_exclusive_group(required=True)
-
- parser.add_argument(
- "--repo-id",
- type=str,
- required=True,
- help="Repository identifier on Hugging Face: a community or a user name `/` the name of the dataset (e.g. `lerobot/pusht`, `cadene/aloha_sim_insertion_human`).",
- )
- task_args.add_argument(
- "--single-task",
- type=str,
- help="A short but accurate description of the single task performed in the dataset.",
- )
- task_args.add_argument(
- "--tasks-col",
- type=str,
- help="The name of the column containing language instructions",
- )
- task_args.add_argument(
- "--tasks-path",
- type=Path,
- help="The path to a .json file containing one language instruction for each episode_index",
- )
- parser.add_argument(
- "--robot",
- type=str,
- default=None,
- help="Robot config used for the dataset during conversion (e.g. 'koch', 'aloha', 'so100', etc.)",
- )
- parser.add_argument(
- "--local-dir",
- type=Path,
- default=None,
- help="Local directory to store the dataset during conversion. Defaults to /tmp/lerobot_dataset_v2",
- )
- parser.add_argument(
- "--license",
- type=str,
- default="apache-2.0",
- help="Repo license. Must be one of https://huggingface.co/docs/hub/repositories-licenses. Defaults to apache-2.0.",
- )
- parser.add_argument(
- "--test-branch",
- type=str,
- default=None,
- help="Repo branch to test your conversion first (e.g. 'v2.0.test')",
- )
-
- args = parser.parse_args()
- if not args.local_dir:
- args.local_dir = Path("/tmp/lerobot_dataset_v2")
-
- if args.robot is not None:
- robot_config = make_robot_config(args.robot)
-
- del args.robot
-
- convert_dataset(**vars(args), robot_config=robot_config)
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/common/datasets/v21/_remove_language_instruction.py b/lerobot/common/datasets/v21/_remove_language_instruction.py
deleted file mode 100644
index 643ddd3f..00000000
--- a/lerobot/common/datasets/v21/_remove_language_instruction.py
+++ /dev/null
@@ -1,87 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import traceback
-from pathlib import Path
-
-from datasets import get_dataset_config_info
-from huggingface_hub import HfApi
-
-from lerobot import available_datasets
-from lerobot.common.datasets.lerobot_dataset import LeRobotDatasetMetadata
-from lerobot.common.datasets.utils import INFO_PATH, write_info
-from lerobot.common.datasets.v21.convert_dataset_v20_to_v21 import V20, SuppressWarnings
-
-LOCAL_DIR = Path("data/")
-
-hub_api = HfApi()
-
-
-def fix_dataset(repo_id: str) -> str:
- if not hub_api.revision_exists(repo_id, V20, repo_type="dataset"):
- return f"{repo_id}: skipped (not in {V20})."
-
- dataset_info = get_dataset_config_info(repo_id, "default")
- with SuppressWarnings():
- lerobot_metadata = LeRobotDatasetMetadata(repo_id, revision=V20, force_cache_sync=True)
-
- meta_features = {key for key, ft in lerobot_metadata.features.items() if ft["dtype"] != "video"}
- parquet_features = set(dataset_info.features)
-
- diff_parquet_meta = parquet_features - meta_features
- diff_meta_parquet = meta_features - parquet_features
-
- if diff_parquet_meta:
- raise ValueError(f"In parquet not in info.json: {parquet_features - meta_features}")
-
- if not diff_meta_parquet:
- return f"{repo_id}: skipped (no diff)"
-
- if diff_meta_parquet:
- logging.warning(f"In info.json not in parquet: {meta_features - parquet_features}")
- assert diff_meta_parquet == {"language_instruction"}
- lerobot_metadata.features.pop("language_instruction")
- write_info(lerobot_metadata.info, lerobot_metadata.root)
- commit_info = hub_api.upload_file(
- path_or_fileobj=lerobot_metadata.root / INFO_PATH,
- path_in_repo=INFO_PATH,
- repo_id=repo_id,
- repo_type="dataset",
- revision=V20,
- commit_message="Remove 'language_instruction'",
- create_pr=True,
- )
- return f"{repo_id}: success - PR: {commit_info.pr_url}"
-
-
-def batch_fix():
- status = {}
- LOCAL_DIR.mkdir(parents=True, exist_ok=True)
- logfile = LOCAL_DIR / "fix_features_v20.txt"
- for num, repo_id in enumerate(available_datasets):
- print(f"\nConverting {repo_id} ({num + 1}/{len(available_datasets)})")
- print("---------------------------------------------------------")
- try:
- status = fix_dataset(repo_id)
- except Exception:
- status = f"{repo_id}: failed\n {traceback.format_exc()}"
-
- logging.info(status)
- with open(logfile, "a") as file:
- file.write(status + "\n")
-
-
-if __name__ == "__main__":
- batch_fix()
diff --git a/lerobot/common/datasets/v21/batch_convert_dataset_v20_to_v21.py b/lerobot/common/datasets/v21/batch_convert_dataset_v20_to_v21.py
deleted file mode 100644
index cc9272a8..00000000
--- a/lerobot/common/datasets/v21/batch_convert_dataset_v20_to_v21.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script is for internal use to convert all datasets under the 'lerobot' hub user account to v2.1.
-"""
-
-import traceback
-from pathlib import Path
-
-from huggingface_hub import HfApi
-
-from lerobot import available_datasets
-from lerobot.common.datasets.v21.convert_dataset_v20_to_v21 import V21, convert_dataset
-
-LOCAL_DIR = Path("data/")
-
-
-def batch_convert():
- status = {}
- LOCAL_DIR.mkdir(parents=True, exist_ok=True)
- logfile = LOCAL_DIR / "conversion_log_v21.txt"
- hub_api = HfApi()
- for num, repo_id in enumerate(available_datasets):
- print(f"\nConverting {repo_id} ({num + 1}/{len(available_datasets)})")
- print("---------------------------------------------------------")
- try:
- if hub_api.revision_exists(repo_id, V21, repo_type="dataset"):
- status = f"{repo_id}: success (already in {V21})."
- else:
- convert_dataset(repo_id)
- status = f"{repo_id}: success."
- except Exception:
- status = f"{repo_id}: failed\n {traceback.format_exc()}"
-
- with open(logfile, "a") as file:
- file.write(status + "\n")
-
-
-if __name__ == "__main__":
- batch_convert()
diff --git a/lerobot/common/datasets/v21/convert_dataset_v20_to_v21.py b/lerobot/common/datasets/v21/convert_dataset_v20_to_v21.py
deleted file mode 100644
index 176d16d0..00000000
--- a/lerobot/common/datasets/v21/convert_dataset_v20_to_v21.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script will help you convert any LeRobot dataset already pushed to the hub from codebase version 2.0 to
-2.1. It will:
-
-- Generate per-episode stats and write them to `episodes_stats.jsonl`.
-- Check consistency between these new stats and the old ones.
-- Remove the deprecated `stats.json`.
-- Update `codebase_version` in `info.json`.
-- Push this new version to the hub on the 'main' branch and tag it with "v2.1".
-
-Usage:
-
-```bash
-python lerobot/common/datasets/v21/convert_dataset_v20_to_v21.py \
- --repo-id=aliberts/koch_tutorial
-```
-
-"""
-
-import argparse
-import logging
-
-from huggingface_hub import HfApi
-
-from lerobot.common.datasets.lerobot_dataset import CODEBASE_VERSION, LeRobotDataset
-from lerobot.common.datasets.utils import EPISODES_STATS_PATH, STATS_PATH, load_stats, write_info
-from lerobot.common.datasets.v21.convert_stats import check_aggregate_stats, convert_stats
-
-V20 = "v2.0"
-V21 = "v2.1"
-
-
-class SuppressWarnings:
- def __enter__(self):
- self.previous_level = logging.getLogger().getEffectiveLevel()
- logging.getLogger().setLevel(logging.ERROR)
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- logging.getLogger().setLevel(self.previous_level)
-
-
-def convert_dataset(
- repo_id: str,
- branch: str | None = None,
- num_workers: int = 4,
-):
- with SuppressWarnings():
- dataset = LeRobotDataset(repo_id, revision=V20, force_cache_sync=True)
-
- if (dataset.root / EPISODES_STATS_PATH).is_file():
- (dataset.root / EPISODES_STATS_PATH).unlink()
-
- convert_stats(dataset, num_workers=num_workers)
- ref_stats = load_stats(dataset.root)
- check_aggregate_stats(dataset, ref_stats)
-
- dataset.meta.info["codebase_version"] = CODEBASE_VERSION
- write_info(dataset.meta.info, dataset.root)
-
- dataset.push_to_hub(branch=branch, tag_version=False, allow_patterns="meta/")
-
- # delete old stats.json file
- if (dataset.root / STATS_PATH).is_file():
- (dataset.root / STATS_PATH).unlink()
-
- hub_api = HfApi()
- if hub_api.file_exists(
- repo_id=dataset.repo_id, filename=STATS_PATH, revision=branch, repo_type="dataset"
- ):
- hub_api.delete_file(
- path_in_repo=STATS_PATH, repo_id=dataset.repo_id, revision=branch, repo_type="dataset"
- )
-
- hub_api.create_tag(repo_id, tag=CODEBASE_VERSION, revision=branch, repo_type="dataset")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--repo-id",
- type=str,
- required=True,
- help="Repository identifier on Hugging Face: a community or a user name `/` the name of the dataset "
- "(e.g. `lerobot/pusht`, `cadene/aloha_sim_insertion_human`).",
- )
- parser.add_argument(
- "--branch",
- type=str,
- default=None,
- help="Repo branch to push your dataset. Defaults to the main branch.",
- )
- parser.add_argument(
- "--num-workers",
- type=int,
- default=4,
- help="Number of workers for parallelizing stats compute. Defaults to 4.",
- )
-
- args = parser.parse_args()
- convert_dataset(**vars(args))
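For reference, the conversion above can also be driven programmatically rather than through the CLI. A minimal sketch, assuming the module shown in this (now removed) file is still importable at the same path:

```python
from lerobot.common.datasets.v21.convert_dataset_v20_to_v21 import convert_dataset

# Recompute per-episode stats, bump codebase_version to v2.1 and tag the revision on the hub.
convert_dataset(
    repo_id="aliberts/koch_tutorial",  # same example repo id as in the docstring above
    branch=None,                       # push to the 'main' branch
    num_workers=4,                     # threads used to compute per-episode stats
)
```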
diff --git a/lerobot/common/datasets/v21/convert_stats.py b/lerobot/common/datasets/v21/convert_stats.py
deleted file mode 100644
index 4a20b427..00000000
--- a/lerobot/common/datasets/v21/convert_stats.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from concurrent.futures import ThreadPoolExecutor, as_completed
-
-import numpy as np
-from tqdm import tqdm
-
-from lerobot.common.datasets.compute_stats import aggregate_stats, get_feature_stats, sample_indices
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.common.datasets.utils import write_episode_stats
-
-
-def sample_episode_video_frames(dataset: LeRobotDataset, episode_index: int, ft_key: str) -> np.ndarray:
- ep_len = dataset.meta.episodes[episode_index]["length"]
- sampled_indices = sample_indices(ep_len)
- query_timestamps = dataset._get_query_timestamps(0.0, {ft_key: sampled_indices})
- video_frames = dataset._query_videos(query_timestamps, episode_index)
- return video_frames[ft_key].numpy()
-
-
-def convert_episode_stats(dataset: LeRobotDataset, ep_idx: int):
- ep_start_idx = dataset.episode_data_index["from"][ep_idx]
- ep_end_idx = dataset.episode_data_index["to"][ep_idx]
- ep_data = dataset.hf_dataset.select(range(ep_start_idx, ep_end_idx))
-
- ep_stats = {}
- for key, ft in dataset.features.items():
- if ft["dtype"] == "video":
- # We sample only for videos
- ep_ft_data = sample_episode_video_frames(dataset, ep_idx, key)
- else:
- ep_ft_data = np.array(ep_data[key])
-
- axes_to_reduce = (0, 2, 3) if ft["dtype"] in ["image", "video"] else 0
- keepdims = True if ft["dtype"] in ["image", "video"] else ep_ft_data.ndim == 1
- ep_stats[key] = get_feature_stats(ep_ft_data, axis=axes_to_reduce, keepdims=keepdims)
-
- if ft["dtype"] in ["image", "video"]: # remove batch dim
- ep_stats[key] = {
- k: v if k == "count" else np.squeeze(v, axis=0) for k, v in ep_stats[key].items()
- }
-
- dataset.meta.episodes_stats[ep_idx] = ep_stats
-
-
-def convert_stats(dataset: LeRobotDataset, num_workers: int = 0):
- assert dataset.episodes is None
- print("Computing episodes stats")
- total_episodes = dataset.meta.total_episodes
- if num_workers > 0:
- with ThreadPoolExecutor(max_workers=num_workers) as executor:
- futures = {
- executor.submit(convert_episode_stats, dataset, ep_idx): ep_idx
- for ep_idx in range(total_episodes)
- }
- for future in tqdm(as_completed(futures), total=total_episodes):
- future.result()
- else:
- for ep_idx in tqdm(range(total_episodes)):
- convert_episode_stats(dataset, ep_idx)
-
- for ep_idx in tqdm(range(total_episodes)):
- write_episode_stats(ep_idx, dataset.meta.episodes_stats[ep_idx], dataset.root)
-
-
-def check_aggregate_stats(
- dataset: LeRobotDataset,
- reference_stats: dict[str, dict[str, np.ndarray]],
- video_rtol_atol: tuple[float, float] = (1e-2, 1e-2),
- default_rtol_atol: tuple[float, float] = (5e-6, 6e-5),
-):
- """Verifies that the aggregated stats from episodes_stats are close to reference stats."""
- agg_stats = aggregate_stats(list(dataset.meta.episodes_stats.values()))
- for key, ft in dataset.features.items():
- # These values might need some fine-tuning
- if ft["dtype"] == "video":
- # to account for image sub-sampling
- rtol, atol = video_rtol_atol
- else:
- rtol, atol = default_rtol_atol
-
- for stat, val in agg_stats[key].items():
- if key in reference_stats and stat in reference_stats[key]:
- err_msg = f"feature='{key}' stats='{stat}'"
- np.testing.assert_allclose(
- val, reference_stats[key][stat], rtol=rtol, atol=atol, err_msg=err_msg
- )
diff --git a/lerobot/common/envs/configs.py b/lerobot/common/envs/configs.py
deleted file mode 100644
index cf90048a..00000000
--- a/lerobot/common/envs/configs.py
+++ /dev/null
@@ -1,156 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-from dataclasses import dataclass, field
-
-import draccus
-
-from lerobot.common.constants import ACTION, OBS_ENV, OBS_IMAGE, OBS_IMAGES, OBS_ROBOT
-from lerobot.configs.types import FeatureType, PolicyFeature
-
-
-@dataclass
-class EnvConfig(draccus.ChoiceRegistry, abc.ABC):
- task: str | None = None
- fps: int = 30
- features: dict[str, PolicyFeature] = field(default_factory=dict)
- features_map: dict[str, str] = field(default_factory=dict)
-
- @property
- def type(self) -> str:
- return self.get_choice_name(self.__class__)
-
- @abc.abstractproperty
- def gym_kwargs(self) -> dict:
- raise NotImplementedError()
-
-
-@EnvConfig.register_subclass("aloha")
-@dataclass
-class AlohaEnv(EnvConfig):
- task: str = "AlohaInsertion-v0"
- fps: int = 50
- episode_length: int = 400
- obs_type: str = "pixels_agent_pos"
- render_mode: str = "rgb_array"
- features: dict[str, PolicyFeature] = field(
- default_factory=lambda: {
- "action": PolicyFeature(type=FeatureType.ACTION, shape=(14,)),
- }
- )
- features_map: dict[str, str] = field(
- default_factory=lambda: {
- "action": ACTION,
- "agent_pos": OBS_ROBOT,
- "top": f"{OBS_IMAGE}.top",
- "pixels/top": f"{OBS_IMAGES}.top",
- }
- )
-
- def __post_init__(self):
- if self.obs_type == "pixels":
- self.features["top"] = PolicyFeature(type=FeatureType.VISUAL, shape=(480, 640, 3))
- elif self.obs_type == "pixels_agent_pos":
- self.features["agent_pos"] = PolicyFeature(type=FeatureType.STATE, shape=(14,))
- self.features["pixels/top"] = PolicyFeature(type=FeatureType.VISUAL, shape=(480, 640, 3))
-
- @property
- def gym_kwargs(self) -> dict:
- return {
- "obs_type": self.obs_type,
- "render_mode": self.render_mode,
- "max_episode_steps": self.episode_length,
- }
-
-
-@EnvConfig.register_subclass("pusht")
-@dataclass
-class PushtEnv(EnvConfig):
- task: str = "PushT-v0"
- fps: int = 10
- episode_length: int = 300
- obs_type: str = "pixels_agent_pos"
- render_mode: str = "rgb_array"
- visualization_width: int = 384
- visualization_height: int = 384
- features: dict[str, PolicyFeature] = field(
- default_factory=lambda: {
- "action": PolicyFeature(type=FeatureType.ACTION, shape=(2,)),
- "agent_pos": PolicyFeature(type=FeatureType.STATE, shape=(2,)),
- }
- )
- features_map: dict[str, str] = field(
- default_factory=lambda: {
- "action": ACTION,
- "agent_pos": OBS_ROBOT,
- "environment_state": OBS_ENV,
- "pixels": OBS_IMAGE,
- }
- )
-
- def __post_init__(self):
- if self.obs_type == "pixels_agent_pos":
- self.features["pixels"] = PolicyFeature(type=FeatureType.VISUAL, shape=(384, 384, 3))
- elif self.obs_type == "environment_state_agent_pos":
- self.features["environment_state"] = PolicyFeature(type=FeatureType.ENV, shape=(16,))
-
- @property
- def gym_kwargs(self) -> dict:
- return {
- "obs_type": self.obs_type,
- "render_mode": self.render_mode,
- "visualization_width": self.visualization_width,
- "visualization_height": self.visualization_height,
- "max_episode_steps": self.episode_length,
- }
-
-
-@EnvConfig.register_subclass("xarm")
-@dataclass
-class XarmEnv(EnvConfig):
- task: str = "XarmLift-v0"
- fps: int = 15
- episode_length: int = 200
- obs_type: str = "pixels_agent_pos"
- render_mode: str = "rgb_array"
- visualization_width: int = 384
- visualization_height: int = 384
- features: dict[str, PolicyFeature] = field(
- default_factory=lambda: {
- "action": PolicyFeature(type=FeatureType.ACTION, shape=(4,)),
- "pixels": PolicyFeature(type=FeatureType.VISUAL, shape=(84, 84, 3)),
- }
- )
- features_map: dict[str, str] = field(
- default_factory=lambda: {
- "action": ACTION,
- "agent_pos": OBS_ROBOT,
- "pixels": OBS_IMAGE,
- }
- )
-
- def __post_init__(self):
- if self.obs_type == "pixels_agent_pos":
- self.features["agent_pos"] = PolicyFeature(type=FeatureType.STATE, shape=(4,))
-
- @property
- def gym_kwargs(self) -> dict:
- return {
- "obs_type": self.obs_type,
- "render_mode": self.render_mode,
- "visualization_width": self.visualization_width,
- "visualization_height": self.visualization_height,
- "max_episode_steps": self.episode_length,
- }
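To make the registered configs above concrete, here is a small sketch of how one of them is instantiated and queried (assuming the removed `lerobot.common.envs.configs` module is importable). `__post_init__` fills in the observation features that match the requested `obs_type`:

```python
from lerobot.common.envs.configs import AlohaEnv

cfg = AlohaEnv(obs_type="pixels_agent_pos")
print(cfg.type)        # "aloha" (the registered choice name)
print(cfg.fps)         # 50
print(cfg.features)    # action (14,), agent_pos (14,), pixels/top (480, 640, 3)
print(cfg.gym_kwargs)  # {'obs_type': 'pixels_agent_pos', 'render_mode': 'rgb_array', 'max_episode_steps': 400}
```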
diff --git a/lerobot/common/envs/factory.py b/lerobot/common/envs/factory.py
deleted file mode 100644
index 8450f84b..00000000
--- a/lerobot/common/envs/factory.py
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import importlib
-
-import gymnasium as gym
-
-from lerobot.common.envs.configs import AlohaEnv, EnvConfig, PushtEnv, XarmEnv
-
-
-def make_env_config(env_type: str, **kwargs) -> EnvConfig:
- if env_type == "aloha":
- return AlohaEnv(**kwargs)
- elif env_type == "pusht":
- return PushtEnv(**kwargs)
- elif env_type == "xarm":
- return XarmEnv(**kwargs)
- else:
- raise ValueError(f"Policy type '{env_type}' is not available.")
-
-
-def make_env(cfg: EnvConfig, n_envs: int = 1, use_async_envs: bool = False) -> gym.vector.VectorEnv | None:
- """Makes a gym vector environment according to the config.
-
- Args:
- cfg (EnvConfig): the config of the environment to instantiate.
- n_envs (int, optional): The number of parallelized env to return. Defaults to 1.
- use_async_envs (bool, optional): Whether to return an AsyncVectorEnv or a SyncVectorEnv. Defaults to
- False.
-
- Raises:
- ValueError: if n_envs < 1
- ModuleNotFoundError: If the requested env package is not installed
-
- Returns:
- gym.vector.VectorEnv: The parallelized gym.env instance.
- """
- if n_envs < 1:
- raise ValueError("`n_envs` must be at least 1")
-
- package_name = f"gym_{cfg.type}"
-
- try:
- importlib.import_module(package_name)
- except ModuleNotFoundError as e:
- print(f"{package_name} is not installed. Please install it with `pip install 'lerobot[{cfg.type}]'`")
- raise e
-
- gym_handle = f"{package_name}/{cfg.task}"
-
- # batched version of the env that returns an observation of shape (b, c)
- env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv
- env = env_cls(
- [lambda: gym.make(gym_handle, disable_env_checker=True, **cfg.gym_kwargs) for _ in range(n_envs)]
- )
-
- return env
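As a usage sketch, the two helpers compose as follows. This assumes the removed factory module is importable and that the matching simulation package (here `gym_pusht`) is installed:

```python
from lerobot.common.envs.factory import make_env, make_env_config

cfg = make_env_config("pusht", obs_type="pixels_agent_pos")
env = make_env(cfg, n_envs=4)  # SyncVectorEnv with 4 copies of PushT-v0

obs, info = env.reset(seed=0)
print(obs["pixels"].shape)     # (4, 384, 384, 3): batched, channel-last images
print(obs["agent_pos"].shape)  # (4, 2)
env.close()
```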
diff --git a/lerobot/common/envs/utils.py b/lerobot/common/envs/utils.py
deleted file mode 100644
index 83334f87..00000000
--- a/lerobot/common/envs/utils.py
+++ /dev/null
@@ -1,127 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import warnings
-from typing import Any
-
-import einops
-import gymnasium as gym
-import numpy as np
-import torch
-from torch import Tensor
-
-from lerobot.common.envs.configs import EnvConfig
-from lerobot.common.utils.utils import get_channel_first_image_shape
-from lerobot.configs.types import FeatureType, PolicyFeature
-
-
-def preprocess_observation(observations: dict[str, np.ndarray]) -> dict[str, Tensor]:
- # TODO(aliberts, rcadene): refactor this to use features from the environment (no hardcoding)
- """Convert environment observation to LeRobot format observation.
- Args:
- observation: Dictionary of observation batches from a Gym vector environment.
- Returns:
- Dictionary of observation batches with keys renamed to LeRobot format and values as tensors.
- """
- # map to expected inputs for the policy
- return_observations = {}
- if "pixels" in observations:
- if isinstance(observations["pixels"], dict):
- imgs = {f"observation.images.{key}": img for key, img in observations["pixels"].items()}
- else:
- imgs = {"observation.image": observations["pixels"]}
-
- for imgkey, img in imgs.items():
- # TODO(aliberts, rcadene): use transforms.ToTensor()?
- img = torch.from_numpy(img)
-
- # sanity check that images are channel last
- _, h, w, c = img.shape
- assert c < h and c < w, f"expect channel last images, but instead got {img.shape=}"
-
- # sanity check that images are uint8
- assert img.dtype == torch.uint8, f"expect torch.uint8, but instead {img.dtype=}"
-
- # convert to channel first of type float32 in range [0,1]
- img = einops.rearrange(img, "b h w c -> b c h w").contiguous()
- img = img.type(torch.float32)
- img /= 255
-
- return_observations[imgkey] = img
-
- if "environment_state" in observations:
- return_observations["observation.environment_state"] = torch.from_numpy(
- observations["environment_state"]
- ).float()
-
- # TODO(rcadene): enable pixels only baseline with `obs_type="pixels"` in environment by removing
- # requirement for "agent_pos"
- return_observations["observation.state"] = torch.from_numpy(observations["agent_pos"]).float()
- return return_observations
-
-
-def env_to_policy_features(env_cfg: EnvConfig) -> dict[str, PolicyFeature]:
- # TODO(aliberts, rcadene): remove this hardcoding of keys and just use the nested keys as is
- # (need to also refactor preprocess_observation and externalize normalization from policies)
- policy_features = {}
- for key, ft in env_cfg.features.items():
- if ft.type is FeatureType.VISUAL:
- if len(ft.shape) != 3:
- raise ValueError(f"Number of dimensions of {key} != 3 (shape={ft.shape})")
-
- shape = get_channel_first_image_shape(ft.shape)
- feature = PolicyFeature(type=ft.type, shape=shape)
- else:
- feature = ft
-
- policy_key = env_cfg.features_map[key]
- policy_features[policy_key] = feature
-
- return policy_features
-
-
-def are_all_envs_same_type(env: gym.vector.VectorEnv) -> bool:
- first_type = type(env.envs[0]) # Get type of first env
- return all(type(e) is first_type for e in env.envs) # Fast type check
-
-
-def check_env_attributes_and_types(env: gym.vector.VectorEnv) -> None:
- with warnings.catch_warnings():
- warnings.simplefilter("once", UserWarning) # Apply filter only in this function
-
- if not (hasattr(env.envs[0], "task_description") and hasattr(env.envs[0], "task")):
- warnings.warn(
- "The environment is missing the 'task_description' or 'task' attribute. Some policies require these features.",
- UserWarning,
- stacklevel=2,
- )
- if not are_all_envs_same_type(env):
- warnings.warn(
- "The environments have different types. Make sure you infer the right task from each environment. Empty task will be passed instead.",
- UserWarning,
- stacklevel=2,
- )
-
-
-def add_envs_task(env: gym.vector.VectorEnv, observation: dict[str, Any]) -> dict[str, Any]:
- """Adds task feature to the observation dict with respect to the first environment attribute."""
- if hasattr(env.envs[0], "task_description"):
- observation["task"] = env.call("task_description")
- elif hasattr(env.envs[0], "task"):
- observation["task"] = env.call("task")
- else: # For envs without language instructions, e.g. aloha transfer cube.
- num_envs = observation[list(observation.keys())[0]].shape[0]
- observation["task"] = ["" for _ in range(num_envs)]
- return observation
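`preprocess_observation` can be exercised on synthetic data, without a simulator. A minimal sketch matching the checks in the function above (channel-last uint8 images in, channel-first float32 tensors in [0, 1] out):

```python
import numpy as np

from lerobot.common.envs.utils import preprocess_observation

# Fake a batch of 2 vectorized environments.
observations = {
    "pixels": np.random.randint(0, 256, size=(2, 96, 96, 3), dtype=np.uint8),
    "agent_pos": np.random.rand(2, 2).astype(np.float32),
}

batch = preprocess_observation(observations)
print(batch["observation.image"].shape, batch["observation.image"].dtype)  # torch.Size([2, 3, 96, 96]) torch.float32
print(batch["observation.state"].shape)                                    # torch.Size([2, 2])
```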
diff --git a/lerobot/common/optim/optimizers.py b/lerobot/common/optim/optimizers.py
deleted file mode 100644
index 0cf4124c..00000000
--- a/lerobot/common/optim/optimizers.py
+++ /dev/null
@@ -1,118 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import abc
-from dataclasses import asdict, dataclass
-from pathlib import Path
-
-import draccus
-import torch
-from safetensors.torch import load_file, save_file
-
-from lerobot.common.constants import (
- OPTIMIZER_PARAM_GROUPS,
- OPTIMIZER_STATE,
-)
-from lerobot.common.datasets.utils import flatten_dict, unflatten_dict, write_json
-from lerobot.common.utils.io_utils import deserialize_json_into_object
-
-
-@dataclass
-class OptimizerConfig(draccus.ChoiceRegistry, abc.ABC):
- lr: float
- weight_decay: float
- grad_clip_norm: float
-
- @property
- def type(self) -> str:
- return self.get_choice_name(self.__class__)
-
- @classmethod
- def default_choice_name(cls) -> str | None:
- return "adam"
-
- @abc.abstractmethod
- def build(self) -> torch.optim.Optimizer:
- raise NotImplementedError
-
-
-@OptimizerConfig.register_subclass("adam")
-@dataclass
-class AdamConfig(OptimizerConfig):
- lr: float = 1e-3
- betas: tuple[float, float] = (0.9, 0.999)
- eps: float = 1e-8
- weight_decay: float = 0.0
- grad_clip_norm: float = 10.0
-
- def build(self, params: dict) -> torch.optim.Optimizer:
- kwargs = asdict(self)
- kwargs.pop("grad_clip_norm")
- return torch.optim.Adam(params, **kwargs)
-
-
-@OptimizerConfig.register_subclass("adamw")
-@dataclass
-class AdamWConfig(OptimizerConfig):
- lr: float = 1e-3
- betas: tuple[float, float] = (0.9, 0.999)
- eps: float = 1e-8
- weight_decay: float = 1e-2
- grad_clip_norm: float = 10.0
-
- def build(self, params: dict) -> torch.optim.Optimizer:
- kwargs = asdict(self)
- kwargs.pop("grad_clip_norm")
- return torch.optim.AdamW(params, **kwargs)
-
-
-@OptimizerConfig.register_subclass("sgd")
-@dataclass
-class SGDConfig(OptimizerConfig):
- lr: float = 1e-3
- momentum: float = 0.0
- dampening: float = 0.0
- nesterov: bool = False
- weight_decay: float = 0.0
- grad_clip_norm: float = 10.0
-
- def build(self, params: dict) -> torch.optim.Optimizer:
- kwargs = asdict(self)
- kwargs.pop("grad_clip_norm")
- return torch.optim.SGD(params, **kwargs)
-
-
-def save_optimizer_state(optimizer: torch.optim.Optimizer, save_dir: Path) -> None:
- state = optimizer.state_dict()
- param_groups = state.pop("param_groups")
- flat_state = flatten_dict(state)
- save_file(flat_state, save_dir / OPTIMIZER_STATE)
- write_json(param_groups, save_dir / OPTIMIZER_PARAM_GROUPS)
-
-
-def load_optimizer_state(optimizer: torch.optim.Optimizer, save_dir: Path) -> torch.optim.Optimizer:
- current_state_dict = optimizer.state_dict()
- flat_state = load_file(save_dir / OPTIMIZER_STATE)
- state = unflatten_dict(flat_state)
- loaded_state_dict = {"state": {int(k): v for k, v in state["state"].items()}}
-
- if "param_groups" in current_state_dict:
- param_groups = deserialize_json_into_object(
- save_dir / OPTIMIZER_PARAM_GROUPS, current_state_dict["param_groups"]
- )
- loaded_state_dict["param_groups"] = param_groups
-
- optimizer.load_state_dict(loaded_state_dict)
- return optimizer
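The helpers above implement a simple round trip: build an optimizer from a config preset, checkpoint its state, and restore it. A sketch, assuming the removed module path:

```python
import tempfile
from pathlib import Path

import torch

from lerobot.common.optim.optimizers import AdamWConfig, load_optimizer_state, save_optimizer_state

model = torch.nn.Linear(4, 2)
optimizer = AdamWConfig(lr=1e-4).build(model.parameters())

# Take one step so the optimizer has state worth serializing.
model(torch.randn(8, 4)).sum().backward()
optimizer.step()

with tempfile.TemporaryDirectory() as tmp:
    save_dir = Path(tmp)
    save_optimizer_state(optimizer, save_dir)              # flattened state (safetensors) + param groups (json)
    optimizer = load_optimizer_state(optimizer, save_dir)  # rebuilds the state dict and loads it back
```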
diff --git a/lerobot/common/policies/factory.py b/lerobot/common/policies/factory.py
deleted file mode 100644
index 3aade066..00000000
--- a/lerobot/common/policies/factory.py
+++ /dev/null
@@ -1,164 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-
-from torch import nn
-
-from lerobot.common.datasets.lerobot_dataset import LeRobotDatasetMetadata
-from lerobot.common.datasets.utils import dataset_to_policy_features
-from lerobot.common.envs.configs import EnvConfig
-from lerobot.common.envs.utils import env_to_policy_features
-from lerobot.common.policies.act.configuration_act import ACTConfig
-from lerobot.common.policies.diffusion.configuration_diffusion import DiffusionConfig
-from lerobot.common.policies.pi0.configuration_pi0 import PI0Config
-from lerobot.common.policies.pi0fast.configuration_pi0fast import PI0FASTConfig
-from lerobot.common.policies.pretrained import PreTrainedPolicy
-from lerobot.common.policies.smolvla.configuration_smolvla import SmolVLAConfig
-from lerobot.common.policies.tdmpc.configuration_tdmpc import TDMPCConfig
-from lerobot.common.policies.vqbet.configuration_vqbet import VQBeTConfig
-from lerobot.configs.policies import PreTrainedConfig
-from lerobot.configs.types import FeatureType
-
-
-def get_policy_class(name: str) -> PreTrainedPolicy:
- """Get the policy's class and config class given a name (matching the policy class' `name` attribute)."""
- if name == "tdmpc":
- from lerobot.common.policies.tdmpc.modeling_tdmpc import TDMPCPolicy
-
- return TDMPCPolicy
- elif name == "diffusion":
- from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy
-
- return DiffusionPolicy
- elif name == "act":
- from lerobot.common.policies.act.modeling_act import ACTPolicy
-
- return ACTPolicy
- elif name == "vqbet":
- from lerobot.common.policies.vqbet.modeling_vqbet import VQBeTPolicy
-
- return VQBeTPolicy
- elif name == "pi0":
- from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy
-
- return PI0Policy
- elif name == "pi0fast":
- from lerobot.common.policies.pi0fast.modeling_pi0fast import PI0FASTPolicy
-
- return PI0FASTPolicy
- elif name == "smolvla":
- from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy
-
- return SmolVLAPolicy
- else:
- raise NotImplementedError(f"Policy with name {name} is not implemented.")
-
-
-def make_policy_config(policy_type: str, **kwargs) -> PreTrainedConfig:
- if policy_type == "tdmpc":
- return TDMPCConfig(**kwargs)
- elif policy_type == "diffusion":
- return DiffusionConfig(**kwargs)
- elif policy_type == "act":
- return ACTConfig(**kwargs)
- elif policy_type == "vqbet":
- return VQBeTConfig(**kwargs)
- elif policy_type == "pi0":
- return PI0Config(**kwargs)
- elif policy_type == "pi0fast":
- return PI0FASTConfig(**kwargs)
- elif policy_type == "smolvla":
- return SmolVLAConfig(**kwargs)
- else:
- raise ValueError(f"Policy type '{policy_type}' is not available.")
-
-
-def make_policy(
- cfg: PreTrainedConfig,
- ds_meta: LeRobotDatasetMetadata | None = None,
- env_cfg: EnvConfig | None = None,
-) -> PreTrainedPolicy:
- """Make an instance of a policy class.
-
- This function exists because (for now) we need to parse features from either a dataset or an environment
- in order to properly dimension and instantiate a policy for that dataset or environment.
-
- Args:
- cfg (PreTrainedConfig): The config of the policy to make. If `pretrained_path` is set, the policy will
- be loaded with the weights from that path.
- ds_meta (LeRobotDatasetMetadata | None, optional): Dataset metadata to take input/output shapes and
- statistics to use for (un)normalization of inputs/outputs in the policy. Defaults to None.
- env_cfg (EnvConfig | None, optional): The config of a gym environment to parse features from. Must be
- provided if ds_meta is not. Defaults to None.
-
- Raises:
- ValueError: if neither or both of ds_meta and env_cfg are provided.
- NotImplementedError: if the policy.type is 'vqbet' and the policy device 'mps' (due to an incompatibility)
-
- Returns:
- PreTrainedPolicy: the instantiated policy, moved to the device given in the config.
- """
- if bool(ds_meta) == bool(env_cfg):
- raise ValueError("Exactly one of dataset metadata (ds_meta) or environment config (env_cfg) must be provided.")
-
- # NOTE: Currently, if you try to run vqbet with mps backend, you'll get this error.
- # TODO(aliberts, rcadene): Implement a check_backend_compatibility in policies?
- # NotImplementedError: The operator 'aten::unique_dim' is not currently implemented for the MPS device. If
- # you want this op to be added in priority during the prototype phase of this feature, please comment on
- # https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment
- # variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be
- # slower than running natively on MPS.
- if cfg.type == "vqbet" and cfg.device == "mps":
- raise NotImplementedError(
- "Current implementation of VQBeT does not support `mps` backend. "
- "Please use `cpu` or `cuda` backend."
- )
-
- policy_cls = get_policy_class(cfg.type)
-
- kwargs = {}
- if ds_meta is not None:
- features = dataset_to_policy_features(ds_meta.features)
- kwargs["dataset_stats"] = ds_meta.stats
- else:
- if not cfg.pretrained_path:
- logging.warning(
- "You are instantiating a policy from scratch and its features are parsed from an environment "
- "rather than a dataset. Normalization modules inside the policy will have infinite values "
- "by default without stats from a dataset."
- )
- features = env_to_policy_features(env_cfg)
-
- cfg.output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}
- cfg.input_features = {key: ft for key, ft in features.items() if key not in cfg.output_features}
- kwargs["config"] = cfg
-
- if cfg.pretrained_path:
- # Load a pretrained policy and override the config if needed (for example, if there are inference-time
- # hyperparameters that we want to vary).
- kwargs["pretrained_name_or_path"] = cfg.pretrained_path
- policy = policy_cls.from_pretrained(**kwargs)
- else:
- # Make a fresh policy.
- policy = policy_cls(**kwargs)
-
- policy.to(cfg.device)
- assert isinstance(policy, nn.Module)
-
- # policy = torch.compile(policy, mode="reduce-overhead")
-
- return policy
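In practice, `make_policy_config` and `make_policy` are combined with dataset metadata so the policy's input/output features and normalization statistics come from the data. A sketch, assuming the removed factory module and the dataset classes are importable and that `device` is a valid field of the policy config:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDatasetMetadata
from lerobot.common.policies.factory import make_policy, make_policy_config

# Dimension an ACT policy from the metadata of a hub dataset.
ds_meta = LeRobotDatasetMetadata("lerobot/aloha_sim_insertion_human")
cfg = make_policy_config("act", device="cpu")
policy = make_policy(cfg, ds_meta=ds_meta)
print(sum(p.numel() for p in policy.parameters()), "parameters")
```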
diff --git a/lerobot/common/policies/normalize.py b/lerobot/common/policies/normalize.py
deleted file mode 100644
index b3255ec1..00000000
--- a/lerobot/common/policies/normalize.py
+++ /dev/null
@@ -1,254 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import numpy as np
-import torch
-from torch import Tensor, nn
-
-from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature
-
-
-def create_stats_buffers(
- features: dict[str, PolicyFeature],
- norm_map: dict[str, NormalizationMode],
- stats: dict[str, dict[str, Tensor]] | None = None,
-) -> dict[str, dict[str, nn.ParameterDict]]:
- """
- Create buffers per modality (e.g. "observation.image", "action") containing their mean, std, min, max
- statistics.
-
- Args: (see Normalize and Unnormalize)
-
- Returns:
- dict: A dictionary where keys are modalities and values are `nn.ParameterDict` containing
- `nn.Parameters` set to `requires_grad=False`, suitable to not be updated during backpropagation.
- """
- stats_buffers = {}
-
- for key, ft in features.items():
- norm_mode = norm_map.get(ft.type, NormalizationMode.IDENTITY)
- if norm_mode is NormalizationMode.IDENTITY:
- continue
-
- assert isinstance(norm_mode, NormalizationMode)
-
- shape = tuple(ft.shape)
-
- if ft.type is FeatureType.VISUAL:
- # sanity checks
- assert len(shape) == 3, f"number of dimensions of {key} != 3 ({shape=})"
- c, h, w = shape
- assert c < h and c < w, f"{key} is not channel first ({shape=})"
- # override image shape to be invariant to height and width
- shape = (c, 1, 1)
-
- # Note: we initialize mean, std, min, max to infinity. They should be overwritten
- # downstream by `stats` or `policy.load_state_dict`, as expected. During forward,
- # we assert they are not infinity anymore.
-
- buffer = {}
- if norm_mode is NormalizationMode.MEAN_STD:
- mean = torch.ones(shape, dtype=torch.float32) * torch.inf
- std = torch.ones(shape, dtype=torch.float32) * torch.inf
- buffer = nn.ParameterDict(
- {
- "mean": nn.Parameter(mean, requires_grad=False),
- "std": nn.Parameter(std, requires_grad=False),
- }
- )
- elif norm_mode is NormalizationMode.MIN_MAX:
- min = torch.ones(shape, dtype=torch.float32) * torch.inf
- max = torch.ones(shape, dtype=torch.float32) * torch.inf
- buffer = nn.ParameterDict(
- {
- "min": nn.Parameter(min, requires_grad=False),
- "max": nn.Parameter(max, requires_grad=False),
- }
- )
-
- # TODO(aliberts, rcadene): harmonize this to only use one framework (np or torch)
- if stats:
- if isinstance(stats[key]["mean"], np.ndarray):
- if norm_mode is NormalizationMode.MEAN_STD:
- buffer["mean"].data = torch.from_numpy(stats[key]["mean"]).to(dtype=torch.float32)
- buffer["std"].data = torch.from_numpy(stats[key]["std"]).to(dtype=torch.float32)
- elif norm_mode is NormalizationMode.MIN_MAX:
- buffer["min"].data = torch.from_numpy(stats[key]["min"]).to(dtype=torch.float32)
- buffer["max"].data = torch.from_numpy(stats[key]["max"]).to(dtype=torch.float32)
- elif isinstance(stats[key]["mean"], torch.Tensor):
- # Note: The clone is needed to make sure that the logic in save_pretrained doesn't see duplicated
- # tensors anywhere (for example, when we use the same stats for normalization and
- # unnormalization). See the logic here
- # https://github.com/huggingface/safetensors/blob/079781fd0dc455ba0fe851e2b4507c33d0c0d407/bindings/python/py_src/safetensors/torch.py#L97.
- if norm_mode is NormalizationMode.MEAN_STD:
- buffer["mean"].data = stats[key]["mean"].clone().to(dtype=torch.float32)
- buffer["std"].data = stats[key]["std"].clone().to(dtype=torch.float32)
- elif norm_mode is NormalizationMode.MIN_MAX:
- buffer["min"].data = stats[key]["min"].clone().to(dtype=torch.float32)
- buffer["max"].data = stats[key]["max"].clone().to(dtype=torch.float32)
- else:
- type_ = type(stats[key]["mean"])
- raise ValueError(f"np.ndarray or torch.Tensor expected, but type is '{type_}' instead.")
-
- stats_buffers[key] = buffer
- return stats_buffers
-
-
-def _no_stats_error_str(name: str) -> str:
- return (
- f"`{name}` is infinity. You should either initialize with `stats` as an argument, or use a "
- "pretrained model."
- )
-
-
-class Normalize(nn.Module):
- """Normalizes data (e.g. "observation.image") for more stable and faster convergence during training."""
-
- def __init__(
- self,
- features: dict[str, PolicyFeature],
- norm_map: dict[str, NormalizationMode],
- stats: dict[str, dict[str, Tensor]] | None = None,
- ):
- """
- Args:
- features (dict): A dictionary where keys are input modalities (e.g. "observation.image") and values
- are their `PolicyFeature` (type and shape, e.g. `[3,96,96]`). These shapes are used to create the
- tensor buffers containing mean, std, min, max statistics. For visual features, the shape is
- adjusted to be invariant to height and width, assuming a channel-first (c, h, w) format.
- norm_map (dict): A dictionary where keys are feature types (e.g. `FeatureType.VISUAL`) and values
- are their normalization modes among:
- - "mean_std": subtract the mean and divide by standard deviation.
- - "min_max": map to [-1, 1] range.
- stats (dict, optional): A dictionary where keys are output modalities (e.g. "observation.image")
- and values are dictionaries of statistic types and their values (e.g.
- `{"mean": torch.randn(3,1,1), "std": torch.randn(3,1,1)}`). If provided, as expected for
- training the model for the first time, these statistics will overwrite the default buffers. If
- not provided, as expected for finetuning or evaluation, the default buffers should be
- overwritten by a call to `policy.load_state_dict(state_dict)`. That way, initializing the
- dataset is not needed to get the stats, since they are already in the policy state_dict.
- """
- super().__init__()
- self.features = features
- self.norm_map = norm_map
- self.stats = stats
- stats_buffers = create_stats_buffers(features, norm_map, stats)
- for key, buffer in stats_buffers.items():
- setattr(self, "buffer_" + key.replace(".", "_"), buffer)
-
- # TODO(rcadene): should we remove torch.no_grad?
- @torch.no_grad
- def forward(self, batch: dict[str, Tensor]) -> dict[str, Tensor]:
- batch = dict(batch) # shallow copy avoids mutating the input batch
- for key, ft in self.features.items():
- if key not in batch:
- # FIXME(aliberts, rcadene): This might lead to silent fail!
- continue
-
- norm_mode = self.norm_map.get(ft.type, NormalizationMode.IDENTITY)
- if norm_mode is NormalizationMode.IDENTITY:
- continue
-
- buffer = getattr(self, "buffer_" + key.replace(".", "_"))
-
- if norm_mode is NormalizationMode.MEAN_STD:
- mean = buffer["mean"]
- std = buffer["std"]
- assert not torch.isinf(mean).any(), _no_stats_error_str("mean")
- assert not torch.isinf(std).any(), _no_stats_error_str("std")
- batch[key] = (batch[key] - mean) / (std + 1e-8)
- elif norm_mode is NormalizationMode.MIN_MAX:
- min = buffer["min"]
- max = buffer["max"]
- assert not torch.isinf(min).any(), _no_stats_error_str("min")
- assert not torch.isinf(max).any(), _no_stats_error_str("max")
- # normalize to [0,1]
- batch[key] = (batch[key] - min) / (max - min + 1e-8)
- # normalize to [-1, 1]
- batch[key] = batch[key] * 2 - 1
- else:
- raise ValueError(norm_mode)
- return batch
-
-
-class Unnormalize(nn.Module):
- """
- Similar to `Normalize` but unnormalizes output data (e.g. `{"action": torch.randn(b,c)}`) in their
- original range used by the environment.
- """
-
- def __init__(
- self,
- features: dict[str, PolicyFeature],
- norm_map: dict[str, NormalizationMode],
- stats: dict[str, dict[str, Tensor]] | None = None,
- ):
- """
- Args:
- features (dict): A dictionary where keys are input modalities (e.g. "observation.image") and values
- are their `PolicyFeature` (type and shape, e.g. `[3,96,96]`). These shapes are used to create the
- tensor buffers containing mean, std, min, max statistics. For visual features, the shape is
- adjusted to be invariant to height and width, assuming a channel-first (c, h, w) format.
- norm_map (dict): A dictionary where keys are feature types (e.g. `FeatureType.VISUAL`) and values
- are their normalization modes among:
- - "mean_std": subtract the mean and divide by standard deviation.
- - "min_max": map to [-1, 1] range.
- stats (dict, optional): A dictionary where keys are output modalities (e.g. "observation.image")
- and values are dictionaries of statistic types and their values (e.g.
- `{"mean": torch.randn(3,1,1), "std": torch.randn(3,1,1)}`). If provided, as expected for
- training the model for the first time, these statistics will overwrite the default buffers. If
- not provided, as expected for finetuning or evaluation, the default buffers should be
- overwritten by a call to `policy.load_state_dict(state_dict)`. That way, initializing the
- dataset is not needed to get the stats, since they are already in the policy state_dict.
- """
- super().__init__()
- self.features = features
- self.norm_map = norm_map
- self.stats = stats
- # `self.buffer_observation_state["mean"]` contains `torch.tensor(state_dim)`
- stats_buffers = create_stats_buffers(features, norm_map, stats)
- for key, buffer in stats_buffers.items():
- setattr(self, "buffer_" + key.replace(".", "_"), buffer)
-
- # TODO(rcadene): should we remove torch.no_grad?
- @torch.no_grad
- def forward(self, batch: dict[str, Tensor]) -> dict[str, Tensor]:
- batch = dict(batch) # shallow copy avoids mutating the input batch
- for key, ft in self.features.items():
- if key not in batch:
- continue
-
- norm_mode = self.norm_map.get(ft.type, NormalizationMode.IDENTITY)
- if norm_mode is NormalizationMode.IDENTITY:
- continue
-
- buffer = getattr(self, "buffer_" + key.replace(".", "_"))
-
- if norm_mode is NormalizationMode.MEAN_STD:
- mean = buffer["mean"]
- std = buffer["std"]
- assert not torch.isinf(mean).any(), _no_stats_error_str("mean")
- assert not torch.isinf(std).any(), _no_stats_error_str("std")
- batch[key] = batch[key] * std + mean
- elif norm_mode is NormalizationMode.MIN_MAX:
- min = buffer["min"]
- max = buffer["max"]
- assert not torch.isinf(min).any(), _no_stats_error_str("min")
- assert not torch.isinf(max).any(), _no_stats_error_str("max")
- batch[key] = (batch[key] + 1) / 2
- batch[key] = batch[key] * (max - min) + min
- else:
- raise ValueError(norm_mode)
- return batch
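A self-contained sketch of `Normalize`/`Unnormalize` on a single state feature, assuming the removed module and the config types it imports are available. Normalizing then unnormalizing should recover the original batch up to the `1e-8` stabilizer:

```python
import torch

from lerobot.common.policies.normalize import Normalize, Unnormalize
from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature

features = {"observation.state": PolicyFeature(type=FeatureType.STATE, shape=(2,))}
norm_map = {FeatureType.STATE: NormalizationMode.MEAN_STD}
stats = {"observation.state": {"mean": torch.tensor([0.5, -0.2]), "std": torch.tensor([2.0, 0.1])}}

normalize = Normalize(features, norm_map, stats)
unnormalize = Unnormalize(features, norm_map, stats)

batch = {"observation.state": torch.tensor([[1.0, 0.0]])}
restored = unnormalize(normalize(batch))
print(torch.allclose(restored["observation.state"], batch["observation.state"], atol=1e-4))  # True
```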
diff --git a/lerobot/common/policies/pi0/configuration_pi0.py b/lerobot/common/policies/pi0/configuration_pi0.py
deleted file mode 100644
index 8c7cc130..00000000
--- a/lerobot/common/policies/pi0/configuration_pi0.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-
-from lerobot.common.optim.optimizers import AdamWConfig
-from lerobot.common.optim.schedulers import (
- CosineDecayWithWarmupSchedulerConfig,
-)
-from lerobot.configs.policies import PreTrainedConfig
-from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature
-
-
-@PreTrainedConfig.register_subclass("pi0")
-@dataclass
-class PI0Config(PreTrainedConfig):
- # Input / output structure.
- n_obs_steps: int = 1
- chunk_size: int = 50
- n_action_steps: int = 50
-
- normalization_mapping: dict[str, NormalizationMode] = field(
- default_factory=lambda: {
- "VISUAL": NormalizationMode.IDENTITY,
- "STATE": NormalizationMode.MEAN_STD,
- "ACTION": NormalizationMode.MEAN_STD,
- }
- )
-
- # Shorter state and action vectors will be padded
- max_state_dim: int = 32
- max_action_dim: int = 32
-
- # Image preprocessing
- resize_imgs_with_padding: tuple[int, int] = (224, 224)
-
- # Add empty images. Used by pi0_aloha_sim which adds the empty
- # left and right wrist cameras in addition to the top camera.
- empty_cameras: int = 0
-
- # Converts the joint and gripper values from the standard Aloha space to
- # the space used by the pi internal runtime which was used to train the base model.
- adapt_to_pi_aloha: bool = False
-
- # Converts joint dimensions to deltas with respect to the current state before passing to the model.
- # Gripper dimensions will remain in absolute values.
- use_delta_joint_actions_aloha: bool = False
-
- # Tokenizer
- tokenizer_max_length: int = 48
-
- # Projector
- proj_width: int = 1024
-
- # Decoding
- num_steps: int = 10
-
- # Attention utils
- use_cache: bool = True
- attention_implementation: str = "eager" # or fa2, flex
-
- # Finetuning settings
- freeze_vision_encoder: bool = True
- train_expert_only: bool = False
- train_state_proj: bool = True
-
- # Training presets
- optimizer_lr: float = 2.5e-5
- optimizer_betas: tuple[float, float] = (0.9, 0.95)
- optimizer_eps: float = 1e-8
- optimizer_weight_decay: float = 1e-10
-
- scheduler_warmup_steps: int = 1_000
- scheduler_decay_steps: int = 30_000
- scheduler_decay_lr: float = 2.5e-6
-
- # TODO: Add EMA
-
- def __post_init__(self):
- super().__post_init__()
-
- # TODO(Steven): Validate device and amp? in all policy configs?
- """Input validation (not exhaustive)."""
- if self.n_action_steps > self.chunk_size:
- raise ValueError(
- f"The chunk size is the upper bound for the number of action steps per model invocation. Got "
- f"{self.n_action_steps} for `n_action_steps` and {self.chunk_size} for `chunk_size`."
- )
- if self.n_obs_steps != 1:
- raise ValueError(
- f"Multiple observation steps not handled yet. Got `n_obs_steps={self.n_obs_steps}`"
- )
-
- if self.use_delta_joint_actions_aloha:
- raise NotImplementedError(
- "`use_delta_joint_actions_aloha` is used by pi0 for aloha real models. It is not ported yet in LeRobot."
- )
-
- def validate_features(self) -> None:
- # TODO: implement value error
- # if not self.image_features and not self.env_state_feature:
- # raise ValueError("You must provide at least one image or the environment state among the inputs.")
-
- for i in range(self.empty_cameras):
- key = f"observation.images.empty_camera_{i}"
- empty_camera = PolicyFeature(
- type=FeatureType.VISUAL,
- shape=(3, 480, 640),
- )
- self.input_features[key] = empty_camera
-
- def get_optimizer_preset(self) -> AdamWConfig:
- return AdamWConfig(
- lr=self.optimizer_lr,
- betas=self.optimizer_betas,
- eps=self.optimizer_eps,
- weight_decay=self.optimizer_weight_decay,
- )
-
- def get_scheduler_preset(self):
- return CosineDecayWithWarmupSchedulerConfig(
- peak_lr=self.optimizer_lr,
- decay_lr=self.scheduler_decay_lr,
- num_warmup_steps=self.scheduler_warmup_steps,
- num_decay_steps=self.scheduler_decay_steps,
- )
-
- @property
- def observation_delta_indices(self) -> None:
- return None
-
- @property
- def action_delta_indices(self) -> list:
- return list(range(self.chunk_size))
-
- @property
- def reward_delta_indices(self) -> None:
- return None
diff --git a/lerobot/common/policies/pi0/conversion_scripts/benchmark.py b/lerobot/common/policies/pi0/conversion_scripts/benchmark.py
deleted file mode 100644
index cb3c0e9b..00000000
--- a/lerobot/common/policies/pi0/conversion_scripts/benchmark.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import torch
-
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.common.policies.factory import make_policy
-from lerobot.configs.policies import PreTrainedConfig
-
-torch.backends.cudnn.benchmark = True
-
-
-def main():
- device = "cuda"
- dataset_repo_id = "danaaubakirova/koch_test"
- # model_name = "pi0_base"
- # ckpt_torch_dir = Path.home() / f".cache/openpi/openpi-assets/checkpoints/{model_name}_pytorch"
- ckpt_torch_dir = "lerobot/pi0"
-
- dataset = LeRobotDataset(dataset_repo_id, episodes=[0])
-
- dataloader = torch.utils.data.DataLoader(
- dataset,
- num_workers=0,
- batch_size=1,
- )
-
- batch = next(iter(dataloader))
-
- # To device
- for k in batch:
- if isinstance(batch[k], torch.Tensor):
- batch[k] = batch[k].to(device=device, dtype=torch.float32)
-
- cfg = PreTrainedConfig.from_pretrained(ckpt_torch_dir)
- cfg.pretrained_path = ckpt_torch_dir
- policy = make_policy(cfg, ds_meta=dataset.meta)
-
- # policy = torch.compile(policy, mode="reduce-overhead")
-
- warmup_iters = 10
- benchmark_iters = 30
-
- # Warmup
- for _ in range(warmup_iters):
- torch.cuda.synchronize()
- policy.select_action(batch)
- policy.reset()
- torch.cuda.synchronize()
-
- # Benchmark
- start_event = torch.cuda.Event(enable_timing=True)
- end_event = torch.cuda.Event(enable_timing=True)
-
- start_event.record()
- for _ in range(benchmark_iters):
- policy.select_action(batch)
- policy.reset()
- end_event.record()
-
- # Synchronize and measure time
- torch.cuda.synchronize()
- elapsed_time_ms = start_event.elapsed_time(end_event)
-
- avg_time_per_iter = elapsed_time_ms / benchmark_iters
- print(f"Average execution time per iteration: {avg_time_per_iter:.3f} ms")
-
-
-if __name__ == "__main__":
- with torch.inference_mode():
- main()
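
The script above follows the usual CUDA-event timing recipe: a few warm-up calls, then a `torch.cuda.Event` pair around the timed loop, and a synchronize before reading the elapsed time. The same pattern, factored into a small generic helper (a sketch; `fn` stands for any CUDA-side callable such as `policy.select_action`):

```python
import torch

def benchmark_ms(fn, warmup_iters: int = 10, benchmark_iters: int = 30) -> float:
    """Average milliseconds per call, timed with CUDA events (requires a CUDA device)."""
    for _ in range(warmup_iters):  # warm up kernels, allocator, autotuner
        fn()
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(benchmark_iters):
        fn()
    end.record()
    torch.cuda.synchronize()  # ensure all queued work has finished before reading the timer
    return start.elapsed_time(end) / benchmark_iters

# e.g. benchmark_ms(lambda: policy.select_action(batch)) on a CUDA machine
```
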
diff --git a/lerobot/common/policies/pi0/conversion_scripts/compare_with_jax.py b/lerobot/common/policies/pi0/conversion_scripts/compare_with_jax.py
deleted file mode 100644
index 6bd7c91f..00000000
--- a/lerobot/common/policies/pi0/conversion_scripts/compare_with_jax.py
+++ /dev/null
@@ -1,131 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import json
-import pickle
-from pathlib import Path
-
-import torch
-
-from lerobot.common.datasets.lerobot_dataset import LeRobotDatasetMetadata
-from lerobot.common.policies.factory import make_policy
-from lerobot.configs.policies import PreTrainedConfig
-
-
-def display(tensor: torch.Tensor):
- if tensor.dtype == torch.bool:
- tensor = tensor.float()
- print(f"Shape: {tensor.shape}")
- print(f"Mean: {tensor.mean().item()}")
- print(f"Std: {tensor.std().item()}")
- print(f"Min: {tensor.min().item()}")
- print(f"Max: {tensor.max().item()}")
-
-
-def main():
- num_motors = 14
- device = "cuda"
- # model_name = "pi0_aloha_towel"
- model_name = "pi0_aloha_sim"
-
- if model_name == "pi0_aloha_towel":
- dataset_repo_id = "lerobot/aloha_static_towel"
- else:
- dataset_repo_id = "lerobot/aloha_sim_transfer_cube_human"
-
- ckpt_torch_dir = Path.home() / f".cache/openpi/openpi-assets/checkpoints/{model_name}_pytorch"
- ckpt_jax_dir = Path.home() / f".cache/openpi/openpi-assets/checkpoints/{model_name}"
- save_dir = Path(f"../openpi/data/{model_name}/save")
-
- with open(save_dir / "example.pkl", "rb") as f:
- example = pickle.load(f)
- with open(save_dir / "outputs.pkl", "rb") as f:
- outputs = pickle.load(f)
- with open(save_dir / "noise.pkl", "rb") as f:
- noise = pickle.load(f)
-
- with open(ckpt_jax_dir / "assets/norm_stats.json") as f:
- norm_stats = json.load(f)
-
- # Override stats
- dataset_meta = LeRobotDatasetMetadata(dataset_repo_id)
- dataset_meta.stats["observation.state"]["mean"] = torch.tensor(
- norm_stats["norm_stats"]["state"]["mean"][:num_motors], dtype=torch.float32
- )
- dataset_meta.stats["observation.state"]["std"] = torch.tensor(
- norm_stats["norm_stats"]["state"]["std"][:num_motors], dtype=torch.float32
- )
-
- # Create LeRobot batch from Jax
- batch = {}
- for cam_key, uint_chw_array in example["images"].items():
- batch[f"observation.images.{cam_key}"] = torch.from_numpy(uint_chw_array) / 255.0
- batch["observation.state"] = torch.from_numpy(example["state"])
- batch["action"] = torch.from_numpy(outputs["actions"])
- batch["task"] = example["prompt"]
-
- if model_name == "pi0_aloha_towel":
- del batch["observation.images.cam_low"]
- elif model_name == "pi0_aloha_sim":
- batch["observation.images.top"] = batch["observation.images.cam_high"]
- del batch["observation.images.cam_high"]
-
- # Batchify
- for key in batch:
- if isinstance(batch[key], torch.Tensor):
- batch[key] = batch[key].unsqueeze(0)
- elif isinstance(batch[key], str):
- batch[key] = [batch[key]]
- else:
- raise ValueError(f"{key}, {batch[key]}")
-
- # To device
- for k in batch:
- if isinstance(batch[k], torch.Tensor):
- batch[k] = batch[k].to(device=device, dtype=torch.float32)
-
- noise = torch.from_numpy(noise).to(device=device, dtype=torch.float32)
-
- from lerobot.common import policies # noqa
-
- cfg = PreTrainedConfig.from_pretrained(ckpt_torch_dir)
- cfg.pretrained_path = ckpt_torch_dir
- policy = make_policy(cfg, dataset_meta)
-
- # loss_dict = policy.forward(batch, noise=noise, time=time_beta)
- # loss_dict["loss"].backward()
- # print("losses")
- # display(loss_dict["losses_after_forward"])
- # print("pi_losses")
- # display(pi_losses)
-
- actions = []
- for _ in range(50):
- action = policy.select_action(batch, noise=noise)
- actions.append(action)
-
- actions = torch.stack(actions, dim=1)
- pi_actions = batch["action"]
- print("actions")
- display(actions)
- print()
- print("pi_actions")
- display(pi_actions)
- print("atol=3e-2", torch.allclose(actions, pi_actions, atol=3e-2))
- print("atol=2e-2", torch.allclose(actions, pi_actions, atol=2e-2))
- print("atol=1e-2", torch.allclose(actions, pi_actions, atol=1e-2))
-
-
-if __name__ == "__main__":
- main()
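
When checking a port against reference outputs with a ladder of `torch.allclose` tolerances, it can also help to print the raw error statistics. A small hypothetical helper (`report_closeness` is not part of LeRobot):

```python
import torch

def report_closeness(a: torch.Tensor, b: torch.Tensor) -> None:
    """Print absolute-error statistics alongside allclose checks at a few tolerances."""
    diff = (a - b).abs()
    print(f"max |a-b| = {diff.max().item():.2e}, mean |a-b| = {diff.mean().item():.2e}")
    for atol in (3e-2, 2e-2, 1e-2):
        print(f"allclose(atol={atol:g}):", torch.allclose(a, b, atol=atol))

report_closeness(torch.zeros(3), torch.tensor([0.005, -0.015, 0.025]))
```
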
diff --git a/lerobot/common/policies/pi0/conversion_scripts/conversion_utils.py b/lerobot/common/policies/pi0/conversion_scripts/conversion_utils.py
deleted file mode 100644
index 8835da31..00000000
--- a/lerobot/common/policies/pi0/conversion_scripts/conversion_utils.py
+++ /dev/null
@@ -1,84 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from transformers import GemmaConfig, PaliGemmaConfig
-
-
-def get_paligemma_config(precision: str):
- config = {
- "image_token_index": None,
- "pad_token_id": 0,
- "bos_token_id": 2,
- "eos_token_id": 1,
- }
-
- # image_sizes = {"2b-test": 224, "3b-224px": 224, "3b-448px": 448, "3b-896px": 896}
-
- image_size = 224 # image_sizes[variant]
- patch_size = 14
- num_image_tokens = (image_size**2) // (patch_size**2)
-
- config["image_token_index"] = 257152
- text_config = {
- "vocab_size": 257152,
- "num_hidden_layers": 18,
- "num_key_value_heads": 1,
- "head_dim": 256,
- "torch_dtype": precision,
- "hidden_size": 2048,
- "hidden_activation": "gelu_pytorch_tanh",
- "num_attention_heads": 8,
- "intermediate_size": 16384,
- "is_encoder_decoder": False,
- }
- vision_config = {
- "torch_dtype": precision,
- "image_size": image_size,
- "patch_size": patch_size,
- "num_image_tokens": num_image_tokens,
- "hidden_size": 1152,
- "intermediate_size": 4304,
- "num_hidden_layers": 27,
- "num_attention_heads": 16,
- "projector_hidden_act": "gelu_fast",
- "vision_use_head": False,
- }
- final_config = PaliGemmaConfig(text_config=text_config, vision_config=vision_config, **config)
- return final_config
-
-
-def get_gemma_config(precision: str):
- config = {
- "image_token_index": None,
- "pad_token_id": 0,
- "bos_token_id": 2,
- "eos_token_id": 1,
- }
-
- config["image_token_index"] = 257152
- text_config = {
- "vocab_size": 257152,
- "num_hidden_layers": 18,
- "num_key_value_heads": 1,
- "head_dim": 256,
- "torch_dtype": precision,
- "hidden_size": 1024,
- "hidden_activation": "gelu_pytorch_tanh",
- "num_attention_heads": 8,
- "intermediate_size": 4096,
- "is_encoder_decoder": False,
- }
- final_config = GemmaConfig()
- final_config.update(text_config)
- return final_config
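
As a quick check of the hard-coded vision values above: a 224-pixel SigLIP input with 14-pixel patches gives 16×16 = 256 image tokens, which is exactly the `num_image_tokens` computation in `get_paligemma_config`.

```python
# Worked check of the patch arithmetic used in get_paligemma_config (illustrative).
image_size, patch_size = 224, 14
patches_per_side = image_size // patch_size      # 16
num_image_tokens = patches_per_side ** 2         # 256
assert num_image_tokens == (image_size**2) // (patch_size**2) == 256
```
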
diff --git a/lerobot/common/policies/pi0/conversion_scripts/convert_pi0_to_hf_lerobot.py b/lerobot/common/policies/pi0/conversion_scripts/convert_pi0_to_hf_lerobot.py
deleted file mode 100644
index 73ff506f..00000000
--- a/lerobot/common/policies/pi0/conversion_scripts/convert_pi0_to_hf_lerobot.py
+++ /dev/null
@@ -1,437 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Convert pi0 parameters from Jax to Pytorch
-
-Follow [README of openpi](https://github.com/Physical-Intelligence/openpi) to create a new environment
-and install the required libraries.
-
-```bash
-cd ~/code/openpi
-source .venv/bin/activate
-```
-
-Example downloading parameters:
-```bash
-python
->>> import openpi.shared.download as download
->>> path='s3://openpi-assets/checkpoints/pi0_base/params'
->>> download.maybe_download(path)
-```
-
-Converting pi0_base:
-```python
-python lerobot/common/policies/pi0/conversion_scripts/convert_pi0_to_hf_lerobot.py \
- --checkpoint_dir /home/remi_cadene/.cache/openpi/openpi-assets/checkpoints/pi0_base/params \
- --output_path /home/remi_cadene/.cache/openpi/openpi-assets/checkpoints/pi0_base_pytorch
-```
-
-```python
-python lerobot/common/policies/pi0/conversion_scripts/convert_pi0_to_hf_lerobot.py \
- --checkpoint_dir /home/remi_cadene/.cache/openpi/openpi-assets/checkpoints/pi0_aloha_sim/params \
- --output_path /home/remi_cadene/.cache/openpi/openpi-assets/checkpoints/pi0_aloha_sim_pytorch
-```
-"""
-
-import argparse
-import pathlib
-
-import jax
-import numpy as np
-import orbax.checkpoint as ocp
-import torch
-from jax.sharding import SingleDeviceSharding
-
-from lerobot.common.policies.pi0.configuration_pi0 import PI0Config
-from lerobot.common.policies.pi0.conversion_scripts.conversion_utils import (
- get_gemma_config,
- get_paligemma_config,
-)
-from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy
-
-PRECISIONS = {"bfloat16": torch.bfloat16, "float32": torch.float32, "float16": torch.float16}
-
-
-def slice_paligemma_state_dict(state_dict, config):
- suffix = "/value" if "img/embedding/kernel/value" in state_dict else ""
-
- # fmt: off
- # patch embeddings
- state_dict["paligemma.vision_tower.vision_model.embeddings.patch_embedding.weight"] = state_dict.pop(f"img/embedding/kernel{suffix}").transpose(
- 3, 2, 0, 1
- )
- state_dict["paligemma.vision_tower.vision_model.embeddings.patch_embedding.bias"] = state_dict.pop(f"img/embedding/bias{suffix}")
- # positional embeddings
- state_dict["paligemma.vision_tower.vision_model.embeddings.position_embedding.weight"] = state_dict.pop(f"img/pos_embedding{suffix}").reshape(
- -1, config.vision_config.hidden_size
- )
-
- # extract vision layers to be sliced at index 0. There are 27 layers in the base model.
- encoderblock_layernorm0_scale = state_dict.pop(f"img/Transformer/encoderblock/LayerNorm_0/scale{suffix}")
- encoderblock_layernorm0_bias = state_dict.pop(f"img/Transformer/encoderblock/LayerNorm_0/bias{suffix}")
- encoderblock_layernorm1_scale = state_dict.pop(f"img/Transformer/encoderblock/LayerNorm_1/scale{suffix}")
- encoderblock_layernorm1_bias = state_dict.pop(f"img/Transformer/encoderblock/LayerNorm_1/bias{suffix}")
-
-    encoderblock_mlp_dense0_kernel = state_dict.pop(f"img/Transformer/encoderblock/MlpBlock_0/Dense_0/kernel{suffix}")
-    encoderblock_mlp_dense0_bias = state_dict.pop(f"img/Transformer/encoderblock/MlpBlock_0/Dense_0/bias{suffix}")
-    encoderblock_mlp_dense1_kernel = state_dict.pop(f"img/Transformer/encoderblock/MlpBlock_0/Dense_1/kernel{suffix}")
-    encoderblock_mlp_dense1_bias = state_dict.pop(f"img/Transformer/encoderblock/MlpBlock_0/Dense_1/bias{suffix}")
-
- encoderblock_attention_0_key_kernel = state_dict.pop(f"img/Transformer/encoderblock/MultiHeadDotProductAttention_0/key/kernel{suffix}")
- encoderblock_attention_0_key_bias = state_dict.pop(f"img/Transformer/encoderblock/MultiHeadDotProductAttention_0/key/bias{suffix}")
- encoderblock_attention_0_value_kernel = state_dict.pop(f"img/Transformer/encoderblock/MultiHeadDotProductAttention_0/value/kernel{suffix}")
- encoderblock_attention_0_value_bias = state_dict.pop(f"img/Transformer/encoderblock/MultiHeadDotProductAttention_0/value/bias{suffix}")
- encoderblock_attention_0_query_kernel = state_dict.pop(f"img/Transformer/encoderblock/MultiHeadDotProductAttention_0/query/kernel{suffix}")
- encoderblock_attention_0_query_bias = state_dict.pop(f"img/Transformer/encoderblock/MultiHeadDotProductAttention_0/query/bias{suffix}")
- encoderblock_attention_0_out_kernel = state_dict.pop(f"img/Transformer/encoderblock/MultiHeadDotProductAttention_0/out/kernel{suffix}")
- encoderblock_attention_0_out_bias = state_dict.pop(f"img/Transformer/encoderblock/MultiHeadDotProductAttention_0/out/bias{suffix}")
-
- for i in range(config.vision_config.num_hidden_layers):
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.layer_norm1.weight"] = encoderblock_layernorm0_scale[i].transpose()
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.layer_norm1.bias"] = encoderblock_layernorm0_bias[i]
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.layer_norm2.weight"] = encoderblock_layernorm1_scale[i].transpose()
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.layer_norm2.bias"] = encoderblock_layernorm1_bias[i]
-
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.mlp.fc1.weight"] = encoderblock_mlp_dense0_kernel[i].transpose()
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.mlp.fc1.bias"] = encoderblock_mlp_dense0_bias[i]
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.mlp.fc2.weight"] = encoderblock_mlp_dense1_kernel[i].transpose()
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.mlp.fc2.bias"] = encoderblock_mlp_dense1_bias[i]
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.self_attn.k_proj.weight"] = encoderblock_attention_0_key_kernel[i].reshape(-1, config.vision_config.hidden_size).transpose()
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.self_attn.k_proj.bias"] = encoderblock_attention_0_key_bias[i].reshape(-1, config.vision_config.hidden_size).reshape(-1)
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.self_attn.v_proj.weight"] = encoderblock_attention_0_value_kernel[i].reshape(-1, config.vision_config.hidden_size).transpose()
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.self_attn.v_proj.bias"] = encoderblock_attention_0_value_bias[i].reshape(-1, config.vision_config.hidden_size).reshape(-1)
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.self_attn.q_proj.weight"] = encoderblock_attention_0_query_kernel[i].reshape(-1, config.vision_config.hidden_size).transpose()
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.self_attn.q_proj.bias"] = encoderblock_attention_0_query_bias[i].reshape(-1, config.vision_config.hidden_size).reshape(-1)
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.self_attn.out_proj.weight"] = encoderblock_attention_0_out_kernel[i].reshape(-1, config.vision_config.hidden_size).transpose()
- state_dict[f"paligemma.vision_tower.vision_model.encoder.layers.{i}.self_attn.out_proj.bias"] = encoderblock_attention_0_out_bias[i].reshape(-1, config.vision_config.hidden_size).reshape(-1)
-
- state_dict["paligemma.vision_tower.vision_model.post_layernorm.weight"] = state_dict.pop(f"img/Transformer/encoder_norm/scale{suffix}").transpose()
- state_dict["paligemma.vision_tower.vision_model.post_layernorm.bias"] = state_dict.pop(f"img/Transformer/encoder_norm/bias{suffix}")
-
- # multimodal projector
-
- state_dict['paligemma.multi_modal_projector.linear.weight'] = state_dict.pop(f"img/head/kernel{suffix}").transpose()
- state_dict['paligemma.multi_modal_projector.linear.bias'] = state_dict.pop(f"img/head/bias{suffix}")
-
- # text decoder (gemma)
- embedding_vector = state_dict.pop(f"llm/embedder/input_embedding{suffix}")
- state_dict["paligemma.language_model.model.embed_tokens.weight"] = embedding_vector
-
- # pop the einsum attention + mlp representations. There are 18 layers in gemma-2b.
-
- llm_attention_attn_vec_einsum = state_dict.pop(f"llm/layers/attn/attn_vec_einsum/w{suffix}")
- llm_attention_kv_einsum = state_dict.pop(f"llm/layers/attn/kv_einsum/w{suffix}")
- llm_attention_q_einsum = state_dict.pop(f"llm/layers/attn/q_einsum/w{suffix}")
-
- llm_mlp_gating_einsum = state_dict.pop(f"llm/layers/mlp/gating_einsum{suffix}")
- llm_mlp_linear = state_dict.pop(f"llm/layers/mlp/linear{suffix}")
- # TODO verify correctness of layer norm loading
-
- llm_input_layernorm = state_dict.pop(f"llm/layers/pre_attention_norm/scale{suffix}")
- llm_post_attention_layernorm = state_dict.pop(f"llm/layers/pre_ffw_norm/scale{suffix}")
-
- for i in range(config.text_config.num_hidden_layers):
- # llm_attention_q_einsum[i].shape = (8, 2048, 256)
- q_proj_weight_reshaped = llm_attention_q_einsum[i].transpose(0, 2, 1).reshape(config.text_config.num_attention_heads * config.text_config.head_dim, config.text_config.hidden_size)
-
- state_dict[f"paligemma.language_model.model.layers.{i}.self_attn.q_proj.weight"] = q_proj_weight_reshaped
-
- # llm_attention_kv_einsum[i, 0, 0].shape = (2048, 256)
- k_proj_weight_reshaped = llm_attention_kv_einsum[i, 0, 0].transpose()
- state_dict[f"paligemma.language_model.model.layers.{i}.self_attn.k_proj.weight"] = k_proj_weight_reshaped
- # llm_attention_kv_einsum[i, 1, 0].shape = (2048, 256)
- v_proj_weight_reshaped = llm_attention_kv_einsum[i, 1, 0].transpose()
- state_dict[f"paligemma.language_model.model.layers.{i}.self_attn.v_proj.weight"] = v_proj_weight_reshaped
-
- # output projection.
-
- # llm_attention_attn_vec_einsum[i].shape = (8, 256, 2048)
- o_proj_weight_reshaped = llm_attention_attn_vec_einsum[i].transpose(2, 0, 1).reshape(config.text_config.num_attention_heads * config.text_config.head_dim, config.text_config.hidden_size)
-
- state_dict[f"paligemma.language_model.model.layers.{i}.self_attn.o_proj.weight"] = o_proj_weight_reshaped
- # mlp layers
- gate_proj_weight = llm_mlp_gating_einsum[i, 0]
- state_dict[f"paligemma.language_model.model.layers.{i}.mlp.gate_proj.weight"] = gate_proj_weight.transpose()
- up_proj_weight = llm_mlp_gating_einsum[i, 1]
- state_dict[f"paligemma.language_model.model.layers.{i}.mlp.up_proj.weight"] = up_proj_weight.transpose()
- state_dict[f"paligemma.language_model.model.layers.{i}.mlp.down_proj.weight"] = llm_mlp_linear[i].transpose()
- state_dict[f"paligemma.language_model.model.layers.{i}.input_layernorm.weight"] = llm_input_layernorm[i]
- state_dict[f"paligemma.language_model.model.layers.{i}.post_attention_layernorm.weight"] = llm_post_attention_layernorm[i]
-
- state_dict["paligemma.language_model.model.norm.weight"] = state_dict.pop(f"llm/final_norm/scale{suffix}")
- state_dict["paligemma.language_model.lm_head.weight"] = embedding_vector # weights are tied.
-
- # fmt: on
- expert_dict = {}
- final_state_dict = {}
- for key, value in state_dict.items():
- if key not in [
- f"llm/final_norm_1/scale{suffix}",
- f"llm/layers/attn/attn_vec_einsum_1/w{suffix}",
- f"llm/layers/attn/kv_einsum_1/w{suffix}",
- f"llm/layers/attn/q_einsum_1/w{suffix}",
- f"llm/layers/mlp_1/gating_einsum{suffix}",
- f"llm/layers/mlp_1/linear{suffix}",
- f"llm/layers/pre_attention_norm_1/scale{suffix}",
- f"llm/layers/pre_ffw_norm_1/scale{suffix}",
- ]:
- final_state_dict[key] = torch.from_numpy(value)
- else:
- expert_dict[key] = value
-
- return final_state_dict, expert_dict
-
-
-def slice_gemma_state_dict(state_dict, config, num_expert=1):
- # fmt: off
- # text decoder (gemma)
- # no embedding vector, the expert just has the decoder layers
-
- embedding_vector = torch.zeros([config.vocab_size, config.hidden_size])
- state_dict["gemma_expert.model.embed_tokens.weight"] = embedding_vector
-
- # pop the einsum attention + mlp representations. There are 18 layers in gemma-2b.
-
- suffix = "/value" if f"llm/layers/attn/attn_vec_einsum_{num_expert}/w/value" in state_dict else ""
-
- llm_attention_attn_vec_einsum = state_dict.pop(f"llm/layers/attn/attn_vec_einsum_{num_expert}/w{suffix}")
- llm_attention_kv_einsum = state_dict.pop(f"llm/layers/attn/kv_einsum_{num_expert}/w{suffix}")
- llm_attention_q_einsum = state_dict.pop(f"llm/layers/attn/q_einsum_{num_expert}/w{suffix}")
-
- llm_mlp_gating_einsum = state_dict.pop(f"llm/layers/mlp_{num_expert}/gating_einsum{suffix}")
- llm_mlp_linear = state_dict.pop(f"llm/layers/mlp_{num_expert}/linear{suffix}")
- # TODO verify correctness of layer norm loading
-
- llm_input_layernorm = state_dict.pop(f"llm/layers/pre_attention_norm_{num_expert}/scale{suffix}")
- llm_post_attention_layernorm = state_dict.pop(f"llm/layers/pre_ffw_norm_{num_expert}/scale{suffix}")
-
- for i in range(config.num_hidden_layers):
- q_proj_weight_reshaped = llm_attention_q_einsum[i].transpose(0, 2, 1).reshape(config.num_attention_heads * config.head_dim, config.hidden_size)
-
- state_dict[f"gemma_expert.model.layers.{i}.self_attn.q_proj.weight"] = q_proj_weight_reshaped
-
- k_proj_weight_reshaped = llm_attention_kv_einsum[i, 0, 0].transpose()
- state_dict[f"gemma_expert.model.layers.{i}.self_attn.k_proj.weight"] = k_proj_weight_reshaped
- v_proj_weight_reshaped = llm_attention_kv_einsum[i, 1, 0].transpose()
- state_dict[f"gemma_expert.model.layers.{i}.self_attn.v_proj.weight"] = v_proj_weight_reshaped
-
- # output projection.
-
- # llm_attention_attn_vec_einsum[i].shape = (8, 256, 1024)
-        o_proj_weight_reshaped = llm_attention_attn_vec_einsum[i].reshape(config.num_attention_heads * config.head_dim, config.hidden_size).transpose(1, 0)  # .transpose(2, 0, 1).reshape(config.num_attention_heads * config.head_dim, config.hidden_size).transpose(1, 0)
-
- state_dict[f"gemma_expert.model.layers.{i}.self_attn.o_proj.weight"] = o_proj_weight_reshaped
- # mlp layers
- gate_proj_weight = llm_mlp_gating_einsum[i, 0]
- state_dict[f"gemma_expert.model.layers.{i}.mlp.gate_proj.weight"] = gate_proj_weight.transpose()
- up_proj_weight = llm_mlp_gating_einsum[i, 1]
- state_dict[f"gemma_expert.model.layers.{i}.mlp.up_proj.weight"] = up_proj_weight.transpose()
- state_dict[f"gemma_expert.model.layers.{i}.mlp.down_proj.weight"] = llm_mlp_linear[i].transpose()
- state_dict[f"gemma_expert.model.layers.{i}.input_layernorm.weight"] = llm_input_layernorm[i]
- state_dict[f"gemma_expert.model.layers.{i}.post_attention_layernorm.weight"] = llm_post_attention_layernorm[i]
-
- state_dict["gemma_expert.model.norm.weight"] = state_dict.pop(f"llm/final_norm_{num_expert}/scale{suffix}")
- state_dict["gemma_expert.lm_head.weight"] = embedding_vector # weights are tied. (and zeros here)
-
- # fmt: on
- final_state_dict = {}
- for key, value in state_dict.items():
- if not isinstance(value, torch.Tensor):
- final_state_dict[key] = torch.from_numpy(value)
- else:
- final_state_dict[key] = value
- return final_state_dict
-
-
-def flatten_for_memory(tree, parent_key=""):
- out = {}
- for k, v in tree.items():
- new_key = f"{parent_key}/{k}" if parent_key else k
- if isinstance(v, dict):
- out.update(flatten_for_memory(v, new_key))
- else:
- out[new_key] = np.array(v) # Ensure conversion to np.array for consistency
- return out
-
-
-def flatten_for_npz(tree, parent_key=""):
- out = {}
- for k, v in tree.items():
- new_key = f"{parent_key}/{k}" if parent_key else k
- if isinstance(v, dict):
- out.update(flatten_for_npz(v, new_key))
- else:
- # bf16/f32 here?
- out[new_key] = np.array(v)
- return out
-
-
-def slice_initial_orbax_checkpoint(checkpoint_dir: str):
- params_path = pathlib.Path(checkpoint_dir).resolve()
- checkpointer = ocp.PyTreeCheckpointer()
-
- metadata = checkpointer.metadata(params_path)
- print("Metadata keys:", list(metadata.keys()))
-
- params_name = "params"
-
- item = {params_name: metadata[params_name]}
- device = jax.local_devices()[0] # Use the first local device
- sharding = SingleDeviceSharding(device)
- restored = checkpointer.restore(
- params_path,
- ocp.args.PyTreeRestore(
- item=item,
- restore_args=jax.tree_util.tree_map(
- lambda _: ocp.ArrayRestoreArgs(
- restore_type=jax.Array, # or np.ndarray, but bf16 is annoying about it
- sharding=sharding,
- ),
- item,
- ),
- transforms={},
- ),
- )
- params = restored[params_name]
-
- # get params for PaliGemma
- pali_params = params["PaliGemma"]
- del params["PaliGemma"]
- pali_params_flat = flatten_for_npz(pali_params)
- return {"paligemma_params": pali_params_flat, "projection_params": params}
-
-
-def update_keys_with_prefix(d: dict, prefix: str) -> dict:
- """Update dictionary keys by adding a prefix."""
- return {f"{prefix}{key}": value for key, value in d.items()}
-
-
-def convert_pi0_checkpoint(checkpoint_dir: str, precision: str, tokenizer_id: str, output_path: str):
- # Break down orbax ckpts - they are in OCDBT
- initial_params = slice_initial_orbax_checkpoint(checkpoint_dir=checkpoint_dir)
- # process projection params
- keys = [
- "state_proj",
- "action_in_proj",
- "action_out_proj",
- "action_time_mlp_in",
- "action_time_mlp_out",
- ]
-
- projection_params = {}
- for key in keys:
- kernel_params = initial_params["projection_params"][key]["kernel"]
- bias_params = initial_params["projection_params"][key]["bias"]
- if isinstance(kernel_params, dict):
- weight = kernel_params["value"]
- bias = bias_params["value"]
- else:
- weight = kernel_params
- bias = bias_params
- projection_params[f"{key}.weight"] = torch.from_numpy(np.array(weight)).T
- projection_params[f"{key}.bias"] = torch.from_numpy(np.array(bias))
-
- # Process PaliGemma weights
- paligemma_config = get_paligemma_config(precision)
- paligemma_params, gemma_raw_dictionary = slice_paligemma_state_dict(
- initial_params["paligemma_params"], paligemma_config
- )
-
- # Process Gemma weights (at this stage they are unused)
- gemma_config = get_gemma_config(precision)
- gemma_params = slice_gemma_state_dict(gemma_raw_dictionary, config=gemma_config)
-
- # Instantiate model from configs
-
- if "pi0_aloha_sim" in checkpoint_dir:
- pi0_config = PI0Config(
- empty_cameras=2,
- adapt_to_pi_aloha=True,
- use_delta_joint_actions_aloha=False,
- )
- elif "pi0_aloha_towel" in checkpoint_dir:
- pi0_config = PI0Config(
- adapt_to_pi_aloha=True,
- use_delta_joint_actions_aloha=True,
- )
- elif "pi0_base" in checkpoint_dir:
- pi0_config = PI0Config(
- empty_cameras=0,
- adapt_to_pi_aloha=False,
- use_delta_joint_actions_aloha=False,
- )
- else:
-        raise ValueError(f"Unsupported checkpoint_dir: {checkpoint_dir}")
-
- # gemma_config=gemma_config, paligemma_config=paligemma_config)
- pi0_model = PI0Policy(pi0_config)
-
- paligemma_params = update_keys_with_prefix(paligemma_params, "model.paligemma_with_expert.")
- gemma_params = update_keys_with_prefix(gemma_params, "model.paligemma_with_expert.")
- projection_params = update_keys_with_prefix(projection_params, "model.")
-
- # load state dict
- torch_dtype = PRECISIONS[precision]
- pi0_model.load_state_dict({**paligemma_params, **gemma_params, **projection_params})
- pi0_model = pi0_model.to(torch_dtype)
- # pi0_tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
-
- pi0_model.save_pretrained(output_path, safe_serialization=True)
- # pi0_tokenizer.save_pretrained(output_path, dtype=torch_dtype)
-
- # assert that model loads properly
- del pi0_model
- PI0Policy.from_pretrained(output_path)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--checkpoint_dir",
- default="/raid/pablo/.cache/openpi/openpi-assets/checkpoints/pi0_aloha_sim/params",
- type=str,
- help="Path to the ocdbt checkpoint",
- )
-
- parser.add_argument(
- "--precision",
- choices=["float32", "bfloat16", "float16"],
- default="float32",
- type=str,
- help="Precision identifier for model conversion - should match the base checkpoint precision.",
- )
-    # The tokenizer appears to be identical to PaliGemma's.
-
- parser.add_argument(
- "--tokenizer_hub_id",
- default="google/paligemma-3b-pt-224",
- type=str,
- help="Hub path to the tokenizer to save",
- )
-
- parser.add_argument(
- "--output_path",
- required=True,
- type=str,
- help="Path to save converted weights to",
- )
-
- args = parser.parse_args()
- convert_pi0_checkpoint(
- checkpoint_dir=args.checkpoint_dir,
- precision=args.precision,
- tokenizer_id=args.tokenizer_hub_id,
- output_path=args.output_path,
- )
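
Before `load_state_dict`, each parameter group is namespaced so that its keys line up with `PI0Policy`'s module hierarchy. A toy illustration of that prefix-and-merge step (tensor values replaced by integer placeholders):

```python
# Illustrative: how the converted parameter groups are namespaced before loading.
def update_keys_with_prefix(d: dict, prefix: str) -> dict:
    return {f"{prefix}{key}": value for key, value in d.items()}

paligemma_params = {"paligemma.language_model.model.norm.weight": 0}
projection_params = {"state_proj.weight": 1}
merged = {
    **update_keys_with_prefix(paligemma_params, "model.paligemma_with_expert."),
    **update_keys_with_prefix(projection_params, "model."),
}
print(list(merged))
# ['model.paligemma_with_expert.paligemma.language_model.model.norm.weight', 'model.state_proj.weight']
```
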
diff --git a/lerobot/common/policies/pi0/flex_attention.py b/lerobot/common/policies/pi0/flex_attention.py
deleted file mode 100644
index 35628cdd..00000000
--- a/lerobot/common/policies/pi0/flex_attention.py
+++ /dev/null
@@ -1,141 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import torch
-import torch.nn.functional as F # noqa: N812
-from packaging.version import Version
-
-if Version(torch.__version__) > Version("2.5.0"):
-    # Flex attention is only available from torch 2.5 onwards
- from torch.nn.attention.flex_attention import (
- _mask_mod_signature,
- _round_up_to_multiple,
- create_block_mask,
- create_mask,
- flex_attention,
- )
-
-
-# @torch.compile(dynamic=False)
-def flex_attention_forward(
- attention_mask: torch.Tensor,
- batch_size: int,
- head_dim: int,
- query_states: torch.Tensor,
- key_states: torch.Tensor,
- value_states: torch.Tensor,
- scaling=None,
-):
- """
- This is defined out of classes to make compile happy.
- """
-
- original_dtype = query_states.dtype
- num_att_heads = 8
- num_key_value_heads = 1
- num_key_value_groups = num_att_heads // num_key_value_heads
-
- key_states = key_states[:, :, :, None, :]
- key_states = key_states.expand(
- batch_size, key_states.shape[1], num_key_value_heads, num_key_value_groups, head_dim
- )
- key_states = key_states.reshape(
- batch_size, key_states.shape[1], num_key_value_heads * num_key_value_groups, head_dim
- )
-
- value_states = value_states[:, :, :, None, :]
- value_states = value_states.expand(
- batch_size, value_states.shape[1], num_key_value_heads, num_key_value_groups, head_dim
- )
- value_states = value_states.reshape(
- batch_size, value_states.shape[1], num_key_value_heads * num_key_value_groups, head_dim
- )
-
- query_states = query_states.transpose(1, 2)
- key_states = key_states.transpose(1, 2)
- value_states = value_states.transpose(1, 2)
-
- query_states = query_states.to(torch.float32)
- key_states = key_states.to(torch.float32)
- value_states = value_states.to(torch.float32)
-
- causal_mask = attention_mask
- if causal_mask is not None:
- causal_mask = causal_mask[:, None, :, : key_states.shape[2]]
-
- if causal_mask.shape[1] == 1 and query_states.shape[1] > 1:
- causal_mask = causal_mask.expand(-1, query_states.shape[1], -1, -1)
-
- def precomputed_mask_factory(precomputed_mask: torch.Tensor) -> _mask_mod_signature:
- def mask_mod(b, h, q_idx, kv_idx):
- # Danger zone: if b,h,q_idx,kv_idx exceed the shape, device-side assert occurs.
- return precomputed_mask[b][h][q_idx][kv_idx]
-
- return mask_mod
-
- b_mask, h_mask, q_len, kv_len = causal_mask.shape # The shape of your mask
-
- block_size = 128
- q_len_rounded = _round_up_to_multiple(q_len, block_size)
- kv_len_rounded = _round_up_to_multiple(kv_len, block_size)
-
-    # *CRITICAL* we do need to pad the mask up to the rounded block size here, else we get a CUDA index error
-
- pad_q = q_len_rounded - q_len
- pad_k = kv_len_rounded - kv_len
-
- padded_causal_mask = F.pad(causal_mask, (0, pad_k, 0, pad_q), value=0.0)
- mask_mod_fn_orig = precomputed_mask_factory(padded_causal_mask)
-
- mask_4d = create_mask(
- mod_fn=mask_mod_fn_orig,
- B=b_mask,
- H=h_mask,
- Q_LEN=q_len_rounded,
- KV_LEN=kv_len_rounded,
- device=causal_mask.device,
- _compile=False,
- )
-
- mask_mod_fn_padded = precomputed_mask_factory(mask_4d)
- block_mask = create_block_mask(
- mask_mod=mask_mod_fn_padded,
- B=b_mask,
- H=h_mask,
- Q_LEN=q_len_rounded,
- KV_LEN=kv_len_rounded,
- BLOCK_SIZE=block_size,
- device=causal_mask.device,
- _compile=False,
- )
-
- # mask is applied inside the kernel, ideally more efficiently than score_mod.
- attn_output, attention_weights = flex_attention(
- query_states,
- key_states,
- value_states,
- block_mask=block_mask,
- enable_gqa=True, # because we shaped query/key states for GQA
- scale=head_dim**-0.5 if scaling is None else scaling,
- return_lse=True,
- )
-
- attn_output = attn_output.to(dtype=original_dtype)
- attn_output = attn_output.transpose(1, 2).contiguous() # [B, Q_LEN, H, head_dim]
- attn_output = attn_output.reshape(
- batch_size,
- -1,
- attn_output.shape[2] * attn_output.shape[3], # merges [H, head_dim]
- )
- return attn_output
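
The reshaping at the start of `flex_attention_forward` is a standard grouped-query-attention expansion: one key/value head is broadcast to the eight query heads before attention. A minimal CPU-only sketch of that expand-and-reshape step, using the same head counts as above (batch and sequence sizes are arbitrary):

```python
import torch

# Stand-alone illustration of the GQA key/value expansion: 8 query heads share 1 key/value head.
batch_size, seq_len, head_dim = 2, 5, 256
num_att_heads, num_key_value_heads = 8, 1
num_key_value_groups = num_att_heads // num_key_value_heads

key_states = torch.randn(batch_size, seq_len, num_key_value_heads, head_dim)
expanded = key_states[:, :, :, None, :].expand(
    batch_size, seq_len, num_key_value_heads, num_key_value_groups, head_dim
)
expanded = expanded.reshape(batch_size, seq_len, num_key_value_heads * num_key_value_groups, head_dim)
print(expanded.shape)  # torch.Size([2, 5, 8, 256])
```
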
diff --git a/lerobot/common/policies/pi0/modeling_pi0.py b/lerobot/common/policies/pi0/modeling_pi0.py
deleted file mode 100644
index 8ec29d32..00000000
--- a/lerobot/common/policies/pi0/modeling_pi0.py
+++ /dev/null
@@ -1,732 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 Physical Intelligence and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-π0: A Vision-Language-Action Flow Model for General Robot Control
-
-[Paper](https://www.physicalintelligence.company/download/pi0.pdf)
-[Jax code](https://github.com/Physical-Intelligence/openpi)
-
-Designed by Physical Intelligence. Ported from Jax by Hugging Face.
-
-Install pi0 extra dependencies:
-```bash
-pip install -e ".[pi0]"
-```
-
-Example of finetuning the pi0 pretrained model (`pi0_base` in `openpi`):
-```bash
-python lerobot/scripts/train.py \
---policy.path=lerobot/pi0 \
---dataset.repo_id=danaaubakirova/koch_test
-```
-
-Example of finetuning the pi0 neural network with PaliGemma and expert Gemma
-pretrained with VLM default parameters before pi0 finetuning:
-```bash
-python lerobot/scripts/train.py \
---policy.type=pi0 \
---dataset.repo_id=danaaubakirova/koch_test
-```
-
-Example of using the pi0 pretrained model outside LeRobot training framework:
-```python
-policy = Pi0Policy.from_pretrained("lerobot/pi0")
-```
-
-"""
-
-import math
-from collections import deque
-
-import torch
-import torch.nn.functional as F # noqa: N812
-from torch import Tensor, nn
-from transformers import AutoTokenizer
-
-from lerobot.common.constants import ACTION, OBS_ROBOT
-from lerobot.common.policies.normalize import Normalize, Unnormalize
-from lerobot.common.policies.pi0.configuration_pi0 import PI0Config
-from lerobot.common.policies.pi0.paligemma_with_expert import (
- PaliGemmaWithExpertConfig,
- PaliGemmaWithExpertModel,
-)
-from lerobot.common.policies.pretrained import PreTrainedPolicy
-from lerobot.common.utils.utils import get_safe_dtype
-
-
-def create_sinusoidal_pos_embedding(
-    time: torch.Tensor, dimension: int, min_period: float, max_period: float, device="cpu"
-) -> Tensor:
- """Computes sine-cosine positional embedding vectors for scalar positions."""
- if dimension % 2 != 0:
- raise ValueError(f"dimension ({dimension}) must be divisible by 2")
-
- if time.ndim != 1:
- raise ValueError("The time tensor is expected to be of shape `(batch_size, )`.")
-
- dtype = get_safe_dtype(torch.float64, device.type)
- fraction = torch.linspace(0.0, 1.0, dimension // 2, dtype=dtype, device=device)
- period = min_period * (max_period / min_period) ** fraction
-
- # Compute the outer product
- scaling_factor = 1.0 / period * 2 * math.pi
- sin_input = scaling_factor[None, :] * time[:, None]
- pos_emb = torch.cat([torch.sin(sin_input), torch.cos(sin_input)], dim=1)
- return pos_emb
-
-
-def sample_beta(alpha, beta, bsize, device):
- gamma1 = torch.empty((bsize,), device=device).uniform_(0, 1).pow(1 / alpha)
- gamma2 = torch.empty((bsize,), device=device).uniform_(0, 1).pow(1 / beta)
- return gamma1 / (gamma1 + gamma2)
-
-
-def make_att_2d_masks(pad_masks, att_masks):
- """Copied from big_vision.
-
-    Tokens can attend to valid input tokens whose cumulative att_masks value is
-    smaller than or equal to theirs. This way `att_masks` int[B, N] can be used
-    to set up several types of attention, for example:
-
- [[1 1 1 1 1 1]]: pure causal attention.
-
- [[0 0 0 1 1 1]]: prefix-lm attention. The first 3 tokens can attend between
- themselves and the last 3 tokens have a causal attention. The first
- entry could also be a 1 without changing behaviour.
-
- [[1 0 1 0 1 0 0 1 0 0]]: causal attention between 4 blocks. Tokens of a
- block can attend all previous blocks and all tokens on the same block.
-
-    Args:
-      pad_masks: bool[B, N], True if the token is part of the input, False if padding.
-      att_masks: int32[B, N], 1 where previous tokens cannot depend on the token
-        and 0 where it shares the same attention mask as the previous token.
- """
- if att_masks.ndim != 2:
- raise ValueError(att_masks.ndim)
- if pad_masks.ndim != 2:
- raise ValueError(pad_masks.ndim)
-
- cumsum = torch.cumsum(att_masks, dim=1)
- att_2d_masks = cumsum[:, None, :] <= cumsum[:, :, None]
- pad_2d_masks = pad_masks[:, None, :] * pad_masks[:, :, None]
- att_2d_masks = att_2d_masks & pad_2d_masks
- return att_2d_masks
-
-
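
A worked example of the mask construction above, using the prefix-LM pattern from the docstring (the expression is the same as in `make_att_2d_masks`, inlined here for a single 6-token sequence):

```python
import torch

# Prefix-LM example: the first three tokens attend to each other, the last three are causal.
att_masks = torch.tensor([[0, 0, 0, 1, 1, 1]])
pad_masks = torch.ones(1, 6, dtype=torch.bool)

cumsum = torch.cumsum(att_masks, dim=1)
att_2d = (cumsum[:, None, :] <= cumsum[:, :, None]) & (pad_masks[:, None, :] * pad_masks[:, :, None])
print(att_2d[0].int())
# tensor([[1, 1, 1, 0, 0, 0],
#         [1, 1, 1, 0, 0, 0],
#         [1, 1, 1, 0, 0, 0],
#         [1, 1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 1, 0],
#         [1, 1, 1, 1, 1, 1]])
```
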
-def resize_with_pad(img, width, height, pad_value=-1):
-    # Effectively a no-op when the image already matches the target width and height.
- if img.ndim != 4:
- raise ValueError(f"(b,c,h,w) expected, but {img.shape}")
-
- cur_height, cur_width = img.shape[2:]
-
- ratio = max(cur_width / width, cur_height / height)
- resized_height = int(cur_height / ratio)
- resized_width = int(cur_width / ratio)
- resized_img = F.interpolate(
- img, size=(resized_height, resized_width), mode="bilinear", align_corners=False
- )
-
- pad_height = max(0, int(height - resized_height))
- pad_width = max(0, int(width - resized_width))
-
- # pad on left and top of image
- padded_img = F.pad(resized_img, (pad_width, 0, pad_height, 0), value=pad_value)
- return padded_img
-
-
-def pad_vector(vector, new_dim):
- """Can be (batch_size x sequence_length x features_dimension)
- or (batch_size x features_dimension)
- """
- if vector.shape[-1] == new_dim:
- return vector
- shape = list(vector.shape)
- current_dim = shape[-1]
- shape[-1] = new_dim
- new_vector = torch.zeros(*shape, dtype=vector.dtype, device=vector.device)
- new_vector[..., :current_dim] = vector
- return new_vector
-
-
-def normalize(x, min_val, max_val):
- return (x - min_val) / (max_val - min_val)
-
-
-def unnormalize(x, min_val, max_val):
- return x * (max_val - min_val) + min_val
-
-
-def safe_arcsin(value):
- # This ensures that the input stays within
- # [−1,1] to avoid invalid values for arcsin
- return torch.arcsin(torch.clamp(value, -1.0, 1.0))
-
-
-def aloha_gripper_to_angular(value):
- # Aloha transforms the gripper positions into a linear space. The following code
- # reverses this transformation to be consistent with pi0 which is pretrained in
- # angular space.
- #
- # These values are coming from the Aloha code:
- # PUPPET_GRIPPER_POSITION_OPEN, PUPPET_GRIPPER_POSITION_CLOSED
- value = unnormalize(value, min_val=0.01844, max_val=0.05800)
-
- # This is the inverse of the angular to linear transformation inside the Interbotix code.
- def linear_to_radian(linear_position, arm_length, horn_radius):
- value = (horn_radius**2 + linear_position**2 - arm_length**2) / (2 * horn_radius * linear_position)
- return safe_arcsin(value)
-
- # The constants are taken from the Interbotix code.
- value = linear_to_radian(value, arm_length=0.036, horn_radius=0.022)
-
- # Normalize to [0, 1].
- # The values 0.4 and 1.5 were measured on an actual Trossen robot.
- return normalize(value, min_val=0.4, max_val=1.5)
-
-
-def aloha_gripper_from_angular(value):
- # Convert from the gripper position used by pi0 to the gripper position that is used by Aloha.
- # Note that the units are still angular but the range is different.
-
- # The values 0.4 and 1.5 were measured on an actual Trossen robot.
- value = unnormalize(value, min_val=0.4, max_val=1.5)
-
- # These values are coming from the Aloha code:
- # PUPPET_GRIPPER_JOINT_OPEN, PUPPET_GRIPPER_JOINT_CLOSE
- return normalize(value, min_val=-0.6213, max_val=1.4910)
-
-
-def aloha_gripper_from_angular_inv(value):
- # Directly inverts the gripper_from_angular function.
- value = unnormalize(value, min_val=-0.6213, max_val=1.4910)
- return normalize(value, min_val=0.4, max_val=1.5)
-
-
-class PI0Policy(PreTrainedPolicy):
- """Wrapper class around PI0FlowMatching model to train and run inference within LeRobot."""
-
- config_class = PI0Config
- name = "pi0"
-
- def __init__(
- self,
- config: PI0Config,
- dataset_stats: dict[str, dict[str, Tensor]] | None = None,
- ):
- """
- Args:
- config: Policy configuration class instance or None, in which case the default instantiation of
- the configuration class is used.
- dataset_stats: Dataset statistics to be used for normalization. If not passed here, it is expected
- that they will be passed with a call to `load_state_dict` before the policy is used.
- """
-
- super().__init__(config)
- config.validate_features()
- self.config = config
- self.normalize_inputs = Normalize(config.input_features, config.normalization_mapping, dataset_stats)
- self.normalize_targets = Normalize(
- config.output_features, config.normalization_mapping, dataset_stats
- )
- self.unnormalize_outputs = Unnormalize(
- config.output_features, config.normalization_mapping, dataset_stats
- )
-
- self.language_tokenizer = AutoTokenizer.from_pretrained("google/paligemma-3b-pt-224")
- self.model = PI0FlowMatching(config)
-
- self.reset()
-
- def reset(self):
- """This should be called whenever the environment is reset."""
- self._action_queue = deque([], maxlen=self.config.n_action_steps)
-
- def get_optim_params(self) -> dict:
- return self.parameters()
-
- @torch.no_grad
- def select_action(self, batch: dict[str, Tensor], noise: Tensor | None = None) -> Tensor:
- """Select a single action given environment observations.
-
- This method wraps `select_actions` in order to return one action at a time for execution in the
- environment. It works by managing the actions in a queue and only calling `select_actions` when the
- queue is empty.
- """
- self.eval()
-
- if self.config.adapt_to_pi_aloha:
- batch[OBS_ROBOT] = self._pi_aloha_decode_state(batch[OBS_ROBOT])
-
- batch = self.normalize_inputs(batch)
-
- # Action queue logic for n_action_steps > 1. When the action_queue is depleted, populate it by
- # querying the policy.
- if len(self._action_queue) == 0:
- images, img_masks = self.prepare_images(batch)
- state = self.prepare_state(batch)
- lang_tokens, lang_masks = self.prepare_language(batch)
-
- actions = self.model.sample_actions(
- images, img_masks, lang_tokens, lang_masks, state, noise=noise
- )
-
- # Unpad actions
- original_action_dim = self.config.action_feature.shape[0]
- actions = actions[:, :, :original_action_dim]
-
- actions = self.unnormalize_outputs({"action": actions})["action"]
-
- if self.config.adapt_to_pi_aloha:
- actions = self._pi_aloha_encode_actions(actions)
-
- # `self.model.forward` returns a (batch_size, n_action_steps, action_dim) tensor, but the queue
- # effectively has shape (n_action_steps, batch_size, *), hence the transpose.
- self._action_queue.extend(actions.transpose(0, 1))
- return self._action_queue.popleft()
-
- def forward(self, batch: dict[str, Tensor], noise=None, time=None) -> tuple[Tensor, dict[str, Tensor]]:
- """Do a full training forward pass to compute the loss"""
- if self.config.adapt_to_pi_aloha:
- batch[OBS_ROBOT] = self._pi_aloha_decode_state(batch[OBS_ROBOT])
- batch[ACTION] = self._pi_aloha_encode_actions_inv(batch[ACTION])
-
- batch = self.normalize_inputs(batch)
- batch = self.normalize_targets(batch)
-
- images, img_masks = self.prepare_images(batch)
- state = self.prepare_state(batch)
- lang_tokens, lang_masks = self.prepare_language(batch)
- actions = self.prepare_action(batch)
- actions_is_pad = batch.get("action_is_pad")
-
- loss_dict = {}
- losses = self.model.forward(images, img_masks, lang_tokens, lang_masks, state, actions, noise, time)
- loss_dict["losses_after_forward"] = losses.clone()
-
- if actions_is_pad is not None:
- in_episode_bound = ~actions_is_pad
- losses = losses * in_episode_bound.unsqueeze(-1)
- loss_dict["losses_after_in_ep_bound"] = losses.clone()
-
- # Remove padding
- losses = losses[:, :, : self.config.max_action_dim]
- loss_dict["losses_after_rm_padding"] = losses.clone()
-
- # For backward pass
- loss = losses.mean()
- # For logging
- loss_dict["l2_loss"] = loss.item()
-
- return loss, loss_dict
-
- def prepare_images(self, batch):
- """Apply Pi0 preprocessing to the images, like resizing to 224x224 and padding to keep aspect ratio, and
- convert pixel range from [0.0, 1.0] to [-1.0, 1.0] as requested by SigLIP.
- """
- images = []
- img_masks = []
-
- present_img_keys = [key for key in self.config.image_features if key in batch]
- missing_img_keys = [key for key in self.config.image_features if key not in batch]
-
- if len(present_img_keys) == 0:
- raise ValueError(
- f"All image features are missing from the batch. At least one expected. (batch: {batch.keys()}) (image_features:{self.config.image_features})"
- )
-
- # Preprocess image features present in the batch
- for key in present_img_keys:
- img = batch[key]
-
- if self.config.resize_imgs_with_padding is not None:
- img = resize_with_pad(img, *self.config.resize_imgs_with_padding, pad_value=0)
-
- # Normalize from range [0,1] to [-1,1] as expected by siglip
- img = img * 2.0 - 1.0
-
- bsize = img.shape[0]
- device = img.device
- mask = torch.ones(bsize, dtype=torch.bool, device=device)
- images.append(img)
- img_masks.append(mask)
-
- # Create image features not present in the batch
- # as fully 0 padded images.
- for num_empty_cameras in range(len(missing_img_keys)):
- if num_empty_cameras >= self.config.empty_cameras:
- break
- img = torch.ones_like(img) * -1
- mask = torch.zeros_like(mask)
- images.append(img)
- img_masks.append(mask)
-
- return images, img_masks
-
- def prepare_language(self, batch) -> tuple[Tensor, Tensor]:
- """Tokenize the text input"""
- device = batch[OBS_ROBOT].device
- tasks = batch["task"]
-
- # PaliGemma prompt has to end with a new line
- tasks = [task if task.endswith("\n") else f"{task}\n" for task in tasks]
-
-        tokenized_prompt = self.language_tokenizer(
- tasks,
- padding="max_length",
- padding_side="right",
- max_length=self.config.tokenizer_max_length,
- return_tensors="pt",
- )
- lang_tokens = tokenized_prompt["input_ids"].to(device=device)
- lang_masks = tokenized_prompt["attention_mask"].to(device=device, dtype=torch.bool)
-
- return lang_tokens, lang_masks
-
- def _pi_aloha_decode_state(self, state):
- # Flip the joints.
- for motor_idx in [1, 2, 8, 9]:
- state[:, motor_idx] *= -1
- # Reverse the gripper transformation that is being applied by the Aloha runtime.
- for motor_idx in [6, 13]:
- state[:, motor_idx] = aloha_gripper_to_angular(state[:, motor_idx])
- return state
-
- def _pi_aloha_encode_actions(self, actions):
- # Flip the joints.
- for motor_idx in [1, 2, 8, 9]:
- actions[:, :, motor_idx] *= -1
- # Reverse the gripper transformation that is being applied by the Aloha runtime.
- for motor_idx in [6, 13]:
- actions[:, :, motor_idx] = aloha_gripper_from_angular(actions[:, :, motor_idx])
- return actions
-
- def _pi_aloha_encode_actions_inv(self, actions):
- # Flip the joints again.
- for motor_idx in [1, 2, 8, 9]:
- actions[:, :, motor_idx] *= -1
- # Reverse the gripper transformation that is being applied by the Aloha runtime.
- for motor_idx in [6, 13]:
- actions[:, :, motor_idx] = aloha_gripper_from_angular_inv(actions[:, :, motor_idx])
- return actions
-
- def prepare_state(self, batch):
- """Pad state"""
- state = pad_vector(batch[OBS_ROBOT], self.config.max_state_dim)
- return state
-
- def prepare_action(self, batch):
- """Pad action"""
- actions = pad_vector(batch[ACTION], self.config.max_action_dim)
- return actions
-
-
-class PI0FlowMatching(nn.Module):
- """
- π0: A Vision-Language-Action Flow Model for General Robot Control
-
- [Paper](https://www.physicalintelligence.company/download/pi0.pdf)
- [Jax code](https://github.com/Physical-Intelligence/openpi)
-
- Designed by Physical Intelligence. Ported from Jax by Hugging Face.
- ┌──────────────────────────────┐
- │ actions │
- │ ▲ │
- │ ┌┴─────┐ │
- │ kv cache │Gemma │ │
- │ ┌──────────►│Expert│ │
- │ │ │ │ │
- │ ┌┴────────┐ │x 10 │ │
- │ │ │ └▲──▲──┘ │
- │ │PaliGemma│ │ │ │
- │ │ │ │ robot state │
- │ │ │ noise │
- │ └▲──▲─────┘ │
- │ │ │ │
- │ │ image(s) │
- │ language tokens │
- └──────────────────────────────┘
- """
-
- def __init__(self, config):
- super().__init__()
- self.config = config
-
-        paligemma_with_expert_config = PaliGemmaWithExpertConfig(
-            freeze_vision_encoder=self.config.freeze_vision_encoder,
-            train_expert_only=self.config.train_expert_only,
-            attention_implementation=self.config.attention_implementation,
-        )
-        self.paligemma_with_expert = PaliGemmaWithExpertModel(paligemma_with_expert_config)
-
- # Projections are float32
- self.state_proj = nn.Linear(self.config.max_state_dim, self.config.proj_width)
- self.action_in_proj = nn.Linear(self.config.max_action_dim, self.config.proj_width)
- self.action_out_proj = nn.Linear(self.config.proj_width, self.config.max_action_dim)
-
- self.action_time_mlp_in = nn.Linear(self.config.proj_width * 2, self.config.proj_width)
- self.action_time_mlp_out = nn.Linear(self.config.proj_width, self.config.proj_width)
-
- self.set_requires_grad()
-
- def set_requires_grad(self):
- for params in self.state_proj.parameters():
- params.requires_grad = self.config.train_state_proj
-
- def sample_noise(self, shape, device):
- noise = torch.normal(
- mean=0.0,
- std=1.0,
- size=shape,
- dtype=torch.float32,
- device=device,
- )
- return noise
-
- def sample_time(self, bsize, device):
- time_beta = sample_beta(1.5, 1.0, bsize, device)
- time = time_beta * 0.999 + 0.001
- return time.to(dtype=torch.float32, device=device)
-
- def embed_prefix(
- self, images, img_masks, lang_tokens, lang_masks
- ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
- """Embed images with SigLIP and language tokens with embedding layer to prepare
- for PaliGemma transformer processing.
- """
- # TODO: avoid list in python and torch.cat ; prefer pre-allocation with torch.empty
- embs = []
- pad_masks = []
- att_masks = []
-
- # TODO: remove for loop
-        for img, img_mask in zip(images, img_masks, strict=False):
- img_emb = self.paligemma_with_expert.embed_image(img)
- img_emb = img_emb.to(dtype=torch.bfloat16)
-
- # Normalize image embeddings
- img_emb_dim = img_emb.shape[-1]
- img_emb = img_emb * torch.tensor(img_emb_dim**0.5, dtype=img_emb.dtype, device=img_emb.device)
-
- bsize, num_img_embs = img_emb.shape[:2]
- img_mask = img_mask[:, None].expand(bsize, num_img_embs)
-
- embs.append(img_emb)
- pad_masks.append(img_mask)
-
- # Create attention masks so that image tokens attend to each other
- att_masks += [0] * num_img_embs
-
- lang_emb = self.paligemma_with_expert.embed_language_tokens(lang_tokens)
-
- # Normalize language embeddings
- lang_emb_dim = lang_emb.shape[-1]
- lang_emb = lang_emb * math.sqrt(lang_emb_dim)
-
- embs.append(lang_emb)
- pad_masks.append(lang_masks)
-
- # full attention between image and language inputs
- num_lang_embs = lang_emb.shape[1]
- att_masks += [0] * num_lang_embs
-
- embs = torch.cat(embs, dim=1)
- pad_masks = torch.cat(pad_masks, dim=1)
- att_masks = torch.tensor(att_masks, dtype=torch.bool, device=pad_masks.device)
- att_masks = att_masks[None, :].expand(bsize, len(att_masks))
-
- return embs, pad_masks, att_masks
-
- def embed_suffix(self, state, noisy_actions, timestep):
- """Embed state, noisy_actions, timestep to prepare for Expert Gemma processing."""
- embs = []
- pad_masks = []
- att_masks = []
-
- # Embed state
- state_emb = self.state_proj(state)
- state_emb = state_emb.to(dtype=torch.bfloat16)
- embs.append(state_emb[:, None, :])
- bsize = state_emb.shape[0]
- dtype = state_emb.dtype
- device = state_emb.device
-
- state_mask = torch.ones(bsize, 1, dtype=torch.bool, device=device)
- pad_masks.append(state_mask)
-
- # Set attention masks so that image and language inputs do not attend to state or actions
- att_masks += [1]
-
- # Embed timestep using sine-cosine positional encoding with sensitivity in the range [0, 1]
- time_emb = create_sinusoidal_pos_embedding(
- timestep, self.config.proj_width, min_period=4e-3, max_period=4.0, device=device
- )
- time_emb = time_emb.type(dtype=dtype)
-
- # Fuse timestep + action information using an MLP
- action_emb = self.action_in_proj(noisy_actions)
-
- time_emb = time_emb[:, None, :].expand_as(action_emb)
- action_time_emb = torch.cat([action_emb, time_emb], dim=2)
-
- action_time_emb = self.action_time_mlp_in(action_time_emb)
- action_time_emb = F.silu(action_time_emb) # swish == silu
- action_time_emb = self.action_time_mlp_out(action_time_emb)
-
- # Add to input tokens
- embs.append(action_time_emb)
-
- bsize, action_time_dim = action_time_emb.shape[:2]
- action_time_mask = torch.ones(bsize, action_time_dim, dtype=torch.bool, device=device)
- pad_masks.append(action_time_mask)
-
- # Set attention masks so that image, language and state inputs do not attend to action tokens
- att_masks += [1] + ([0] * (self.config.n_action_steps - 1))
-
- embs = torch.cat(embs, dim=1)
- pad_masks = torch.cat(pad_masks, dim=1)
- att_masks = torch.tensor(att_masks, dtype=embs.dtype, device=embs.device)
- att_masks = att_masks[None, :].expand(bsize, len(att_masks))
-
- return embs, pad_masks, att_masks
-
- def forward(
- self, images, img_masks, lang_tokens, lang_masks, state, actions, noise=None, time=None
- ) -> Tensor:
- """Do a full training forward pass and compute the loss (batch_size x num_steps x num_motors)"""
- if noise is None:
- noise = self.sample_noise(actions.shape, actions.device)
-
- if time is None:
- time = self.sample_time(actions.shape[0], actions.device)
-
- time_expanded = time[:, None, None]
- x_t = time_expanded * noise + (1 - time_expanded) * actions
- u_t = noise - actions
-
- prefix_embs, prefix_pad_masks, prefix_att_masks = self.embed_prefix(
- images, img_masks, lang_tokens, lang_masks
- )
- suffix_embs, suffix_pad_masks, suffix_att_masks = self.embed_suffix(state, x_t, time)
-
- pad_masks = torch.cat([prefix_pad_masks, suffix_pad_masks], dim=1)
- att_masks = torch.cat([prefix_att_masks, suffix_att_masks], dim=1)
-
- att_2d_masks = make_att_2d_masks(pad_masks, att_masks)
- position_ids = torch.cumsum(pad_masks, dim=1) - 1
-
- (_, suffix_out), _ = self.paligemma_with_expert.forward(
- attention_mask=att_2d_masks,
- position_ids=position_ids,
- past_key_values=None,
- inputs_embeds=[prefix_embs, suffix_embs],
- use_cache=False,
- fill_kv_cache=False,
- )
- suffix_out = suffix_out[:, -self.config.n_action_steps :]
- # Original openpi code, upcast attention output
- suffix_out = suffix_out.to(dtype=torch.float32)
- v_t = self.action_out_proj(suffix_out)
-
- losses = F.mse_loss(u_t, v_t, reduction="none")
- return losses
-
- def sample_actions(self, images, img_masks, lang_tokens, lang_masks, state, noise=None) -> Tensor:
- """Do a full inference forward and compute the action (batch_size x num_steps x num_motors)"""
- bsize = state.shape[0]
- device = state.device
-
- if noise is None:
- actions_shape = (bsize, self.config.n_action_steps, self.config.max_action_dim)
- noise = self.sample_noise(actions_shape, device)
-
- prefix_embs, prefix_pad_masks, prefix_att_masks = self.embed_prefix(
- images, img_masks, lang_tokens, lang_masks
- )
- prefix_att_2d_masks = make_att_2d_masks(prefix_pad_masks, prefix_att_masks)
- prefix_position_ids = torch.cumsum(prefix_pad_masks, dim=1) - 1
-
- # Compute image and language key value cache
- _, past_key_values = self.paligemma_with_expert.forward(
- attention_mask=prefix_att_2d_masks,
- position_ids=prefix_position_ids,
- past_key_values=None,
- inputs_embeds=[prefix_embs, None],
- use_cache=self.config.use_cache,
- fill_kv_cache=True,
- )
-
- dt = -1.0 / self.config.num_steps
- dt = torch.tensor(dt, dtype=torch.float32, device=device)
-
- x_t = noise
- time = torch.tensor(1.0, dtype=torch.float32, device=device)
- while time >= -dt / 2:
- expanded_time = time.expand(bsize)
- v_t = self.denoise_step(
- state,
- prefix_pad_masks,
- past_key_values,
- x_t,
- expanded_time,
- )
-
- # Euler step
- x_t += dt * v_t
- time += dt
- return x_t
-
- def denoise_step(
- self,
- state,
- prefix_pad_masks,
- past_key_values,
- x_t,
- timestep,
- ):
- """Apply one denoising step of the noise `x_t` at a given timestep."""
- suffix_embs, suffix_pad_masks, suffix_att_masks = self.embed_suffix(state, x_t, timestep)
-
- suffix_len = suffix_pad_masks.shape[1]
- batch_size = prefix_pad_masks.shape[0]
- prefix_len = prefix_pad_masks.shape[1]
- prefix_pad_2d_masks = prefix_pad_masks[:, None, :].expand(batch_size, suffix_len, prefix_len)
-
- suffix_att_2d_masks = make_att_2d_masks(suffix_pad_masks, suffix_att_masks)
-
- full_att_2d_masks = torch.cat([prefix_pad_2d_masks, suffix_att_2d_masks], dim=2)
-
- prefix_offsets = torch.sum(prefix_pad_masks, dim=-1)[:, None]
- position_ids = prefix_offsets + torch.cumsum(suffix_pad_masks, dim=1) - 1
-
- outputs_embeds, _ = self.paligemma_with_expert.forward(
- attention_mask=full_att_2d_masks,
- position_ids=position_ids,
- past_key_values=past_key_values,
- inputs_embeds=[None, suffix_embs],
- use_cache=self.config.use_cache,
- fill_kv_cache=False,
- )
- suffix_out = outputs_embeds[1]
- suffix_out = suffix_out[:, -self.config.n_action_steps :]
- suffix_out = suffix_out.to(dtype=torch.float32)
- v_t = self.action_out_proj(suffix_out)
- return v_t
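The removed `modeling_pi0.py` above trains with conditional flow matching: the network regresses the velocity `u_t = noise - actions` at interpolated points `x_t = t * noise + (1 - t) * actions`, and `sample_actions` integrates from `t = 1` (pure noise) down to `t = 0` with Euler steps of size `dt = -1 / num_steps`. Below is a minimal, self-contained sketch of that integration scheme; `euler_sample` and the toy `velocity_fn` are illustrative names, not part of the removed API.

```python
import torch


def euler_sample(velocity_fn, noise: torch.Tensor, num_steps: int = 10) -> torch.Tensor:
    """Integrate x_t from t=1 (noise) to t=0 (actions) with fixed Euler steps,
    mirroring the loop in the removed sample_actions: x <- x + dt * v_t, dt = -1/num_steps."""
    dt = -1.0 / num_steps
    x_t = noise.clone()
    t = 1.0
    while t >= -dt / 2:  # stop once t has (numerically) reached 0
        x_t = x_t + dt * velocity_fn(x_t, t)
        t += dt
    return x_t


# Toy check: with the exact target velocity u_t = noise - actions,
# the sampler recovers the actions from pure noise.
actions = torch.randn(2, 50, 32)  # (batch, action horizon, action dim)
noise = torch.randn_like(actions)
recovered = euler_sample(lambda x, t: noise - actions, noise)
assert torch.allclose(recovered, actions, atol=1e-4)
```

The `-dt / 2` stopping criterion is the same trick used in the removed loop: it runs exactly `num_steps` iterations even when floating-point drift keeps `t` from landing exactly on zero.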
diff --git a/lerobot/common/policies/pi0/paligemma_with_expert.py b/lerobot/common/policies/pi0/paligemma_with_expert.py
deleted file mode 100644
index 76e2ce60..00000000
--- a/lerobot/common/policies/pi0/paligemma_with_expert.py
+++ /dev/null
@@ -1,417 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import List, Optional, Union
-
-import torch
-import torch.version
-from transformers.cache_utils import Cache
-from torch import nn
-from transformers import (
- AutoConfig,
- GemmaForCausalLM,
- PaliGemmaForConditionalGeneration,
- PretrainedConfig,
- PreTrainedModel,
-)
-from transformers.models.auto import CONFIG_MAPPING
-
-from lerobot.common.policies.pi0.flex_attention import flex_attention_forward
-
-
-def apply_rope(x, positions, max_wavelength=10_000):
- """
- Applies RoPE positions [B, L] to x [B, L, H, D].
- """
- d_half = x.shape[-1] // 2
- device = x.device
- dtype = x.dtype
- x = x.to(torch.float32)
-
- freq_exponents = (2.0 / x.shape[-1]) * torch.arange(d_half, dtype=torch.float32, device=device)
- timescale = max_wavelength**freq_exponents
- radians = positions[..., None].to(torch.float32) / timescale[None, None, :].to(torch.float32)
-
- radians = radians[..., None, :]
-
- sin = torch.sin(radians) # .to(dtype=dtype)
- cos = torch.cos(radians) # .to(dtype=dtype)
-
- x1, x2 = x.split(d_half, dim=-1)
- res = torch.empty_like(x)
- res[..., :d_half] = x1 * cos - x2 * sin
- res[..., d_half:] = x2 * cos + x1 * sin
-
- return res.to(dtype)
-
-
-class PaliGemmaWithExpertConfig(PretrainedConfig):
- model_type = "PaliGemmaWithExpertModel"
- sub_configs = {"paligemma_config": AutoConfig, "gemma_expert_config": AutoConfig}
-
- def __init__(
- self,
- paligemma_config: dict | None = None,
- gemma_expert_config: dict | None = None,
- freeze_vision_encoder: bool = True,
- train_expert_only: bool = True,
- attention_implementation: str = "eager",
- **kwargs,
- ):
- self.freeze_vision_encoder = freeze_vision_encoder
- self.train_expert_only = train_expert_only
- self.attention_implementation = attention_implementation
-
- if paligemma_config is None:
- # Default config from Pi0
- self.paligemma_config = CONFIG_MAPPING["paligemma"](
- transformers_version="4.48.1",
- _vocab_size=257152,
- bos_token_id=2,
- eos_token_id=1,
- hidden_size=2048,
- image_token_index=257152,
- model_type="paligemma",
- pad_token_id=0,
- projection_dim=2048,
- text_config={
- "hidden_activation": "gelu_pytorch_tanh",
- "hidden_size": 2048,
- "intermediate_size": 16384,
- "model_type": "gemma",
- "num_attention_heads": 8,
- "num_hidden_layers": 18,
- "num_image_tokens": 256,
- "num_key_value_heads": 1,
- "torch_dtype": "float32",
- "vocab_size": 257152,
- },
- vision_config={
- "hidden_size": 1152,
- "intermediate_size": 4304,
- "model_type": "siglip_vision_model",
- "num_attention_heads": 16,
- "num_hidden_layers": 27,
- "num_image_tokens": 256,
- "patch_size": 14,
- "projection_dim": 2048,
- "projector_hidden_act": "gelu_fast",
- "torch_dtype": "float32",
- "vision_use_head": False,
- },
- )
- elif isinstance(paligemma_config, dict):
- # Override Pi0 default config for PaliGemma
- if "model_type" not in paligemma_config:
- paligemma_config["model_type"] = "paligemma"
-
- cfg_cls = CONFIG_MAPPING[paligemma_config["model_type"]]
- self.paligemma_config = cfg_cls(**paligemma_config)
-
- if gemma_expert_config is None:
- # Default config from Pi0
- self.gemma_expert_config = CONFIG_MAPPING["gemma"](
- attention_bias=False,
- attention_dropout=0.0,
- bos_token_id=2,
- eos_token_id=1,
- head_dim=256,
- hidden_act="gelu_pytorch_tanh",
- hidden_activation="gelu_pytorch_tanh",
- hidden_size=1024,
- initializer_range=0.02,
- intermediate_size=4096,
- max_position_embeddings=8192,
- model_type="gemma",
- num_attention_heads=8,
- num_hidden_layers=18,
- num_key_value_heads=1,
- pad_token_id=0,
- rms_norm_eps=1e-06,
- rope_theta=10000.0,
- torch_dtype="float32",
- transformers_version="4.48.1",
- use_cache=True,
- vocab_size=257152,
- )
- elif isinstance(gemma_expert_config, dict):
- # Override Pi0 default config for Gemma Expert
- if "model_type" not in gemma_expert_config:
- gemma_expert_config["model_type"] = "gemma"
-
- cfg_cls = CONFIG_MAPPING[gemma_expert_config["model_type"]]
- self.gemma_expert_config = cfg_cls(**gemma_expert_config)
-
- super().__init__(**kwargs)
-
- def __post_init__(self):
- super().__post_init__()
- if self.train_expert_only and not self.freeze_vision_encoder:
- raise ValueError(
- "You set `freeze_vision_encoder=False` and `train_expert_only=True` which are not compatible."
- )
-
- if self.attention_implementation not in ["eager", "fa2", "flex"]:
- raise ValueError(
- f"Wrong value provided for `attention_implementation` ({self.attention_implementation}). Expected 'eager', 'fa2' or 'flex'."
- )
-
-
-class PaliGemmaWithExpertModel(PreTrainedModel):
- config_class = PaliGemmaWithExpertConfig
-
- def __init__(self, config: PaliGemmaWithExpertConfig):
- super().__init__(config=config)
- self.config = config
- self.paligemma = PaliGemmaForConditionalGeneration(config=config.paligemma_config)
- self.gemma_expert = GemmaForCausalLM(config=config.gemma_expert_config)
- # Remove unused embed_tokens
- self.gemma_expert.model.embed_tokens = None
-
- self.to_bfloat16_like_physical_intelligence()
- self.set_requires_grad()
-
- def set_requires_grad(self):
- if self.config.freeze_vision_encoder:
- self.paligemma.vision_tower.eval()
- for params in self.paligemma.vision_tower.parameters():
- params.requires_grad = False
-
- if self.config.train_expert_only:
- self.paligemma.eval()
- for params in self.paligemma.parameters():
- params.requires_grad = False
-
- def train(self, mode: bool = True):
- super().train(mode)
-
- if self.config.freeze_vision_encoder:
- self.paligemma.vision_tower.eval()
-
- if self.config.train_expert_only:
- self.paligemma.eval()
-
- def to_bfloat16_like_physical_intelligence(self):
- self.paligemma = self.paligemma.to(dtype=torch.bfloat16)
-
- params_to_change_dtype = [
- "language_model.model.layers",
- "gemma_expert.model.layers",
- "vision_tower",
- "multi_modal",
- ]
- for name, param in self.named_parameters():
- if any(selector in name for selector in params_to_change_dtype):
- param.data = param.data.to(dtype=torch.bfloat16)
-
- def embed_image(self, image: torch.Tensor):
- return self.paligemma.get_image_features(image)
-
- def embed_language_tokens(self, tokens: torch.Tensor):
- return self.paligemma.language_model.model.embed_tokens(tokens)
-
- # TODO: break down this huge forward into modules or functions
- def forward(
- self,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_values: Optional[Union[List[torch.FloatTensor], Cache]] = None,
- inputs_embeds: List[torch.FloatTensor] = None,
- use_cache: Optional[bool] = None,
- fill_kv_cache: Optional[bool] = None,
- ):
- models = [self.paligemma.language_model.model, self.gemma_expert.model]
-
- for hidden_states in inputs_embeds:
- # TODO this is very inefficient
- # dtype is always the same, batch size too (if > 1 len)
- # device could be trickier in multi gpu edge cases but that's it
- if hidden_states is None:
- continue
- batch_size = hidden_states.shape[0]
-
- # RMSNorm
- num_layers = self.paligemma.config.text_config.num_hidden_layers
- head_dim = self.paligemma.config.text_config.head_dim
- for layer_idx in range(num_layers):
- query_states = []
- key_states = []
- value_states = []
- for i, hidden_states in enumerate(inputs_embeds):
- if hidden_states is None:
- continue
- layer = models[i].layers[layer_idx]
- # normalizer = torch.tensor(models[i].config.hidden_size**0.5, dtype=hidden_states.dtype)
- # hidden_states = hidden_states * normalizer
- hidden_states = layer.input_layernorm(hidden_states)
-
- input_shape = hidden_states.shape[:-1]
- hidden_shape = (*input_shape, -1, layer.self_attn.head_dim)
-
- hidden_states = hidden_states.to(dtype=torch.bfloat16)
- query_state = layer.self_attn.q_proj(hidden_states).view(hidden_shape)
- key_state = layer.self_attn.k_proj(hidden_states).view(hidden_shape)
- value_state = layer.self_attn.v_proj(hidden_states).view(hidden_shape)
-
- query_states.append(query_state)
- key_states.append(key_state)
- value_states.append(value_state)
-
- # B,L,H,D with L sequence length, H number of heads, D head dim
- # concatenate on the number of embeddings/tokens
- query_states = torch.cat(query_states, dim=1)
- key_states = torch.cat(key_states, dim=1)
- value_states = torch.cat(value_states, dim=1)
-
- query_states = apply_rope(query_states, position_ids)
- key_states = apply_rope(key_states, position_ids)
-
- if use_cache and past_key_values is None:
- past_key_values = {}
-
- if use_cache:
- if fill_kv_cache:
- past_key_values[layer_idx] = {
- "key_states": key_states,
- "value_states": value_states,
- }
- else:
- # TODO here, some optimization can be done - similar to a `StaticCache` we can declare the `max_len` before.
- # so we create an empty cache, with just one cuda malloc, and if (in autoregressive case) we reach
- # the max len, then we (for instance) double the cache size. This implementation already exists
- # in `transformers`. (molbap)
- key_states = torch.cat([past_key_values[layer_idx]["key_states"], key_states], dim=1)
- value_states = torch.cat(
- [past_key_values[layer_idx]["value_states"], value_states], dim=1
- )
-
- attention_interface = self.get_attention_interface()
- att_output = attention_interface(
- attention_mask, batch_size, head_dim, query_states, key_states, value_states
- )
- att_output = att_output.to(dtype=torch.bfloat16)
-
- # first part of att_output is prefix (up to sequence length, [:, 0:prefix_seq_len])
- outputs_embeds = []
- start = 0
- for i, hidden_states in enumerate(inputs_embeds):
- layer = models[i].layers[layer_idx]
-
- if hidden_states is not None:
- end = start + hidden_states.shape[1]
-
- if att_output.dtype != layer.self_attn.o_proj.weight.dtype:
- att_output = att_output.to(layer.self_attn.o_proj.weight.dtype)
- out_emb = layer.self_attn.o_proj(att_output[:, start:end])
-
- # TODO: first dropout (by default 0.0)
-
- # first residual
- out_emb += hidden_states
- after_first_residual = out_emb.clone()
-
- out_emb = layer.post_attention_layernorm(out_emb)
- out_emb = layer.mlp(out_emb)
-
- # TODO: second dropout (by default 0.0)
-
- # second residual
- out_emb += after_first_residual
-
- outputs_embeds.append(out_emb)
-
- start = end
- else:
- outputs_embeds.append(None)
-
- inputs_embeds = outputs_embeds
-
- # final norm
- outputs_embeds = []
- for i, hidden_states in enumerate(inputs_embeds):
- if hidden_states is not None:
- out_emb = models[i].norm(hidden_states)
- outputs_embeds.append(out_emb)
- else:
- outputs_embeds.append(None)
-
- return outputs_embeds, past_key_values
-
- def get_attention_interface(self):
- if self.config.attention_implementation == "fa2":
- attention_interface = self.flash_attention_forward
- elif self.config.attention_implementation == "flex":
- attention_interface = flex_attention_forward
- else:
- attention_interface = self.eager_attention_forward
- return attention_interface
-
- def flash_attention_forward(
- self, attention_mask, batch_size, head_dim, query_states, key_states, value_states
- ):
- raise NotImplementedError("FA2 is not implemented (yet)")
-
- def eager_attention_forward(
- self, attention_mask, batch_size, head_dim, query_states, key_states, value_states
- ):
- num_att_heads = self.config.paligemma_config.text_config.num_attention_heads
- num_key_value_heads = self.config.paligemma_config.text_config.num_key_value_heads
- num_key_value_groups = num_att_heads // num_key_value_heads
-
- # query_states: batch_size, sequence_length, num_att_head, head_dim
- # key_states: batch_size, sequence_length, num_key_value_head, head_dim
- # value_states: batch_size, sequence_length, num_key_value_head, head_dim
- sequence_length = key_states.shape[1]
-
- key_states = key_states[:, :, :, None, :].expand(
- batch_size, sequence_length, num_key_value_heads, num_key_value_groups, head_dim
- )
- key_states = key_states.reshape(
- batch_size, sequence_length, num_key_value_heads * num_key_value_groups, head_dim
- )
-
- value_states = value_states[:, :, :, None, :].expand(
- batch_size, sequence_length, num_key_value_heads, num_key_value_groups, head_dim
- )
- value_states = value_states.reshape(
- batch_size, sequence_length, num_key_value_heads * num_key_value_groups, head_dim
- )
-
- # Attention here is upcasted to float32 to match the original eager implementation.
-
- query_states = query_states.to(dtype=torch.float32)
- key_states = key_states.to(dtype=torch.float32)
-
- query_states = query_states.transpose(1, 2)
- key_states = key_states.transpose(1, 2)
-
- att_weights = torch.matmul(query_states, key_states.transpose(2, 3))
- att_weights *= head_dim**-0.5
- big_neg = -2.3819763e38 # See gemma/modules.py
-
- masked_att_weights = torch.where(attention_mask[:, None, :, :], att_weights, big_neg)
-
- probs = nn.functional.softmax(masked_att_weights, dim=-1)
- probs = probs.to(dtype=value_states.dtype)
-
- # probs: batch_size, num_att_heads, sequence_length, sequence_length
- # value_states: batch_size, sequence_length, num_att_heads, head_dim
-
- att_output = torch.matmul(probs, value_states.permute(0, 2, 1, 3))
-
- att_output = att_output.permute(0, 2, 1, 3)
- # we use -1 because sequence length can change
- att_output = att_output.reshape(batch_size, -1, num_key_value_heads * num_key_value_groups * head_dim)
-
- return att_output
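The removed `apply_rope` above follows the Gemma convention: the head dimension is split in half and the two halves are rotated by position-dependent angles whose timescales range from 1 up to `max_wavelength`. The sketch below reimplements the same rotation in a standalone form (function and variable names are illustrative) and checks two defining properties: position 0 is the identity, and the rotation preserves vector norms.

```python
import torch


def rope(x: torch.Tensor, positions: torch.Tensor, max_wavelength: float = 10_000.0) -> torch.Tensor:
    """Rotate x of shape (B, L, H, D) by position-dependent angles, Gemma-style."""
    d_half = x.shape[-1] // 2
    exponents = (2.0 / x.shape[-1]) * torch.arange(d_half, dtype=torch.float32)
    timescale = max_wavelength**exponents  # geometric timescales from 1 up to ~max_wavelength
    radians = positions[..., None].to(torch.float32) / timescale  # (B, L, d_half)
    sin = torch.sin(radians)[..., None, :]  # broadcast over the head axis -> (B, L, 1, d_half)
    cos = torch.cos(radians)[..., None, :]
    x1, x2 = x.to(torch.float32).split(d_half, dim=-1)
    out = torch.cat([x1 * cos - x2 * sin, x2 * cos + x1 * sin], dim=-1)
    return out.to(x.dtype)


q = torch.randn(1, 4, 8, 64)            # (batch, seq, heads, head_dim)
pos = torch.arange(4)[None, :]          # positions 0..3 for a batch of 1
out = rope(q, pos)
assert torch.allclose(out[:, 0], q[:, 0], atol=1e-6)                # position 0 is the identity
assert torch.allclose(out.norm(dim=-1), q.norm(dim=-1), atol=1e-4)  # rotations preserve norm
```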
diff --git a/lerobot/common/policies/pi0fast/configuration_pi0fast.py b/lerobot/common/policies/pi0fast/configuration_pi0fast.py
deleted file mode 100644
index 29c856e0..00000000
--- a/lerobot/common/policies/pi0fast/configuration_pi0fast.py
+++ /dev/null
@@ -1,136 +0,0 @@
-from dataclasses import dataclass, field
-
-from lerobot.common.optim.optimizers import AdamWConfig
-from lerobot.common.optim.schedulers import (
- CosineDecayWithWarmupSchedulerConfig,
-)
-from lerobot.configs.policies import PreTrainedConfig
-from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature
-
-
-@PreTrainedConfig.register_subclass("pi0fast")
-@dataclass
-class PI0FASTConfig(PreTrainedConfig):
- # Input / output structure.
- n_obs_steps: int = 1
- chunk_size: int = 10
- n_action_steps: int = 5
-
- normalization_mapping: dict[str, NormalizationMode] = field(
- default_factory=lambda: {
- "VISUAL": NormalizationMode.IDENTITY,
- "STATE": NormalizationMode.MEAN_STD,
- "ACTION": NormalizationMode.MEAN_STD,
- }
- )
-
- # Shorter state and action vectors will be padded
- max_state_dim: int = 32 # 32
- max_action_dim: int = 32 # 32
-
- # Image preprocessing
- resize_imgs_with_padding: tuple[int, int] = (224, 224)
- interpolate_like_pi: bool = False
-
- # Add empty images. Used by pi0_aloha_sim which adds the empty
- # left and right wrist cameras in addition to the top camera.
- empty_cameras: int = 0
-
- # Converts the joint and gripper values from the standard Aloha space to
- # the space used by the pi internal runtime which was used to train the base model.
- adapt_to_pi_aloha: bool = False
-
- # Converts joint dimensions to deltas with respect to the current state before passing to the model.
- # Gripper dimensions will remain in absolute values.
- use_delta_joint_actions_aloha: bool = False
-
- # Tokenizer
- tokenizer_max_length: int = 48
-
- # Projector
- proj_width: int = 1024
-
- # Decoding
- max_decoding_steps: int = 256
- fast_skip_tokens: int = 128 # Skip last 128 tokens in PaliGemma vocab since they are special tokens
- max_input_seq_len: int = 256 # 512
-
- # Utils
- use_cache: bool = True
-
- # Frozen parameters
- freeze_vision_encoder: bool = True
- freeze_lm_head: bool = True
-
- # Training presets
- optimizer_lr: float = 1e-4
- optimizer_betas: tuple[float, float] = (0.9, 0.95)
- optimizer_eps: float = 1e-8
- optimizer_weight_decay: float = 1e-5
-
- scheduler_warmup_steps: int = 1_000
- scheduler_decay_steps: int = 30_000
- scheduler_decay_lr: float = 2.5e-6
-
- checkpoint_path: str | None = None
-
- padding_side: str = "right"
-
- precision: str = "bfloat16"
- grad_clip_norm: float = 1
-
- # Allows padding/truncation of generated action tokens during detokenization to ensure decoding.
- # In the original version, tensors of 0s were generated if shapes didn't match for stable decoding.
- relaxed_action_decoding: bool = True
-
- def __post_init__(self):
- super().__post_init__()
-
- """Input validation (not exhaustive)."""
- if self.n_action_steps > self.chunk_size:
- raise ValueError(
- f"The chunk size is the upper bound for the number of action steps per model invocation. Got "
- f"{self.n_action_steps} for `n_action_steps` and {self.chunk_size} for `chunk_size`."
- )
- if self.n_obs_steps != 1:
- raise ValueError(
- f"Multiple observation steps not handled yet. Got `n_obs_steps={self.n_obs_steps}`"
- )
-
- def validate_features(self) -> None:
- for i in range(self.empty_cameras):
- key = f"observation.images.empty_camera_{i}"
- empty_camera = PolicyFeature(
- type=FeatureType.VISUAL,
- shape=(3, 480, 640),
- )
- self.input_features[key] = empty_camera
-
- def get_optimizer_preset(self) -> AdamWConfig:
- return AdamWConfig(
- lr=self.optimizer_lr,
- betas=self.optimizer_betas,
- eps=self.optimizer_eps,
- weight_decay=self.optimizer_weight_decay,
- grad_clip_norm=self.grad_clip_norm,
- )
-
- def get_scheduler_preset(self):
- return CosineDecayWithWarmupSchedulerConfig(
- peak_lr=self.optimizer_lr,
- decay_lr=self.scheduler_decay_lr,
- num_warmup_steps=self.scheduler_warmup_steps,
- num_decay_steps=self.scheduler_decay_steps,
- )
-
- @property
- def observation_delta_indices(self) -> None:
- return None
-
- @property
- def action_delta_indices(self) -> list:
- return list(range(self.chunk_size))
-
- @property
- def reward_delta_indices(self) -> None:
- return None
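In the removed `PI0FASTConfig`, `chunk_size` is the number of future actions the model predicts per call, while `n_action_steps` is how many of them are executed before the policy is queried again, which is why `__post_init__` enforces `n_action_steps <= chunk_size`. A small standalone sketch of that queueing pattern follows; `predict_chunk` is a dummy stand-in for the model, not an API from the removed file.

```python
from collections import deque

import torch

chunk_size, n_action_steps = 10, 5  # defaults from the config above


def predict_chunk() -> torch.Tensor:
    """Stand-in for the policy: returns `chunk_size` future actions of dimension 6."""
    return torch.randn(chunk_size, 6)


queue: deque[torch.Tensor] = deque(maxlen=n_action_steps)
model_calls = 0
for _ in range(20):  # 20 control steps
    if not queue:
        # Keep only the first n_action_steps of the predicted chunk, as in `select_action`.
        queue.extend(predict_chunk()[:n_action_steps])
        model_calls += 1
    action = queue.popleft()  # send `action` to the robot here, one per environment step

assert model_calls == 20 // n_action_steps  # the model is queried once every n_action_steps steps
```

With the defaults above (`chunk_size=10`, `n_action_steps=5`), the model is therefore invoked once every five environment steps.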
diff --git a/lerobot/common/policies/pi0fast/modeling_pi0fast.py b/lerobot/common/policies/pi0fast/modeling_pi0fast.py
deleted file mode 100644
index f19e8c83..00000000
--- a/lerobot/common/policies/pi0fast/modeling_pi0fast.py
+++ /dev/null
@@ -1,973 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 Physical Intelligence and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-π0+FAST: Efficient Action Tokenization for Vision-Language-Action Models
-
-[Paper](https://arxiv.org/abs/2501.09747)
-[Jax code](https://github.com/Physical-Intelligence/openpi)
-
-Designed by Physical Intelligence. Ported from Jax by Hugging Face.
-
-Example of finetuning the pi0+FAST pretrained model (`pi0_fast_base` in `openpi`):
-```bash
-python lerobot/scripts/train.py \
---policy.path=lerobot/pi0fast_base \
---dataset.repo_id=danaaubakirova/koch_test
-```
-
-Example of training the pi0+FAST neural network from scratch:
-```bash
-python lerobot/scripts/train.py \
---policy.type=pi0fast \
---dataset.repo_id=danaaubakirova/koch_test
-```
-
-Example of using the pi0+FAST pretrained model outside the LeRobot training framework:
-```python
-policy = PI0FASTPolicy.from_pretrained("lerobot/pi0fast_base")
-```
-
-"""
-
-from collections import deque
-from functools import partial
-
-import numpy as np
-import torch
-import torch.nn.functional as F # noqa: N812
-from PIL import Image
-from scipy.fft import idct
-from torch import Tensor, nn
-from transformers import AutoProcessor, AutoTokenizer, PaliGemmaForConditionalGeneration
-from transformers.cache_utils import HybridCache, StaticCache
-from transformers.models.auto import CONFIG_MAPPING
-
-from lerobot.common.constants import ACTION, OBS_ROBOT
-from lerobot.common.policies.normalize import Normalize, Unnormalize
-from lerobot.common.policies.pi0fast.configuration_pi0fast import PI0FASTConfig
-from lerobot.common.policies.pretrained import PreTrainedPolicy
-
-PRECISION = {
- "float16": torch.float16,
- "float32": torch.float32,
- "bfloat16": torch.bfloat16,
-}
-
-
-def normalize(x, min_val, max_val):
- return (x - min_val) / (max_val - min_val)
-
-
-def unnormalize(x, min_val, max_val):
- return x * (max_val - min_val) + min_val
-
-
-def safe_arcsin(value):
- # This ensures that the input stays within
- # [−1,1] to avoid invalid values for arcsin
- return torch.arcsin(torch.clamp(value, -1.0, 1.0))
-
-
-def aloha_gripper_to_angular(value):
- # Aloha transforms the gripper positions into a linear space. The following code
- # reverses this transformation to be consistent with pi0 which is pretrained in
- # angular space.
- #
- # These values are coming from the Aloha code:
- # PUPPET_GRIPPER_POSITION_OPEN, PUPPET_GRIPPER_POSITION_CLOSED
- value = unnormalize(value, min_val=0.01844, max_val=0.05800)
-
- # This is the inverse of the angular to linear transformation inside the Interbotix code.
- def linear_to_radian(linear_position, arm_length, horn_radius):
- value = (horn_radius**2 + linear_position**2 - arm_length**2) / (2 * horn_radius * linear_position)
- return safe_arcsin(value)
-
- # The constants are taken from the Interbotix code.
- value = linear_to_radian(value, arm_length=0.036, horn_radius=0.022)
-
- # Normalize to [0, 1].
- # The values 0.4 and 1.5 were measured on an actual Trossen robot.
- return normalize(value, min_val=0.4, max_val=1.5)
-
-
-def aloha_gripper_from_angular(value):
- # Convert from the gripper position used by pi0 to the gripper position that is used by Aloha.
- # Note that the units are still angular but the range is different.
-
- # The values 0.4 and 1.5 were measured on an actual Trossen robot.
- value = unnormalize(value, min_val=0.4, max_val=1.5)
-
- # These values are coming from the Aloha code:
- # PUPPET_GRIPPER_JOINT_OPEN, PUPPET_GRIPPER_JOINT_CLOSE
- return normalize(value, min_val=-0.6213, max_val=1.4910)
-
-
-def aloha_gripper_from_angular_inv(value):
- # Directly inverts the gripper_from_angular function.
- value = unnormalize(value, min_val=-0.6213, max_val=1.4910)
- return normalize(value, min_val=0.4, max_val=1.5)
-
-
-class PI0FASTPolicy(PreTrainedPolicy):
- """Wrapper class around PI0FAST tokenizer and model to train and run inference within LeRobot."""
-
- config_class = PI0FASTConfig
- name = "pi0fast"
-
- def __init__(
- self,
- config: PI0FASTConfig,
- dataset_stats: dict[str, dict[str, Tensor]] | None = None,
- ):
- """
- Args:
- config: Policy configuration class instance or None, in which case the default instantiation of
- the configuration class is used.
- dataset_stats: Dataset statistics to be used for normalization. If not passed here, it is expected
- that they will be passed with a call to `load_state_dict` before the policy is used.
- """
-
- super().__init__(config)
- config.validate_features()
- self.config = config
-
- self.normalize_inputs = Normalize(config.input_features, config.normalization_mapping, dataset_stats)
- self.normalize_targets = Normalize(
- config.output_features, config.normalization_mapping, dataset_stats
- )
- self.unnormalize_outputs = Unnormalize(
- config.output_features, config.normalization_mapping, dataset_stats
- )
-
- self.language_tokenizer = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224")
- self.model = PI0FAST(config)
-
- self.reset()
-
- def reset(self):
- """This should be called whenever the environment is reset."""
- self._action_queue = deque([], maxlen=self.config.n_action_steps)
-
- def get_optim_params(self) -> dict:
- return self.parameters()
-
- def _pi_aloha_decode_state(self, state):
- # Flip the joints.
- for motor_idx in [1, 2, 8, 9]:
- state[:, motor_idx] *= -1
- # Reverse the gripper transformation that is being applied by the Aloha runtime.
- for motor_idx in [6, 13]:
- state[:, motor_idx] = aloha_gripper_to_angular(state[:, motor_idx])
- return state
-
- def _pi_aloha_encode_actions(self, actions):
- # Flip the joints.
- for motor_idx in [1, 2, 8, 9]:
- actions[:, :, motor_idx] *= -1
- # Reverse the gripper transformation that is being applied by the Aloha runtime.
- for motor_idx in [6, 13]:
- actions[:, :, motor_idx] = aloha_gripper_from_angular(actions[:, :, motor_idx])
- return actions
-
- def _pi_aloha_encode_actions_inv(self, actions):
- # Flip the joints again.
- for motor_idx in [1, 2, 8, 9]:
- actions[:, :, motor_idx] *= -1
- # Reverse the gripper transformation that is being applied by the Aloha runtime.
- for motor_idx in [6, 13]:
- actions[:, :, motor_idx] = aloha_gripper_from_angular_inv(actions[:, :, motor_idx])
- return actions
-
- @torch.no_grad
- def select_action(self, batch: dict[str, Tensor]) -> Tensor:
- """Select a single action given environment observations.
-
- This method wraps `select_actions` in order to return one action at a time for execution in the
- environment. It works by managing the actions in a queue and only calling `select_actions` when the
- queue is empty.
- """
- self.eval()
-
- if self.config.adapt_to_pi_aloha:
- batch[OBS_ROBOT] = self._pi_aloha_decode_state(batch[OBS_ROBOT])
-
- batch = self.normalize_inputs(batch)
-
- # Action queue logic for n_action_steps > 1. When the action_queue is depleted, populate it by
- # querying the policy.
- if len(self._action_queue) == 0:
- actions = self.model.generate_actions(batch)
-
- actions = actions[:, : self.config.n_action_steps]
-
- original_action_dim = self.config.action_feature.shape[0]
- actions = actions[:, :, :original_action_dim]
-
- actions = self.unnormalize_outputs({"action": actions})["action"]
-
- if self.config.adapt_to_pi_aloha:
- actions = self._pi_aloha_encode_actions(actions)
-
- # `self.model.generate_actions` returns a (batch_size, n_action_steps, action_dim) tensor, but the queue
- # effectively has shape (n_action_steps, batch_size, *), hence the transpose.
- self._action_queue.extend(actions.transpose(0, 1))
- return self._action_queue.popleft()
-
- def forward(self, batch: dict[str, Tensor]) -> dict[str, Tensor]:
- if self.config.adapt_to_pi_aloha:
- batch[OBS_ROBOT] = self._pi_aloha_decode_state(batch[OBS_ROBOT])
- batch[ACTION] = self._pi_aloha_encode_actions_inv(batch[ACTION])
- batch = self.normalize_inputs(batch)
- batch = self.normalize_targets(batch)
- loss_dict = self.model.forward(batch)
- return loss_dict["loss"], loss_dict
-
-
-def block_causal_update_causal_mask(
- attention_mask,
- token_type_ids=None,
- past_key_values=None,
- cache_position=None,
- input_tensor=None,
- attn_implementation: str = "eager",
- dtype: torch.dtype = torch.float32,
-):
- """
- Update the causal mask during training and generation; it can be customized for different attention-mask layouts.
- """
- if attn_implementation == "flash_attention_2":
- if attention_mask is not None and 0.0 in attention_mask:
- return attention_mask
- return None
- using_static_cache = isinstance(past_key_values, StaticCache)
- min_dtype = torch.finfo(dtype).min
-
- if input_tensor is None:
- input_tensor = attention_mask
-
- inputs_lead_dim, sequence_length = input_tensor.shape[:2]
-
- if using_static_cache or isinstance(past_key_values, HybridCache):
- target_length = past_key_values.get_max_cache_shape()
- else:
- target_length = (
- attention_mask.shape[-1]
- if isinstance(attention_mask, torch.Tensor)
- else cache_position[0] + sequence_length + 1
- )
-
- # Handle precomputed attention masks
- if attention_mask is not None and attention_mask.dim() == 4:
- return attention_mask
-
- # Causal mask initialization
- causal_mask = torch.full(
- (sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=cache_position.device
- )
-
- # Standard causal masking (triu ensures tokens can only attend to past)
- if sequence_length != 1:
- causal_mask = torch.triu(causal_mask, diagonal=1)
-
- # Apply block causal mask
- if token_type_ids is not None:
- token_type_ids = token_type_ids.to(causal_mask.device).bool()
- cumsum = torch.cumsum(token_type_ids, dim=1)
- block_causal_mask = cumsum[:, None, :] <= cumsum[:, :, None]
-
- # Combine causal_mask with block-wise attention mask
- causal_mask = torch.where(block_causal_mask, 0.0, causal_mask)
- causal_mask = causal_mask[:, None, :, :]
- else:
- # Apply past cache position constraint
- causal_mask *= torch.arange(target_length, device=cache_position.device) > cache_position.reshape(
- -1, 1
- )
- causal_mask = causal_mask[None, None, :, :].expand(inputs_lead_dim, 1, -1, -1)
- else:
- # Apply past cache position constraint
- causal_mask *= torch.arange(target_length, device=cache_position.device) > cache_position.reshape(
- -1, 1
- )
- causal_mask = causal_mask[None, None, :, :].expand(inputs_lead_dim, 1, -1, -1)
-
- if attention_mask is not None:
- causal_mask = causal_mask.clone() # Copy to contiguous memory for in-place edits
- mask_length = attention_mask.shape[-1]
-
- # Apply padding mask
- padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :].to(
- causal_mask.device
- )
- padding_mask = padding_mask == 0
- causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
- padding_mask, min_dtype
- )
-
- return causal_mask
-
-
-def prepare_inputs_for_generation(
- # self,
- input_ids,
- past_key_values=None,
- inputs_embeds=None,
- cache_position=None,
- position_ids=None,
- pixel_values=None,
- attention_mask=None,
- token_type_ids=None,
- use_cache=True,
- num_logits_to_keep=None,
- labels=None,
- self=None,
- **kwargs,
-):
- # create block causal attention
- if cache_position[0] > 0 and input_ids.shape[1] > 0:
- input_tensor = input_ids[:, -1:]
- new_positions = (
- torch.ones(
- (position_ids.shape[0], input_ids.shape[1]),
- dtype=position_ids.dtype,
- device=position_ids.device,
- ).cumsum(-1)
- + position_ids[:, -1:]
- )
- position_ids = torch.cat([position_ids, new_positions], dim=-1)
- else:
- input_tensor = inputs_embeds
- attention_mask = block_causal_update_causal_mask(
- attention_mask=attention_mask,
- past_key_values=past_key_values,
- cache_position=cache_position,
- input_tensor=input_tensor,
- token_type_ids=token_type_ids,
- dtype=self.dtype,
- attn_implementation=self.config.text_config._attn_implementation,
- )
- # Overwritten -- custom `position_ids` and `pixel_values` handling
- model_inputs = self.language_model.prepare_inputs_for_generation(
- input_ids,
- past_key_values=past_key_values,
- inputs_embeds=inputs_embeds,
- attention_mask=attention_mask,
- position_ids=position_ids,
- cache_position=cache_position,
- use_cache=use_cache,
- num_logits_to_keep=num_logits_to_keep,
- token_type_ids=token_type_ids,
- **kwargs,
- )
-
- # Position_ids in Paligemma are 1-indexed
- if model_inputs.get("position_ids") is not None:
- model_inputs["position_ids"] += 1
- # If we're in cached decoding stage, pixel values should be None because input ids do not contain special image token anymore
- # Otherwise we need pixel values to be passed to model. NOTE: use_cache=False needs pixel_values always
- if cache_position[0] == 0:
- model_inputs["pixel_values"] = pixel_values
- is_training = token_type_ids is not None and labels is not None
- if cache_position[0] == 0 and isinstance(past_key_values, HybridCache):
- input_tensor = inputs_embeds if inputs_embeds is not None else input_ids
- causal_mask = self._update_causal_mask(
- attention_mask, token_type_ids, past_key_values, cache_position, input_tensor, is_training
- )
- model_inputs["attention_mask"] = causal_mask
-
- return model_inputs
-
-
-class PI0FAST(nn.Module):
- def __init__(self, config: PI0FASTConfig):
- super().__init__()
- self.config = config
-
- # TODO: move tokenizers in Policy
- fast_tokenizer_path = "physical-intelligence/fast"
- pi0_paligemma_path = "google/paligemma-3b-pt-224"
- self.paligemma_tokenizer = AutoTokenizer.from_pretrained(pi0_paligemma_path)
- self.processor = AutoProcessor.from_pretrained(pi0_paligemma_path)
- self.fast_tokenizer = AutoProcessor.from_pretrained(fast_tokenizer_path, trust_remote_code=True)
- self.fast_skip_tokens = self.config.fast_skip_tokens
- self.max_input_seq_len = self.config.max_input_seq_len
- self.action_horizon = self.config.chunk_size
- self.action_dim = self.config.action_feature.shape[0]
- precision = config.precision
- torch_precision = PRECISION.get(precision, torch.float32)
- self.pad_token_id = (
- self.paligemma_tokenizer.pad_token_id
- if hasattr(self.paligemma_tokenizer, "pad_token_id")
- else self.paligemma_tokenizer.eos_token_id
- )
-
- paligemma_config = CONFIG_MAPPING["paligemma"](
- transformers_version="4.48.1",
- _vocab_size=257152,
- bos_token_id=2,
- eos_token_id=1,
- hidden_size=2048,
- image_token_index=257152,
- model_type="paligemma",
- pad_token_id=0,
- projection_dim=2048,
- text_config={
- "hidden_activation": "gelu_pytorch_tanh",
- "hidden_size": 2048,
- "intermediate_size": 16384,
- "model_type": "gemma",
- "num_attention_heads": 8,
- "num_hidden_layers": 18,
- "num_image_tokens": 256,
- "num_key_value_heads": 1,
- "torch_dtype": precision,
- "vocab_size": 257152,
- "_attn_implementation": "eager",
- },
- vision_config={
- "hidden_size": 1152,
- "intermediate_size": 4304,
- "model_type": "siglip_vision_model",
- "num_attention_heads": 16,
- "num_hidden_layers": 27,
- "num_image_tokens": 256,
- "patch_size": 14,
- "projection_dim": 2048,
- "projector_hidden_act": "gelu_pytorch_tanh",
- "torch_dtype": precision,
- "vision_use_head": False,
- },
- )
- self.pi0_paligemma = PaliGemmaForConditionalGeneration(config=paligemma_config)
-
- self.pi0_paligemma.prepare_inputs_for_generation = partial(
- prepare_inputs_for_generation, self=self.pi0_paligemma
- )
- # change important stuff in bf16
- params_to_change_dtype = [
- "language_model",
- "vision_tower",
- "multi_modal",
- ]
- for name, param in self.pi0_paligemma.named_parameters():
- if any(selector in name for selector in params_to_change_dtype):
- param.data = param.data.to(dtype=torch_precision)
- self.set_requires_grad()
- self.image_keys = self.config.image_features.keys()
- self.ignore_index = self.pi0_paligemma.config.ignore_index
- self.padding_side = self.config.padding_side
-
- def set_requires_grad(self):
- if self.config.freeze_vision_encoder:
- self.pi0_paligemma.vision_tower.eval()
- for params in self.pi0_paligemma.vision_tower.parameters():
- params.requires_grad = False
- # To avoid unused params issue with distributed training
- if self.config.freeze_lm_head:
- for name, params in self.pi0_paligemma.named_parameters():
- if "embed_tokens" in name: # lm heads and embedding layer are tied
- params.requires_grad = False
-
- def embed_tokens(self, tokens: torch.Tensor):
- return self.pi0_paligemma.language_model.model.embed_tokens(tokens)
-
- def prepare_inputs_for_generation(self, *args, **kwargs):
- return self.pi0_paligemma.prepare_inputs_for_generation(*args, **kwargs)
-
- def prepare_images(self, batch):
- """Preprocess LeRobot batch into Pi0 inputs"""
- images = []
- img_masks = []
- present_img_keys = [key for key in self.image_keys if key in batch]
- if len(present_img_keys) == 0:
- raise ValueError(
- f"All image features are missing from the batch. At least one expected. (batch: {batch.keys()}) (image_features:{self.config.image_features})"
- )
-
- # Preprocess image features present in the batch
- num_empty_cameras = 0
- for key in self.image_keys:
- if key in present_img_keys:
- img = batch[key]
-
- if self.config.resize_imgs_with_padding is not None:
- img = resize_with_pad(
- img,
- *self.config.resize_imgs_with_padding,
- pad_value=0,
- interpolate_like_pi=self.config.interpolate_like_pi,
- )
-
- # Normalize from range [0,1] to [-1,1] as expected by siglip
- img = img * 2.0 - 1.0
-
- bsize = img.shape[0]
- device = img.device
- mask = torch.ones(bsize, dtype=torch.bool, device=device)
- else:
- if num_empty_cameras >= self.config.empty_cameras:
- continue
- img = torch.ones_like(img) * -1
- bsize = img.shape[0]
- device = img.device
- mask = torch.ones(bsize, dtype=torch.bool, device=device)
- num_empty_cameras += 1
-
- images.append(img)
- img_masks.append(mask)
- return images, img_masks
-
- def normalize_actions(self, actions: torch.Tensor) -> torch.Tensor:
- mins = actions.amin(dim=(1, 2), keepdim=True) # [0]
- maxs = actions.amax(dim=(1, 2), keepdim=True) # [0]
- return 2 * (actions - mins) / (maxs - mins + 1e-8) - 1
-
- def _act_tokens_to_paligemma_tokens(self, tokens: torch.Tensor) -> torch.Tensor:
- out = self.paligemma_tokenizer.vocab_size - 1 - self.fast_skip_tokens - tokens
- return out
-
- def fast_tokenizer_wrapper(self, actions_norm):
- """
- A wrapper around self.fast_tokenizer that ensures batch processing, converts
- the result to PyTorch tensors, and returns a padded dictionary of input ids and attention masks.
- """
- batch_tokens = self.fast_tokenizer(actions_norm)
- fast_out = self.processor.tokenizer.pad({"input_ids": batch_tokens}, return_tensors="pt")
-
- return fast_out
-
- def create_token_type_ids(self, padded_mask: torch.Tensor, prefix_len: int) -> torch.Tensor:
- token_type_ids = torch.zeros_like(padded_mask, dtype=torch.bool)
- # Compute cumulative sum mask
- cumsum_mask = (padded_mask != 0).cumsum(dim=1)
- # Suffix block (everything after prefix_len)
- suffix_mask = cumsum_mask > prefix_len
- token_type_ids = suffix_mask
- return token_type_ids
-
- def create_input_tokens(self, state, lang_text, actions=None):
- bsize = state.shape[0]
- device = state.device
- bins = torch.linspace(-1, 1, 256 + 1, device=device)[:-1]
- discretized = torch.bucketize(state, bins) - 1
- discretized = discretized[:, :32]
-
- prefix_texts = []
- state_text = []
- for txt, disc in zip(lang_text, discretized, strict=False):
- cleaned = txt.lower().strip().replace("_", " ")
- state_str = " ".join(str(val.item()) for val in disc)
- prefix_texts.append(f"Task: {cleaned}, State: {state_str};\n")
- state_text.append(f"State: {state_str};\n")
-
- prefix_out = self.paligemma_tokenizer(
- prefix_texts, add_special_tokens=True, return_tensors="pt", padding="longest", truncation=False
- )
- prefix_ids = prefix_out["input_ids"].to(device)
- prefix_mask = prefix_out["attention_mask"].to(device)
- prefix_lens = prefix_mask.sum(dim=1)[:, None].cpu()
-
- if actions is not None:
- actions_norm = self.normalize_actions(actions)
- actions_pad = F.pad(
- actions_norm, (0, max(0, self.config.max_action_dim - actions_norm.shape[2])), value=0
- )[:, :, : self.config.max_action_dim]
- fast_out = self.fast_tokenizer_wrapper(
- actions_pad.cpu(),
- )
- act_ids = fast_out["input_ids"]
- act_mask = fast_out["attention_mask"].to(device)
-
- act_ids = self._act_tokens_to_paligemma_tokens(act_ids).to(device)
- # Replace action with 0 to pad tokens
- act_ids = torch.where(
- act_ids == self.paligemma_tokenizer.vocab_size - 1 - self.fast_skip_tokens,
- self.pad_token_id,
- act_ids,
- )
-
- eos_token = torch.tensor(
- [self.paligemma_tokenizer.eos_token_id], dtype=torch.long, device=device
- ).expand(bsize, -1)
- eos_mask = torch.tensor([1], dtype=torch.long, device=device).expand(bsize, -1)
- bos = self.paligemma_tokenizer("Action: ", add_special_tokens=False, return_tensors="pt")
- bos_token = bos["input_ids"].expand(act_ids.shape[0], -1).to(device)
- bos_mask = bos["attention_mask"].expand(act_ids.shape[0], -1).to(device)
- act_ids = torch.cat([bos_token, act_ids, eos_token], dim=1)
- act_mask = torch.cat([bos_mask, act_mask, eos_mask], dim=1)
- act_mask = act_mask.to(device)
- else:
- act_ids = torch.empty(bsize, 0, dtype=torch.long, device=device) # no action tokens at inference time
- act_mask = torch.empty(bsize, 0, dtype=torch.long, device=device)
- final_ids = torch.cat([prefix_ids, act_ids], dim=1)
-
- final_mask = torch.cat([prefix_mask, act_mask], dim=1)
- batch_inputs = {"input_ids": final_ids.tolist(), "attention_mask": final_mask.tolist()}
-
- # Use tokenizer pad function
- padded_output = self.paligemma_tokenizer.pad(
- batch_inputs, padding="longest", max_length=180, return_tensors="pt"
- )
- padded_mask = padded_output["attention_mask"]
-
- # define tensor of padding lengths
- att_mask = (padded_mask != 0).cumsum(dim=1) > prefix_lens
-
- token_type_ids = self.create_token_type_ids(padded_mask=padded_mask, prefix_len=prefix_lens)
-
- padded_output["padded_mask"] = padded_output.pop("attention_mask")
- padded_output["attention_mask"] = att_mask
- # loss is computed not on prefix, and not on padding
- padded_output["loss_mask"] = att_mask & padded_output["padded_mask"]
- padded_output["token_type_ids"] = token_type_ids
- return padded_output
-
- def shift_padding_side(
- self,
- tokens: torch.Tensor,
- ar_mask: torch.Tensor,
- padding_mask: torch.Tensor,
- loss_mask: torch.Tensor,
- targets: torch.Tensor,
- token_type_ids: torch.Tensor,
- padding_side: str = "right",
- ) -> tuple[torch.Tensor]:
- if padding_side not in ["right", "left"]:
- return tokens, ar_mask, padding_mask, loss_mask, targets, token_type_ids
-
- new_tokens = torch.empty_like(tokens)
- new_ar_masks = torch.empty_like(ar_mask)
- new_padding_mask = torch.empty_like(padding_mask)
- new_loss_mask = torch.empty_like(loss_mask)
- new_targets = torch.empty_like(targets)
- new_token_type_ids = torch.empty_like(token_type_ids)
- batch_size = tokens.shape[0]
- for i in range(batch_size):
- padding_indices = torch.where(padding_mask[i] == 0)[0]
- non_padding_indices = torch.where(padding_mask[i] == 1)[0]
- if padding_side == "left":
- new_indices = torch.cat((padding_indices, non_padding_indices), dim=0)
- else:
- new_indices = torch.cat((non_padding_indices, padding_indices), dim=0)
- new_tokens[i] = tokens[i].index_select(0, new_indices)
- new_ar_masks[i] = ar_mask[i].index_select(0, new_indices)
- new_padding_mask[i] = padding_mask[i].index_select(0, new_indices)
- new_loss_mask[i] = loss_mask[i].index_select(0, new_indices)
- new_targets[i] = targets[i].index_select(0, new_indices)
- new_token_type_ids[i] = token_type_ids[i].index_select(0, new_indices)
-
- return new_tokens, new_ar_masks, new_padding_mask, new_loss_mask, new_targets, new_token_type_ids
-
- def forward(self, batch: dict[str, Tensor]):
- device = batch[OBS_ROBOT].device
- # TODO: keep like this or move to the policy .forward
- images, img_masks = self.prepare_images(batch)
-
- padded_outs = self.create_input_tokens(
- state=batch[OBS_ROBOT],
- lang_text=batch["task"],
- actions=batch[ACTION],
- )
-
- embs, pad_masks, _, targets, loss_mask, token_type_ids = self.embed_inputs(
- images,
- img_masks,
- padded_outs["input_ids"],
- padded_outs["padded_mask"],
- padded_outs["attention_mask"],
- padded_outs["loss_mask"],
- padded_outs["token_type_ids"],
- padding_side=self.padding_side,
- )
- position_ids = torch.cumsum(pad_masks, dim=1) - 1
- token_type_ids = token_type_ids.to(dtype=torch.int64)
- past_seen_tokens = 0
- cache_position = torch.arange(past_seen_tokens, past_seen_tokens + embs.shape[1], device=embs.device)
- pad_masks = block_causal_update_causal_mask(
- attention_mask=pad_masks,
- past_key_values=None,
- cache_position=cache_position,
- input_tensor=embs,
- token_type_ids=token_type_ids,
- dtype=self.pi0_paligemma.dtype,
- attn_implementation=self.pi0_paligemma.config.text_config._attn_implementation,
- )
- outputs = self.pi0_paligemma.forward(
- input_ids=None,
- token_type_ids=None,
- attention_mask=pad_masks,
- position_ids=position_ids,
- past_key_values=None,
- inputs_embeds=embs,
- use_cache=False,
- labels=None,
- )
-
- logits = outputs.logits
-
- loss_fct = nn.CrossEntropyLoss(reduction="none")
-
- # Shift left for next-step prediction
- logits = logits[:, :-1, :]
- targets = targets[:, 1:].to(device) # Shift targets
- loss_mask = loss_mask[:, 1:].to(device) # Ensure correct shape
-
- # Compute per-token loss
- token_loss = loss_fct(logits.reshape(-1, logits.shape[-1]), targets.reshape(-1))
-
- # Apply loss mask
- token_loss = token_loss * loss_mask.reshape(-1)
-
- # Compute final loss
- loss = token_loss.sum() / torch.clamp(loss_mask.sum(), min=1)
-
- # Return loss dictionary
- loss_dict = {"ce_loss": loss.item(), "loss": loss}
- return loss_dict
-
- def decode_actions_with_fast(
- self,
- tokens: list[list[int]],
- *,
- time_horizon: int | None = None,
- action_dim: int | None = None,
- relaxed_decoding: bool = True,
- ) -> np.ndarray:
- """
- Adapts the original FAST decoding so that actions are always returned (by padding or truncating the decoded coefficients) instead of zeros.
- """
- self.time_horizon = (
- time_horizon or self.fast_tokenizer.time_horizon or self.fast_tokenizer.called_time_horizon
- )
- self.action_dim = (
- action_dim or self.fast_tokenizer.action_dim or self.fast_tokenizer.called_action_dim
- )
-
- # Cache the time horizon and action dimension for the next call
- self.called_time_horizon = self.time_horizon
- self.called_action_dim = self.action_dim
-
- assert self.time_horizon is not None and self.action_dim is not None, (
- "Tokenizer not initialized, call encode() once or pass in time_horizon and action_dim."
- )
-
- decoded_actions = []
- for token in tokens:
- try:
- decoded_tokens = self.fast_tokenizer.bpe_tokenizer.decode(token)
- decoded_dct_coeff = np.array(list(map(ord, decoded_tokens))) + self.fast_tokenizer.min_token
- if relaxed_decoding:
- # Expected sequence length
- expected_seq_len = self.time_horizon * self.action_dim
- diff = expected_seq_len - decoded_dct_coeff.shape[0]
- # Apply truncation if too long
- if diff < 0:
- decoded_dct_coeff = decoded_dct_coeff[:expected_seq_len] # Truncate on the right
- # Apply padding if too short
- elif diff > 0:
- decoded_dct_coeff = np.pad(
- decoded_dct_coeff, (0, diff), mode="constant", constant_values=0
- )
-
- decoded_dct_coeff = decoded_dct_coeff.reshape(-1, self.action_dim)
- assert decoded_dct_coeff.shape == (
- self.time_horizon,
- self.action_dim,
- ), (
- f"Decoded DCT coefficients have shape {decoded_dct_coeff.shape}, expected ({self.time_horizon}, {self.action_dim})"
- )
- except Exception as e:
- print(f"Error decoding tokens: {e}")
- print(f"Tokens: {token}")
- decoded_dct_coeff = np.zeros((self.time_horizon, self.action_dim))
- decoded_actions.append(idct(decoded_dct_coeff / self.fast_tokenizer.scale, axis=0, norm="ortho"))
- return np.stack(decoded_actions)
-
- def extract_actions(self, tokens: torch.Tensor, action_horizon: int, action_dim: int) -> torch.Tensor:
- """
- Extracts actions from predicted output tokens using the FAST model.
-
- Args:
- tokens (torch.Tensor): The input tensor of tokenized outputs.
- action_horizon (int): The number of timesteps for actions.
- action_dim (int): The dimensionality of each action.
-
- Returns:
- torch.Tensor: The extracted actions as a tensor of shape (action_horizon, action_dim).
- """
- # Decode predicted output tokens
- decoded_tokens = self.paligemma_tokenizer.batch_decode(tokens, skip_special_tokens=True)
- cleaned_tokens = [
- tokens_sequence.replace("Action:", "").replace(":", "").strip().split("|")[0].strip()
- for tokens_sequence in decoded_tokens
- ]
- raw_action_tokens = [
- self.processor.tokenizer.encode(sample_tokens, return_tensors="pt", padding=False)
- for sample_tokens in cleaned_tokens
- ]
- action_tokens = [
- self._act_tokens_to_paligemma_tokens(raw_action_token) for raw_action_token in raw_action_tokens
- ]
- # returns the tensor of decoded actions per sample in a list
- decoded_actions = [
- torch.tensor(
- self.decode_actions_with_fast(
- tok.tolist(),
- time_horizon=action_horizon,
- action_dim=action_dim,
- relaxed_decoding=self.config.relaxed_action_decoding,
- ),
- device=tokens.device,
- ).squeeze(0)
- for tok in action_tokens
- ]
-
- return torch.stack(
- decoded_actions,
- dim=0,
- )
-
- def generate_actions(self, batch: dict[str, Tensor]):
- # TODO: keep like this or move to the policy .forward
- images, img_masks = self.prepare_images(batch)
-
- padded_outs = self.create_input_tokens(state=batch[OBS_ROBOT], lang_text=batch["task"], actions=None)
- embs, pad_masks, att_masks2, targets, loss_mask, token_type_ids = self.embed_inputs(
- images,
- img_masks,
- padded_outs["input_ids"],
- padded_outs["padded_mask"],
- padded_outs["attention_mask"],
- padded_outs["loss_mask"],
- padded_outs["token_type_ids"],
- padding_side="left",
- )
- token_type_ids = token_type_ids.to(dtype=torch.int64)
- prefix_position_ids = torch.cumsum(pad_masks, dim=1) - 1
- output_tokens = self.pi0_paligemma.generate(
- input_ids=None,
- attention_mask=pad_masks,
- position_ids=prefix_position_ids,
- past_key_values=None,
- inputs_embeds=embs,
- use_cache=self.config.use_cache,
- max_new_tokens=self.config.max_decoding_steps,
- do_sample=False,
- num_beams=1,
- token_type_ids=token_type_ids,
- )
- actions = self.extract_actions(output_tokens, self.action_horizon, self.action_dim)
- return actions
-
- def embed_image(self, image: torch.Tensor):
- return self.pi0_paligemma.get_image_features(image)
-
- def embed_inputs(
- self,
- images,
- img_masks,
- tokens,
- pad_mask,
- ar_mask,
- loss_mask,
- token_type_ids,
- padding_side: str = "right",
- ):
- # TODO: avoid list in python and torch.cat ; prefer pre-allocation with torch.empty
- # images are a list of same size
- # vectorizing everything!
- device = images[0].device
- image_embedding_dim = images[0].shape[-1] # TODO should be from self.config
- all_images = torch.stack(images, dim=1).to(device)
- b, n, c, h, w = all_images.shape
- all_images = all_images.view(b * n, c, h, w)
- embedded = self.embed_image(all_images).to(device)
- b_n, p, image_embedding_dim = embedded.shape # Extract current dimensions
- m = b_n // b # Compute the number of images per sample dynamically
-
- # Reshape dynamically
- embedded = embedded.view(b, m, p, image_embedding_dim)
- tokens_embs = self.embed_tokens(tokens.to(device))
-
- img_masks = torch.stack(img_masks, dim=1).unsqueeze(-1).to(device)
- num_img_emb = embedded.shape[2]
- img_pad_masks = img_masks.repeat(1, 1, num_img_emb).view(b, -1)
- img_att_masks = torch.zeros((b, n, num_img_emb), dtype=torch.long, device=device).reshape(b, -1)
-
- image_target_tokens = (
- torch.ones((b, n, num_img_emb), dtype=torch.long, device=device) * self.pad_token_id
- ).reshape(b, -1)
- image_loss_mask = torch.zeros((b, n, num_img_emb), dtype=torch.long, device=device).reshape(b, -1)
-
- embedded = embedded.reshape(b, n * num_img_emb, image_embedding_dim) # Shape: (B, N*P, D)
-
- embs = torch.cat([embedded, tokens_embs], dim=1).to(device)
- pad_masks = torch.cat([img_pad_masks, pad_mask.to(device)], dim=1)
- att_masks = torch.cat([img_att_masks, ar_mask.to(device)], dim=1)
- loss_masks = torch.cat([image_loss_mask, loss_mask.to(device)], dim=1)
- targets = torch.cat([image_target_tokens, tokens.to(device)], dim=1)
- token_type_ids = torch.cat([img_att_masks, token_type_ids.to(device)], dim=1)
-
- # Shift pad tokens to the left (.generate()) or right (.train())
- embs, att_masks, pad_masks, loss_masks, targets, token_type_ids = self.shift_padding_side(
- embs, att_masks, pad_masks, loss_masks, targets, token_type_ids, padding_side=padding_side
- )
-
- targets = torch.where(targets == self.pad_token_id, self.ignore_index, targets)
- return embs, pad_masks, att_masks, targets, loss_masks, token_type_ids
-
-
-def resize_with_pad(img, width, height, pad_value=0, interpolate_like_pi=True):
- # resize while preserving aspect ratio, then pad; effectively a no-op when the image already matches (height, width)
- if img.ndim != 4:
- raise ValueError(f"(b,c,h,w) expected, but {img.shape}")
-
- cur_height, cur_width = img.shape[2:]
-
- ratio = max(cur_width / width, cur_height / height)
- resized_height = int(cur_height / ratio)
- resized_width = int(cur_width / ratio)
-
- if interpolate_like_pi:
- img = (img * 255.0).to(dtype=torch.uint8)
- img = img.permute(0, 2, 3, 1)
- original_device = img.device
- img = img.to(device="cpu").numpy()
- imgs = []
- for sub_img in img:
- sub_img = Image.fromarray(sub_img)
- resized_img = sub_img.resize((resized_width, resized_height), resample=2)
- resized_img = torch.from_numpy(np.array(resized_img))
- imgs.append(resized_img)
- img = torch.stack(imgs, dim=0)
- img = img.permute(0, 3, 1, 2)
- resized_img = img.to(device=original_device, dtype=torch.float32) / 255.0
- else:
- resized_img = F.interpolate(
- img, size=(resized_height, resized_width), mode="bilinear", align_corners=False
- )
-
- pad_height = max(0, int(height - resized_height))
- pad_width = max(0, int(width - resized_width))
-
- # pad on left and top of image
- padded_img = F.pad(resized_img, (pad_width, 0, pad_height, 0), value=pad_value)
- return padded_img
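The removed `modeling_pi0fast.py` encodes actions with the FAST tokenizer and then remaps those ids into the unused tail of the PaliGemma vocabulary via `vocab_size - 1 - fast_skip_tokens - token`; because that affine map is its own inverse, `extract_actions` applies the same function to the generated ids to recover FAST tokens. A small sketch of the round trip (the constants are copied from the config above; the function name is illustrative):

```python
import torch

vocab_size = 257_152     # PaliGemma vocabulary size used above
fast_skip_tokens = 128   # special tokens skipped at the end of the vocabulary


def act_to_paligemma(tokens: torch.Tensor) -> torch.Tensor:
    """Map FAST action-token ids into the tail of the PaliGemma vocabulary; the map is its own inverse."""
    return vocab_size - 1 - fast_skip_tokens - tokens


fast_tokens = torch.tensor([0, 1, 42, 1000])
paligemma_tokens = act_to_paligemma(fast_tokens)
assert paligemma_tokens.max() < vocab_size - fast_skip_tokens        # lands below the skipped specials
assert torch.equal(act_to_paligemma(paligemma_tokens), fast_tokens)  # involution: round-trips exactly
```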
diff --git a/lerobot/common/policies/utils.py b/lerobot/common/policies/utils.py
deleted file mode 100644
index c06e620b..00000000
--- a/lerobot/common/policies/utils.py
+++ /dev/null
@@ -1,67 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import torch
-from torch import nn
-
-
-def populate_queues(queues, batch):
- for key in batch:
- # Ignore keys not in the queues already (leaving the responsibility to the caller to make sure the
- # queues have the keys they want).
- if key not in queues:
- continue
- if len(queues[key]) != queues[key].maxlen:
- # initialize by copying the first observation several times until the queue is full
- while len(queues[key]) != queues[key].maxlen:
- queues[key].append(batch[key])
- else:
- # add latest observation to the queue
- queues[key].append(batch[key])
- return queues
-
-
-def get_device_from_parameters(module: nn.Module) -> torch.device:
- """Get a module's device by checking one of its parameters.
-
- Note: assumes that all parameters have the same device
- """
- return next(iter(module.parameters())).device
-
-
-def get_dtype_from_parameters(module: nn.Module) -> torch.dtype:
- """Get a module's parameter dtype by checking one of its parameters.
-
- Note: assumes that all parameters have the same dtype.
- """
- return next(iter(module.parameters())).dtype
-
-
-def get_output_shape(module: nn.Module, input_shape: tuple) -> tuple:
- """
- Calculates the output shape of a PyTorch module given an input shape.
-
- Args:
- module (nn.Module): a PyTorch module
- input_shape (tuple): A tuple representing the input shape, e.g., (batch_size, channels, height, width)
-
- Returns:
- tuple: The output shape of the module.
- """
- dummy_input = torch.zeros(size=input_shape)
- with torch.inference_mode():
- output = module(dummy_input)
- return tuple(output.shape)
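A short sketch of how these removed helpers were typically used, assuming a checkout where `lerobot/common/policies/utils.py` still exists; the toy backbone and queue contents below are invented for illustration:

```python
from collections import deque

import torch
from torch import nn

from lerobot.common.policies.utils import get_output_shape, populate_queues

# Infer the flattened feature size of a small CNN backbone from a dummy forward pass.
backbone = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.Flatten())
print(get_output_shape(backbone, (1, 3, 96, 96)))  # (1, 17672) for this toy backbone

# Keep a rolling window of the two most recent observations, as policies do at inference time.
queues = {"observation.state": deque(maxlen=2)}
batch = {"observation.state": torch.zeros(1, 6), "ignored_key": torch.ones(1)}
queues = populate_queues(queues, batch)  # fills the queue by repeating the first observation
print(len(queues["observation.state"]))  # 2
```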
diff --git a/lerobot/common/robot_devices/cameras/configs.py b/lerobot/common/robot_devices/cameras/configs.py
deleted file mode 100644
index 013419a9..00000000
--- a/lerobot/common/robot_devices/cameras/configs.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-from dataclasses import dataclass
-
-import draccus
-
-
-@dataclass
-class CameraConfig(draccus.ChoiceRegistry, abc.ABC):
- @property
- def type(self) -> str:
- return self.get_choice_name(self.__class__)
-
-
-@CameraConfig.register_subclass("opencv")
-@dataclass
-class OpenCVCameraConfig(CameraConfig):
- """
- Example of tested options for Intel Real Sense D405:
-
- ```python
- OpenCVCameraConfig(0, 30, 640, 480)
- OpenCVCameraConfig(0, 60, 640, 480)
- OpenCVCameraConfig(0, 90, 640, 480)
- OpenCVCameraConfig(0, 30, 1280, 720)
- ```
- """
-
- camera_index: int
- fps: int | None = None
- width: int | None = None
- height: int | None = None
- color_mode: str = "rgb"
- channels: int | None = None
- rotation: int | None = None
- mock: bool = False
-
- def __post_init__(self):
- if self.color_mode not in ["rgb", "bgr"]:
- raise ValueError(
- f"`color_mode` is expected to be 'rgb' or 'bgr', but {self.color_mode} is provided."
- )
-
- self.channels = 3
-
- if self.rotation not in [-90, None, 90, 180]:
- raise ValueError(f"`rotation` must be in [-90, None, 90, 180] (got {self.rotation})")
-
-
-@CameraConfig.register_subclass("intelrealsense")
-@dataclass
-class IntelRealSenseCameraConfig(CameraConfig):
- """
- Example of tested options for Intel Real Sense D405:
-
- ```python
- IntelRealSenseCameraConfig(128422271347, 30, 640, 480)
- IntelRealSenseCameraConfig(128422271347, 60, 640, 480)
- IntelRealSenseCameraConfig(128422271347, 90, 640, 480)
- IntelRealSenseCameraConfig(128422271347, 30, 1280, 720)
- IntelRealSenseCameraConfig(128422271347, 30, 640, 480, use_depth=True)
- IntelRealSenseCameraConfig(128422271347, 30, 640, 480, rotation=90)
- ```
- """
-
- name: str | None = None
- serial_number: int | None = None
- fps: int | None = None
- width: int | None = None
- height: int | None = None
- color_mode: str = "rgb"
- channels: int | None = None
- use_depth: bool = False
- force_hardware_reset: bool = True
- rotation: int | None = None
- mock: bool = False
-
- def __post_init__(self):
- # Use bool() rather than `is None` checks, so that empty strings also count as unset.
- if bool(self.name) and bool(self.serial_number):
- raise ValueError(
- f"One of them must be set: name or serial_number, but {self.name=} and {self.serial_number=} provided."
- )
-
- if self.color_mode not in ["rgb", "bgr"]:
- raise ValueError(
- f"`color_mode` is expected to be 'rgb' or 'bgr', but {self.color_mode} is provided."
- )
-
- self.channels = 3
-
- at_least_one_is_not_none = self.fps is not None or self.width is not None or self.height is not None
- at_least_one_is_none = self.fps is None or self.width is None or self.height is None
- if at_least_one_is_not_none and at_least_one_is_none:
- raise ValueError(
- "For `fps`, `width` and `height`, either all of them need to be set, or none of them, "
- f"but {self.fps=}, {self.width=}, {self.height=} were provided."
- )
-
- if self.rotation not in [-90, None, 90, 180]:
- raise ValueError(f"`rotation` must be in [-90, None, 90, 180] (got {self.rotation})")
diff --git a/lerobot/common/robot_devices/cameras/intelrealsense.py b/lerobot/common/robot_devices/cameras/intelrealsense.py
deleted file mode 100644
index 3c3cf3c3..00000000
--- a/lerobot/common/robot_devices/cameras/intelrealsense.py
+++ /dev/null
@@ -1,538 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This file contains utilities for recording frames from Intel Realsense cameras.
-"""
-
-import argparse
-import concurrent.futures
-import logging
-import math
-import shutil
-import threading
-import time
-import traceback
-from collections import Counter
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-from PIL import Image
-
-from lerobot.common.robot_devices.cameras.configs import IntelRealSenseCameraConfig
-from lerobot.common.robot_devices.utils import (
- RobotDeviceAlreadyConnectedError,
- RobotDeviceNotConnectedError,
- busy_wait,
-)
-from lerobot.common.utils.utils import capture_timestamp_utc
-
-SERIAL_NUMBER_INDEX = 1
-
-
-def find_cameras(raise_when_empty=True, mock=False) -> list[dict]:
- """
- Find the names and the serial numbers of the Intel RealSense cameras
- connected to the computer.
- """
- if mock:
- import tests.cameras.mock_pyrealsense2 as rs
- else:
- import pyrealsense2 as rs
-
- cameras = []
- for device in rs.context().query_devices():
- serial_number = int(device.get_info(rs.camera_info(SERIAL_NUMBER_INDEX)))
- name = device.get_info(rs.camera_info.name)
- cameras.append(
- {
- "serial_number": serial_number,
- "name": name,
- }
- )
-
- if raise_when_empty and len(cameras) == 0:
- raise OSError(
- "Not a single camera was detected. Try re-plugging, or re-installing `librealsense` and its python wrapper `pyrealsense2`, or updating the firmware."
- )
-
- return cameras
-
-
-def save_image(img_array, serial_number, frame_index, images_dir):
- try:
- img = Image.fromarray(img_array)
- path = images_dir / f"camera_{serial_number}_frame_{frame_index:06d}.png"
- path.parent.mkdir(parents=True, exist_ok=True)
- img.save(str(path), quality=100)
- logging.info(f"Saved image: {path}")
- except Exception as e:
- logging.error(f"Failed to save image for camera {serial_number} frame {frame_index}: {e}")
-
-
-def save_images_from_cameras(
- images_dir: Path,
- serial_numbers: list[int] | None = None,
- fps=None,
- width=None,
- height=None,
- record_time_s=2,
- mock=False,
-):
- """
- Initializes all the cameras and saves images to the directory. Useful to visually identify the camera
- associated to a given serial number.
- """
- if serial_numbers is None or len(serial_numbers) == 0:
- camera_infos = find_cameras(mock=mock)
- serial_numbers = [cam["serial_number"] for cam in camera_infos]
-
- if mock:
- import tests.cameras.mock_cv2 as cv2
- else:
- import cv2
-
- print("Connecting cameras")
- cameras = []
- for cam_sn in serial_numbers:
- print(f"{cam_sn=}")
- config = IntelRealSenseCameraConfig(
- serial_number=cam_sn, fps=fps, width=width, height=height, mock=mock
- )
- camera = IntelRealSenseCamera(config)
- camera.connect()
- print(
- f"IntelRealSenseCamera({camera.serial_number}, fps={camera.fps}, width={camera.capture_width}, height={camera.capture_height}, color_mode={camera.color_mode})"
- )
- cameras.append(camera)
-
- images_dir = Path(images_dir)
- if images_dir.exists():
- shutil.rmtree(
- images_dir,
- )
- images_dir.mkdir(parents=True, exist_ok=True)
-
- print(f"Saving images to {images_dir}")
- frame_index = 0
- start_time = time.perf_counter()
- try:
- with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
- while True:
- now = time.perf_counter()
-
- for camera in cameras:
- # If we use async_read when fps is None, the loop will go full speed, and we will end up
- # saving the same images from the cameras multiple times until the RAM/disk is full.
- image = camera.read() if fps is None else camera.async_read()
- if image is None:
- print("No Frame")
-
- bgr_converted_image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
-
- executor.submit(
- save_image,
- bgr_converted_image,
- camera.serial_number,
- frame_index,
- images_dir,
- )
-
- if fps is not None:
- dt_s = time.perf_counter() - now
- busy_wait(1 / fps - dt_s)
-
- if time.perf_counter() - start_time > record_time_s:
- break
-
- print(f"Frame: {frame_index:04d}\tLatency (ms): {(time.perf_counter() - now) * 1000:.2f}")
-
- frame_index += 1
- finally:
- print(f"Images have been saved to {images_dir}")
- for camera in cameras:
- camera.disconnect()
-
-
-class IntelRealSenseCamera:
- """
- The IntelRealSenseCamera class is similar to the OpenCVCamera class but adds additional features for Intel RealSense cameras:
- - is instantiated with the camera's serial number, which does not change unpredictably the way OpenCVCamera indices can on Linux,
- - can also be instantiated with the camera's name (if it is unique) by setting `name` in IntelRealSenseCameraConfig,
- - depth map can be returned.
-
- To find the serial numbers of your cameras, you can run our utility script that will save a few frames for each camera:
- ```bash
- python lerobot/common/robot_devices/cameras/intelrealsense.py --images-dir outputs/images_from_intelrealsense_cameras
- ```
-
- When an IntelRealSenseCamera is instantiated, if no specific config is provided, the default fps, width, height and color_mode
- of the given camera will be used.
-
- Example of instantiating with a serial number:
- ```python
- from lerobot.common.robot_devices.cameras.configs import IntelRealSenseCameraConfig
-
- config = IntelRealSenseCameraConfig(serial_number=128422271347)
- camera = IntelRealSenseCamera(config)
- camera.connect()
- color_image = camera.read()
- # when done using the camera, consider disconnecting
- camera.disconnect()
- ```
-
- Example of instantiating with a name if it's unique:
- ```
- config = IntelRealSenseCameraConfig(name="Intel RealSense D405")
- ```
-
- Example of changing default fps, width, height and color_mode:
- ```python
- config = IntelRealSenseCameraConfig(serial_number=128422271347, fps=30, width=1280, height=720)
- config = IntelRealSenseCameraConfig(serial_number=128422271347, fps=90, width=640, height=480)
- config = IntelRealSenseCameraConfig(serial_number=128422271347, fps=90, width=640, height=480, color_mode="bgr")
- # Note: might error out upon `camera.connect()` if these settings are not compatible with the camera
- ```
-
- Example of returning depth:
- ```python
- config = IntelRealSenseCameraConfig(serial_number=128422271347, use_depth=True)
- camera = IntelRealSenseCamera(config)
- camera.connect()
- color_image, depth_map = camera.read()
- ```
- """
-
- def __init__(
- self,
- config: IntelRealSenseCameraConfig,
- ):
- self.config = config
- if config.name is not None:
- self.serial_number = self.find_serial_number_from_name(config.name)
- else:
- self.serial_number = config.serial_number
-
- # Store the raw (capture) resolution from the config.
- self.capture_width = config.width
- self.capture_height = config.height
-
- # If rotated by ±90, swap width and height.
- if config.rotation in [-90, 90]:
- self.width = config.height
- self.height = config.width
- else:
- self.width = config.width
- self.height = config.height
-
- self.fps = config.fps
- self.channels = config.channels
- self.color_mode = config.color_mode
- self.use_depth = config.use_depth
- self.force_hardware_reset = config.force_hardware_reset
- self.mock = config.mock
-
- self.camera = None
- self.is_connected = False
- self.thread = None
- self.stop_event = None
- self.color_image = None
- self.depth_map = None
- self.logs = {}
-
- if self.mock:
- import tests.cameras.mock_cv2 as cv2
- else:
- import cv2
-
- self.rotation = None
- if config.rotation == -90:
- self.rotation = cv2.ROTATE_90_COUNTERCLOCKWISE
- elif config.rotation == 90:
- self.rotation = cv2.ROTATE_90_CLOCKWISE
- elif config.rotation == 180:
- self.rotation = cv2.ROTATE_180
-
- def find_serial_number_from_name(self, name):
- camera_infos = find_cameras()
- camera_names = [cam["name"] for cam in camera_infos]
- this_name_count = Counter(camera_names)[name]
- if this_name_count > 1:
- # TODO(aliberts): Test this with multiple identical cameras (Aloha)
- raise ValueError(
- f"Multiple {name} cameras have been detected. Please use their serial number to instantiate them."
- )
-
- name_to_serial_dict = {cam["name"]: cam["serial_number"] for cam in camera_infos}
- cam_sn = name_to_serial_dict[name]
-
- return cam_sn
-
- def connect(self):
- if self.is_connected:
- raise RobotDeviceAlreadyConnectedError(
- f"IntelRealSenseCamera({self.serial_number}) is already connected."
- )
-
- if self.mock:
- import tests.cameras.mock_pyrealsense2 as rs
- else:
- import pyrealsense2 as rs
-
- config = rs.config()
- config.enable_device(str(self.serial_number))
-
- if self.fps and self.capture_width and self.capture_height:
- # TODO(rcadene): can we set rgb8 directly?
- config.enable_stream(
- rs.stream.color, self.capture_width, self.capture_height, rs.format.rgb8, self.fps
- )
- else:
- config.enable_stream(rs.stream.color)
-
- if self.use_depth:
- if self.fps and self.capture_width and self.capture_height:
- config.enable_stream(
- rs.stream.depth, self.capture_width, self.capture_height, rs.format.z16, self.fps
- )
- else:
- config.enable_stream(rs.stream.depth)
-
- self.camera = rs.pipeline()
- try:
- profile = self.camera.start(config)
- is_camera_open = True
- except RuntimeError:
- is_camera_open = False
- traceback.print_exc()
-
- # If the camera doesn't work, display the camera indices corresponding to
- # valid cameras.
- if not is_camera_open:
- # Verify that the provided `serial_number` is valid before printing the traceback
- camera_infos = find_cameras()
- serial_numbers = [cam["serial_number"] for cam in camera_infos]
- if self.serial_number not in serial_numbers:
- raise ValueError(
- f"`serial_number` is expected to be one of these available cameras {serial_numbers}, but {self.serial_number} is provided instead. "
- "To find the serial number you should use, run `python lerobot/common/robot_devices/cameras/intelrealsense.py`."
- )
-
- raise OSError(f"Can't access IntelRealSenseCamera({self.serial_number}).")
-
- color_stream = profile.get_stream(rs.stream.color)
- color_profile = color_stream.as_video_stream_profile()
- actual_fps = color_profile.fps()
- actual_width = color_profile.width()
- actual_height = color_profile.height()
-
- # Using `math.isclose` since actual fps can be a float (e.g. 29.9 instead of 30)
- if self.fps is not None and not math.isclose(self.fps, actual_fps, rel_tol=1e-3):
- # Using `OSError` since it's a broad error that encompasses issues related to device communication
- raise OSError(
- f"Can't set {self.fps=} for IntelRealSenseCamera({self.serial_number}). Actual value is {actual_fps}."
- )
- if self.capture_width is not None and self.capture_width != actual_width:
- raise OSError(
- f"Can't set {self.capture_width=} for IntelRealSenseCamera({self.serial_number}). Actual value is {actual_width}."
- )
- if self.capture_height is not None and self.capture_height != actual_height:
- raise OSError(
- f"Can't set {self.capture_height=} for IntelRealSenseCamera({self.serial_number}). Actual value is {actual_height}."
- )
-
- self.fps = round(actual_fps)
- self.capture_width = round(actual_width)
- self.capture_height = round(actual_height)
-
- self.is_connected = True
-
- def read(self, temporary_color: str | None = None) -> np.ndarray | tuple[np.ndarray, np.ndarray]:
- """Read a frame from the camera returned in the format height x width x channels (e.g. 480 x 640 x 3)
- of type `np.uint8`, unlike the pytorch format, which is float32 and channel-first.
-
- When `use_depth=True`, returns a tuple `(color_image, depth_map)` with a depth map in the format
- height x width (e.g. 480 x 640) of type np.uint16.
-
- Note: Reading a frame is blocking and happens at most `camera.fps` times per second.
- If you are also reading data from other sensors, we advise using `camera.async_read()`, the non-blocking version of `camera.read()`.
- """
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"IntelRealSenseCamera({self.serial_number}) is not connected. Try running `camera.connect()` first."
- )
-
- if self.mock:
- import tests.cameras.mock_cv2 as cv2
- else:
- import cv2
-
- start_time = time.perf_counter()
-
- frame = self.camera.wait_for_frames(timeout_ms=5000)
-
- color_frame = frame.get_color_frame()
-
- if not color_frame:
- raise OSError(f"Can't capture color image from IntelRealSenseCamera({self.serial_number}).")
-
- color_image = np.asanyarray(color_frame.get_data())
-
- requested_color_mode = self.color_mode if temporary_color is None else temporary_color
- if requested_color_mode not in ["rgb", "bgr"]:
- raise ValueError(
- f"Expected color values are 'rgb' or 'bgr', but {requested_color_mode} is provided."
- )
-
- # IntelRealSense uses RGB format as default (red, green, blue).
- if requested_color_mode == "bgr":
- color_image = cv2.cvtColor(color_image, cv2.COLOR_RGB2BGR)
-
- h, w, _ = color_image.shape
- if h != self.capture_height or w != self.capture_width:
- raise OSError(
- f"Can't capture color image with expected height and width ({self.height} x {self.width}). ({h} x {w}) returned instead."
- )
-
- if self.rotation is not None:
- color_image = cv2.rotate(color_image, self.rotation)
-
- # log the number of seconds it took to read the image
- self.logs["delta_timestamp_s"] = time.perf_counter() - start_time
-
- # log the utc time at which the image was received
- self.logs["timestamp_utc"] = capture_timestamp_utc()
-
- if self.use_depth:
- depth_frame = frame.get_depth_frame()
- if not depth_frame:
- raise OSError(f"Can't capture depth image from IntelRealSenseCamera({self.serial_number}).")
-
- depth_map = np.asanyarray(depth_frame.get_data())
-
- h, w = depth_map.shape
- if h != self.capture_height or w != self.capture_width:
- raise OSError(
- f"Can't capture depth map with expected height and width ({self.height} x {self.width}). ({h} x {w}) returned instead."
- )
-
- if self.rotation is not None:
- depth_map = cv2.rotate(depth_map, self.rotation)
-
- return color_image, depth_map
- else:
- return color_image
-
- def read_loop(self):
- while not self.stop_event.is_set():
- if self.use_depth:
- self.color_image, self.depth_map = self.read()
- else:
- self.color_image = self.read()
-
- def async_read(self):
- """Access the latest color image"""
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"IntelRealSenseCamera({self.serial_number}) is not connected. Try running `camera.connect()` first."
- )
-
- if self.thread is None:
- self.stop_event = threading.Event()
- self.thread = Thread(target=self.read_loop, args=())
- self.thread.daemon = True
- self.thread.start()
-
- num_tries = 0
- while self.color_image is None:
- # TODO(rcadene, aliberts): intelrealsense has diverged compared to opencv over here
- num_tries += 1
- time.sleep(1 / self.fps)
- if num_tries > self.fps and (self.thread.ident is None or not self.thread.is_alive()):
- raise Exception(
- "The thread responsible for `self.async_read()` took too much time to start. There might be an issue. Verify that `self.thread.start()` has been called."
- )
-
- if self.use_depth:
- return self.color_image, self.depth_map
- else:
- return self.color_image
-
- def disconnect(self):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"IntelRealSenseCamera({self.serial_number}) is not connected. Try running `camera.connect()` first."
- )
-
- if self.thread is not None and self.thread.is_alive():
- # wait for the thread to finish
- self.stop_event.set()
- self.thread.join()
- self.thread = None
- self.stop_event = None
-
- self.camera.stop()
- self.camera = None
-
- self.is_connected = False
-
- def __del__(self):
- if getattr(self, "is_connected", False):
- self.disconnect()
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(
- description="Save a few frames using `IntelRealSenseCamera` for all cameras connected to the computer, or a selected subset."
- )
- parser.add_argument(
- "--serial-numbers",
- type=int,
- nargs="*",
- default=None,
- help="List of serial numbers used to instantiate the `IntelRealSenseCamera`. If not provided, find and use all available camera indices.",
- )
- parser.add_argument(
- "--fps",
- type=int,
- default=30,
- help="Set the number of frames recorded per seconds for all cameras. If not provided, use the default fps of each camera.",
- )
- parser.add_argument(
- "--width",
- type=int,
- default=640,
- help="Set the width for all cameras. If not provided, use the default width of each camera.",
- )
- parser.add_argument(
- "--height",
- type=int,
- default=480,
- help="Set the height for all cameras. If not provided, use the default height of each camera.",
- )
- parser.add_argument(
- "--images-dir",
- type=Path,
- default="outputs/images_from_intelrealsense_cameras",
- help="Set directory to save a few frames for each camera.",
- )
- parser.add_argument(
- "--record-time-s",
- type=float,
- default=2.0,
- help="Set the number of seconds used to record the frames. By default, 2 seconds.",
- )
- args = parser.parse_args()
- save_images_from_cameras(**vars(args))
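A sketch of a depth-enabled capture loop with the removed `IntelRealSenseCamera`, assuming a checkout where the module above still exists and a camera with the docstring's example serial number is plugged in:

```python
import time

from lerobot.common.robot_devices.cameras.configs import IntelRealSenseCameraConfig
from lerobot.common.robot_devices.cameras.intelrealsense import IntelRealSenseCamera

# Serial number reused from the docstring example above; replace it with one reported
# by find_cameras() on your machine.
config = IntelRealSenseCameraConfig(
    serial_number=128422271347, fps=30, width=640, height=480, use_depth=True
)
camera = IntelRealSenseCamera(config)
camera.connect()

for _ in range(10):
    # async_read() starts a background read loop on first call and returns the latest frames.
    color_image, depth_map = camera.async_read()
    print(color_image.shape, depth_map.shape)  # (480, 640, 3) uint8, (480, 640) uint16
    time.sleep(1 / camera.fps)

camera.disconnect()
```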
diff --git a/lerobot/common/robot_devices/cameras/opencv.py b/lerobot/common/robot_devices/cameras/opencv.py
deleted file mode 100644
index c9226805..00000000
--- a/lerobot/common/robot_devices/cameras/opencv.py
+++ /dev/null
@@ -1,518 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This file contains utilities for recording frames from cameras. For more info look at `OpenCVCamera` docstring.
-"""
-
-import argparse
-import concurrent.futures
-import math
-import platform
-import shutil
-import threading
-import time
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-from PIL import Image
-
-from lerobot.common.robot_devices.cameras.configs import OpenCVCameraConfig
-from lerobot.common.robot_devices.utils import (
- RobotDeviceAlreadyConnectedError,
- RobotDeviceNotConnectedError,
- busy_wait,
-)
-from lerobot.common.utils.utils import capture_timestamp_utc
-
-# The maximum opencv device index depends on your operating system. For instance,
-# if you have 3 cameras, they should be associated to index 0, 1, and 2. This is the case
- # on MacOS. However, on Ubuntu, the indices can be arbitrary, such as 6, 16, 23.
- # When you change the USB port or reboot the computer, the operating system might
- # treat the same cameras as new devices. Thus we search over a larger range of indices.
-MAX_OPENCV_INDEX = 60
-
-
-def find_cameras(raise_when_empty=False, max_index_search_range=MAX_OPENCV_INDEX, mock=False) -> list[dict]:
- cameras = []
- if platform.system() == "Linux":
- print("Linux detected. Finding available camera indices through scanning '/dev/video*' ports")
- possible_ports = [str(port) for port in Path("/dev").glob("video*")]
- ports = _find_cameras(possible_ports, mock=mock)
- for port in ports:
- cameras.append(
- {
- "port": port,
- "index": int(port.removeprefix("/dev/video")),
- }
- )
- else:
- print(
- "Mac or Windows detected. Finding available camera indices through "
- f"scanning all indices from 0 to {MAX_OPENCV_INDEX}"
- )
- possible_indices = range(max_index_search_range)
- indices = _find_cameras(possible_indices, mock=mock)
- for index in indices:
- cameras.append(
- {
- "port": None,
- "index": index,
- }
- )
-
- return cameras
-
-
-def _find_cameras(
- possible_camera_ids: list[int | str], raise_when_empty=False, mock=False
-) -> list[int | str]:
- if mock:
- import tests.cameras.mock_cv2 as cv2
- else:
- import cv2
-
- camera_ids = []
- for camera_idx in possible_camera_ids:
- camera = cv2.VideoCapture(camera_idx)
- is_open = camera.isOpened()
- camera.release()
-
- if is_open:
- print(f"Camera found at index {camera_idx}")
- camera_ids.append(camera_idx)
-
- if raise_when_empty and len(camera_ids) == 0:
- raise OSError(
- "Not a single camera was detected. Try re-plugging, or re-installing `opencv2`, "
- "or your camera driver, or make sure your camera is compatible with opencv2."
- )
-
- return camera_ids
-
-
-def is_valid_unix_path(path: str) -> bool:
- """Note: if 'path' points to a symlink, this will return True only if the target exists"""
- p = Path(path)
- return p.is_absolute() and p.exists()
-
-
-def get_camera_index_from_unix_port(port: Path) -> int:
- return int(str(port.resolve()).removeprefix("/dev/video"))
-
-
-def save_image(img_array, camera_index, frame_index, images_dir):
- img = Image.fromarray(img_array)
- path = images_dir / f"camera_{camera_index:02d}_frame_{frame_index:06d}.png"
- path.parent.mkdir(parents=True, exist_ok=True)
- img.save(str(path), quality=100)
-
-
-def save_images_from_cameras(
- images_dir: Path,
- camera_ids: list | None = None,
- fps=None,
- width=None,
- height=None,
- record_time_s=2,
- mock=False,
-):
- """
- Initializes all the cameras and saves images to the directory. Useful to visually identify the camera
- associated to a given camera index.
- """
- if camera_ids is None or len(camera_ids) == 0:
- camera_infos = find_cameras(mock=mock)
- camera_ids = [cam["index"] for cam in camera_infos]
-
- print("Connecting cameras")
- cameras = []
- for cam_idx in camera_ids:
- config = OpenCVCameraConfig(camera_index=cam_idx, fps=fps, width=width, height=height, mock=mock)
- camera = OpenCVCamera(config)
- camera.connect()
- print(
- f"OpenCVCamera({camera.camera_index}, fps={camera.fps}, width={camera.capture_width}, "
- f"height={camera.capture_height}, color_mode={camera.color_mode})"
- )
- cameras.append(camera)
-
- images_dir = Path(images_dir)
- if images_dir.exists():
- shutil.rmtree(
- images_dir,
- )
- images_dir.mkdir(parents=True, exist_ok=True)
-
- print(f"Saving images to {images_dir}")
- frame_index = 0
- start_time = time.perf_counter()
- with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
- while True:
- now = time.perf_counter()
-
- for camera in cameras:
- # If we use async_read when fps is None, the loop will go full speed, and we will end up
- # saving the same images from the cameras multiple times until the RAM/disk is full.
- image = camera.read() if fps is None else camera.async_read()
-
- executor.submit(
- save_image,
- image,
- camera.camera_index,
- frame_index,
- images_dir,
- )
-
- if fps is not None:
- dt_s = time.perf_counter() - now
- busy_wait(1 / fps - dt_s)
-
- print(f"Frame: {frame_index:04d}\tLatency (ms): {(time.perf_counter() - now) * 1000:.2f}")
-
- if time.perf_counter() - start_time > record_time_s:
- break
-
- frame_index += 1
-
- print(f"Images have been saved to {images_dir}")
-
-
-class OpenCVCamera:
- """
- The OpenCVCamera class allows efficiently recording images from cameras. It relies on opencv2 to communicate
- with the cameras. Most cameras are compatible. For more info, see the [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
-
- An OpenCVCamera instance requires a camera index (e.g. `OpenCVCamera(camera_index=0)`). When you only have one camera
- like a laptop webcam, the camera index is usually 0, but it can also be very different, and the camera index
- might change if you reboot your computer or re-plug your camera. This behavior depends on your operating system.
-
- To find the camera indices of your cameras, you can run our utility script that will save a few frames for each camera:
- ```bash
- python lerobot/common/robot_devices/cameras/opencv.py --images-dir outputs/images_from_opencv_cameras
- ```
-
- When an OpenCVCamera is instantiated, if no specific config is provided, the default fps, width, height and color_mode
- of the given camera will be used.
-
- Example of usage:
- ```python
- from lerobot.common.robot_devices.cameras.configs import OpenCVCameraConfig
-
- config = OpenCVCameraConfig(camera_index=0)
- camera = OpenCVCamera(config)
- camera.connect()
- color_image = camera.read()
- # when done using the camera, consider disconnecting
- camera.disconnect()
- ```
-
- Example of changing default fps, width, height and color_mode:
- ```python
- config = OpenCVCameraConfig(camera_index=0, fps=30, width=1280, height=720)
- config = OpenCVCameraConfig(camera_index=0, fps=90, width=640, height=480)
- config = OpenCVCameraConfig(camera_index=0, fps=90, width=640, height=480, color_mode="bgr")
- # Note: might error out upon `camera.connect()` if these settings are not compatible with the camera
- ```
- """
-
- def __init__(self, config: OpenCVCameraConfig):
- self.config = config
- self.camera_index = config.camera_index
- self.port = None
-
- # Linux uses ports for connecting to cameras
- if platform.system() == "Linux":
- if isinstance(self.camera_index, int):
- self.port = Path(f"/dev/video{self.camera_index}")
- elif isinstance(self.camera_index, str) and is_valid_unix_path(self.camera_index):
- self.port = Path(self.camera_index)
- # Retrieve the camera index from a potentially symlinked path
- self.camera_index = get_camera_index_from_unix_port(self.port)
- else:
- raise ValueError(f"Please check the provided camera_index: {self.camera_index}")
-
- # Store the raw (capture) resolution from the config.
- self.capture_width = config.width
- self.capture_height = config.height
-
- # If rotated by ±90, swap width and height.
- if config.rotation in [-90, 90]:
- self.width = config.height
- self.height = config.width
- else:
- self.width = config.width
- self.height = config.height
-
- self.fps = config.fps
- self.channels = config.channels
- self.color_mode = config.color_mode
- self.mock = config.mock
-
- self.camera = None
- self.is_connected = False
- self.thread = None
- self.stop_event = None
- self.color_image = None
- self.logs = {}
-
- if self.mock:
- import tests.cameras.mock_cv2 as cv2
- else:
- import cv2
-
- self.rotation = None
- if config.rotation == -90:
- self.rotation = cv2.ROTATE_90_COUNTERCLOCKWISE
- elif config.rotation == 90:
- self.rotation = cv2.ROTATE_90_CLOCKWISE
- elif config.rotation == 180:
- self.rotation = cv2.ROTATE_180
-
- def connect(self):
- if self.is_connected:
- raise RobotDeviceAlreadyConnectedError(f"OpenCVCamera({self.camera_index}) is already connected.")
-
- if self.mock:
- import tests.cameras.mock_cv2 as cv2
- else:
- import cv2
-
- # Use 1 thread to avoid blocking the main thread. Especially useful during data collection
- # when other threads are used to save the images.
- cv2.setNumThreads(1)
-
- backend = (
- cv2.CAP_V4L2
- if platform.system() == "Linux"
- else cv2.CAP_DSHOW
- if platform.system() == "Windows"
- else cv2.CAP_AVFOUNDATION
- if platform.system() == "Darwin"
- else cv2.CAP_ANY
- )
-
- camera_idx = f"/dev/video{self.camera_index}" if platform.system() == "Linux" else self.camera_index
- # First create a temporary camera trying to access `camera_index`,
- # and verify it is a valid camera by calling `isOpened`.
- tmp_camera = cv2.VideoCapture(camera_idx, backend)
- is_camera_open = tmp_camera.isOpened()
- # Release camera to make it accessible for `find_camera_indices`
- tmp_camera.release()
- del tmp_camera
-
- # If the camera doesn't work, display the camera indices corresponding to
- # valid cameras.
- if not is_camera_open:
- # Verify that the provided `camera_index` is valid before printing the traceback
- cameras_info = find_cameras()
- available_cam_ids = [cam["index"] for cam in cameras_info]
- if self.camera_index not in available_cam_ids:
- raise ValueError(
- f"`camera_index` is expected to be one of these available cameras {available_cam_ids}, but {self.camera_index} is provided instead. "
- "To find the camera index you should use, run `python lerobot/common/robot_devices/cameras/opencv.py`."
- )
-
- raise OSError(f"Can't access OpenCVCamera({camera_idx}).")
-
- # Secondly, create the camera that will be used downstream.
- # Note: For some unknown reason, calling `isOpened` blocks the camera which then
- # needs to be re-created.
- self.camera = cv2.VideoCapture(camera_idx, backend)
-
- if self.fps is not None:
- self.camera.set(cv2.CAP_PROP_FPS, self.fps)
- if self.capture_width is not None:
- self.camera.set(cv2.CAP_PROP_FRAME_WIDTH, self.capture_width)
- if self.capture_height is not None:
- self.camera.set(cv2.CAP_PROP_FRAME_HEIGHT, self.capture_height)
-
- actual_fps = self.camera.get(cv2.CAP_PROP_FPS)
- actual_width = self.camera.get(cv2.CAP_PROP_FRAME_WIDTH)
- actual_height = self.camera.get(cv2.CAP_PROP_FRAME_HEIGHT)
-
- # Using `math.isclose` since actual fps can be a float (e.g. 29.9 instead of 30)
- if self.fps is not None and not math.isclose(self.fps, actual_fps, rel_tol=1e-3):
- # Using `OSError` since it's a broad error that encompasses issues related to device communication
- raise OSError(
- f"Can't set {self.fps=} for OpenCVCamera({self.camera_index}). Actual value is {actual_fps}."
- )
- if self.capture_width is not None and not math.isclose(
- self.capture_width, actual_width, rel_tol=1e-3
- ):
- raise OSError(
- f"Can't set {self.capture_width=} for OpenCVCamera({self.camera_index}). Actual value is {actual_width}."
- )
- if self.capture_height is not None and not math.isclose(
- self.capture_height, actual_height, rel_tol=1e-3
- ):
- raise OSError(
- f"Can't set {self.capture_height=} for OpenCVCamera({self.camera_index}). Actual value is {actual_height}."
- )
-
- self.fps = round(actual_fps)
- self.capture_width = round(actual_width)
- self.capture_height = round(actual_height)
- self.is_connected = True
-
- def read(self, temporary_color_mode: str | None = None) -> np.ndarray:
- """Read a frame from the camera returned in the format (height, width, channels)
- (e.g. 480 x 640 x 3), unlike the pytorch format, which is channel-first.
-
- Note: Reading a frame is blocking and happens at most `camera.fps` times per second.
- If you are also reading data from other sensors, we advise using `camera.async_read()`, the non-blocking version of `camera.read()`.
- """
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"OpenCVCamera({self.camera_index}) is not connected. Try running `camera.connect()` first."
- )
-
- start_time = time.perf_counter()
-
- ret, color_image = self.camera.read()
-
- if not ret:
- raise OSError(f"Can't capture color image from camera {self.camera_index}.")
-
- requested_color_mode = self.color_mode if temporary_color_mode is None else temporary_color_mode
-
- if requested_color_mode not in ["rgb", "bgr"]:
- raise ValueError(
- f"Expected color values are 'rgb' or 'bgr', but {requested_color_mode} is provided."
- )
-
- # OpenCV uses BGR format as default (blue, green, red) for all operations, including displaying images.
- # However, deep learning frameworks such as LeRobot use the RGB format by default to train neural networks,
- # so we convert the image color from BGR to RGB.
- if requested_color_mode == "rgb":
- if self.mock:
- import tests.cameras.mock_cv2 as cv2
- else:
- import cv2
-
- color_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2RGB)
-
- h, w, _ = color_image.shape
- if h != self.capture_height or w != self.capture_width:
- raise OSError(
- f"Can't capture color image with expected height and width ({self.height} x {self.width}). ({h} x {w}) returned instead."
- )
-
- if self.rotation is not None:
- color_image = cv2.rotate(color_image, self.rotation)
-
- # log the number of seconds it took to read the image
- self.logs["delta_timestamp_s"] = time.perf_counter() - start_time
-
- # log the utc time at which the image was received
- self.logs["timestamp_utc"] = capture_timestamp_utc()
-
- self.color_image = color_image
-
- return color_image
-
- def read_loop(self):
- while not self.stop_event.is_set():
- try:
- self.color_image = self.read()
- except Exception as e:
- print(f"Error reading in thread: {e}")
-
- def async_read(self):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"OpenCVCamera({self.camera_index}) is not connected. Try running `camera.connect()` first."
- )
-
- if self.thread is None:
- self.stop_event = threading.Event()
- self.thread = Thread(target=self.read_loop, args=())
- self.thread.daemon = True
- self.thread.start()
-
- num_tries = 0
- while True:
- if self.color_image is not None:
- return self.color_image
-
- time.sleep(1 / self.fps)
- num_tries += 1
- if num_tries > self.fps * 2:
- raise TimeoutError("Timed out waiting for async_read() to start.")
-
- def disconnect(self):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"OpenCVCamera({self.camera_index}) is not connected. Try running `camera.connect()` first."
- )
-
- if self.thread is not None:
- self.stop_event.set()
- self.thread.join() # wait for the thread to finish
- self.thread = None
- self.stop_event = None
-
- self.camera.release()
- self.camera = None
- self.is_connected = False
-
- def __del__(self):
- if getattr(self, "is_connected", False):
- self.disconnect()
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(
- description="Save a few frames using `OpenCVCamera` for all cameras connected to the computer, or a selected subset."
- )
- parser.add_argument(
- "--camera-ids",
- type=int,
- nargs="*",
- default=None,
- help="List of camera indices used to instantiate the `OpenCVCamera`. If not provided, find and use all available camera indices.",
- )
- parser.add_argument(
- "--fps",
- type=int,
- default=None,
- help="Set the number of frames recorded per seconds for all cameras. If not provided, use the default fps of each camera.",
- )
- parser.add_argument(
- "--width",
- type=int,
- default=None,
- help="Set the width for all cameras. If not provided, use the default width of each camera.",
- )
- parser.add_argument(
- "--height",
- type=int,
- default=None,
- help="Set the height for all cameras. If not provided, use the default height of each camera.",
- )
- parser.add_argument(
- "--images-dir",
- type=Path,
- default="outputs/images_from_opencv_cameras",
- help="Set directory to save a few frames for each camera.",
- )
- parser.add_argument(
- "--record-time-s",
- type=float,
- default=4.0,
- help="Set the number of seconds used to record the frames. By default, 2 seconds.",
- )
- args = parser.parse_args()
- save_images_from_cameras(**vars(args))
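Similarly, a minimal sketch of reading from a webcam with the removed `OpenCVCamera` class, assuming a checkout where the module above still exists and a camera is available at index 0:

```python
from lerobot.common.robot_devices.cameras.configs import OpenCVCameraConfig
from lerobot.common.robot_devices.cameras.opencv import OpenCVCamera

camera = OpenCVCamera(OpenCVCameraConfig(camera_index=0, fps=30, width=640, height=480))
camera.connect()

frame = camera.read()  # blocking read, returns an HxWxC uint8 array in RGB by default
print(frame.shape, camera.logs["delta_timestamp_s"])

# Request a BGR frame once (e.g. for cv2.imshow) without changing the config.
bgr_frame = camera.read(temporary_color_mode="bgr")

camera.disconnect()
```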
diff --git a/lerobot/common/robot_devices/cameras/utils.py b/lerobot/common/robot_devices/cameras/utils.py
deleted file mode 100644
index c6431646..00000000
--- a/lerobot/common/robot_devices/cameras/utils.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import Protocol
-
-import numpy as np
-
-from lerobot.common.robot_devices.cameras.configs import (
- CameraConfig,
- IntelRealSenseCameraConfig,
- OpenCVCameraConfig,
-)
-
-
-# Defines a camera type
-class Camera(Protocol):
- def connect(self): ...
- def read(self, temporary_color: str | None = None) -> np.ndarray: ...
- def async_read(self) -> np.ndarray: ...
- def disconnect(self): ...
-
-
-def make_cameras_from_configs(camera_configs: dict[str, CameraConfig]) -> dict[str, Camera]:
- cameras = {}
-
- for key, cfg in camera_configs.items():
- if cfg.type == "opencv":
- from lerobot.common.robot_devices.cameras.opencv import OpenCVCamera
-
- cameras[key] = OpenCVCamera(cfg)
-
- elif cfg.type == "intelrealsense":
- from lerobot.common.robot_devices.cameras.intelrealsense import IntelRealSenseCamera
-
- cameras[key] = IntelRealSenseCamera(cfg)
- else:
- raise ValueError(f"The camera type '{cfg.type}' is not valid.")
-
- return cameras
-
-
-def make_camera(camera_type, **kwargs) -> Camera:
- if camera_type == "opencv":
- from lerobot.common.robot_devices.cameras.opencv import OpenCVCamera
-
- config = OpenCVCameraConfig(**kwargs)
- return OpenCVCamera(config)
-
- elif camera_type == "intelrealsense":
- from lerobot.common.robot_devices.cameras.intelrealsense import IntelRealSenseCamera
-
- config = IntelRealSenseCameraConfig(**kwargs)
- return IntelRealSenseCamera(config)
-
- else:
- raise ValueError(f"The camera type '{camera_type}' is not valid.")
diff --git a/lerobot/common/robot_devices/control_configs.py b/lerobot/common/robot_devices/control_configs.py
deleted file mode 100644
index cb558c71..00000000
--- a/lerobot/common/robot_devices/control_configs.py
+++ /dev/null
@@ -1,134 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from pathlib import Path
-
-import draccus
-
-from lerobot.common.robot_devices.robots.configs import RobotConfig
-from lerobot.configs import parser
-from lerobot.configs.policies import PreTrainedConfig
-
-
-@dataclass
-class ControlConfig(draccus.ChoiceRegistry):
- pass
-
-
-@ControlConfig.register_subclass("calibrate")
-@dataclass
-class CalibrateControlConfig(ControlConfig):
- # List of arms to calibrate (e.g. `--arms='["left_follower","right_follower"]' left_leader`)
- arms: list[str] | None = None
-
-
-@ControlConfig.register_subclass("teleoperate")
-@dataclass
-class TeleoperateControlConfig(ControlConfig):
- # Limit the maximum frames per second. By default, no limit.
- fps: int | None = None
- teleop_time_s: float | None = None
- # Display all cameras on screen
- display_data: bool = False
-
-
-@ControlConfig.register_subclass("record")
-@dataclass
-class RecordControlConfig(ControlConfig):
- # Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).
- repo_id: str
- # A short but accurate description of the task performed during the recording (e.g. "Pick the Lego block and drop it in the box on the right.")
- single_task: str
- # Root directory where the dataset will be stored (e.g. 'dataset/path').
- root: str | Path | None = None
- policy: PreTrainedConfig | None = None
- # Limit the frames per second. By default, uses the policy fps.
- fps: int | None = None
- # Number of seconds before starting data collection. It allows the robot devices to warmup and synchronize.
- warmup_time_s: int | float = 10
- # Number of seconds for data recording for each episode.
- episode_time_s: int | float = 60
- # Number of seconds for resetting the environment after each episode.
- reset_time_s: int | float = 60
- # Number of episodes to record.
- num_episodes: int = 50
- # Encode frames in the dataset into video
- video: bool = True
- # Upload dataset to Hugging Face hub.
- push_to_hub: bool = True
- # Upload on private repository on the Hugging Face hub.
- private: bool = False
- # Add tags to your dataset on the hub.
- tags: list[str] | None = None
- # Number of subprocesses handling the saving of frames as PNG. Set to 0 to use threads only;
- # set to ≥1 to use subprocesses, each using threads to write images. The best number of processes
- # and threads depends on your system. We recommend 4 threads per camera with 0 processes.
- # If fps is unstable, adjust the thread count. If still unstable, try using 1 or more subprocesses.
- num_image_writer_processes: int = 0
- # Number of threads writing the frames as png images on disk, per camera.
- # Too many threads might cause unstable teleoperation fps due to main thread being blocked.
- # Not enough threads might cause low camera fps.
- num_image_writer_threads_per_camera: int = 4
- # Display all cameras on screen
- display_data: bool = False
- # Use vocal synthesis to read events.
- play_sounds: bool = True
- # Resume recording on an existing dataset.
- resume: bool = False
-
- def __post_init__(self):
- # HACK: We parse again the cli args here to get the pretrained path if there was one.
- policy_path = parser.get_path_arg("control.policy")
- if policy_path:
- cli_overrides = parser.get_cli_overrides("control.policy")
- self.policy = PreTrainedConfig.from_pretrained(policy_path, cli_overrides=cli_overrides)
- self.policy.pretrained_path = policy_path
-
-
-@ControlConfig.register_subclass("replay")
-@dataclass
-class ReplayControlConfig(ControlConfig):
- # Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).
- repo_id: str
- # Index of the episode to replay.
- episode: int
- # Root directory where the dataset will be stored (e.g. 'dataset/path').
- root: str | Path | None = None
- # Limit the frames per second. By default, uses the dataset fps.
- fps: int | None = None
- # Use vocal synthesis to read events.
- play_sounds: bool = True
-
-
-@ControlConfig.register_subclass("remote_robot")
-@dataclass
-class RemoteRobotConfig(ControlConfig):
- log_interval: int = 100
- # Display all cameras on screen
- display_data: bool = False
- # Rerun configuration for remote robot (https://ref.rerun.io/docs/python/0.22.1/common/initialization_functions/#rerun.connect_tcp)
- viewer_ip: str | None = None
- viewer_port: str | None = None
-
-
-@dataclass
-class ControlPipelineConfig:
- robot: RobotConfig
- control: ControlConfig
-
- @classmethod
- def __get_path_fields__(cls) -> list[str]:
- """This enables the parser to load config from the policy using `--policy.path=local/dir`"""
- return ["control.policy"]
diff --git a/lerobot/common/robot_devices/control_utils.py b/lerobot/common/robot_devices/control_utils.py
deleted file mode 100644
index 9c629b16..00000000
--- a/lerobot/common/robot_devices/control_utils.py
+++ /dev/null
@@ -1,360 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-########################################################################################
-# Utilities
-########################################################################################
-
-
-import logging
-import time
-import traceback
-from contextlib import nullcontext
-from copy import copy
-from functools import cache
-
-import rerun as rr
-import torch
-from deepdiff import DeepDiff
-from termcolor import colored
-
-from lerobot.common.datasets.image_writer import safe_stop_image_writer
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.common.datasets.utils import get_features_from_robot
-from lerobot.common.policies.pretrained import PreTrainedPolicy
-from lerobot.common.robot_devices.robots.utils import Robot
-from lerobot.common.robot_devices.utils import busy_wait
-from lerobot.common.utils.utils import get_safe_torch_device, has_method
-
-
-def log_control_info(robot: Robot, dt_s, episode_index=None, frame_index=None, fps=None):
- log_items = []
- if episode_index is not None:
- log_items.append(f"ep:{episode_index}")
- if frame_index is not None:
- log_items.append(f"frame:{frame_index}")
-
- def log_dt(shortname, dt_val_s):
- nonlocal log_items, fps
- info_str = f"{shortname}:{dt_val_s * 1000:5.2f} ({1 / dt_val_s:3.1f}hz)"
- if fps is not None:
- actual_fps = 1 / dt_val_s
- if actual_fps < fps - 1:
- info_str = colored(info_str, "yellow")
- log_items.append(info_str)
-
- # total step time displayed in milliseconds and its frequency
- log_dt("dt", dt_s)
-
- # TODO(aliberts): move robot-specific logs logic in robot.print_logs()
- if not robot.robot_type.startswith(("stretch", "realman")):
- for name in robot.leader_arms:
- key = f"read_leader_{name}_pos_dt_s"
- if key in robot.logs:
- log_dt("dtRlead", robot.logs[key])
-
- for name in robot.follower_arms:
- key = f"write_follower_{name}_goal_pos_dt_s"
- if key in robot.logs:
- log_dt("dtWfoll", robot.logs[key])
-
- key = f"read_follower_{name}_pos_dt_s"
- if key in robot.logs:
- log_dt("dtRfoll", robot.logs[key])
-
- for name in robot.cameras:
- key = f"read_camera_{name}_dt_s"
- if key in robot.logs:
- log_dt(f"dtR{name}", robot.logs[key])
-
- info_str = " ".join(log_items)
- logging.info(info_str)
-
-
-@cache
-def is_headless():
- """Detects if python is running without a monitor."""
- try:
- import pynput # noqa
-
- return False
- except Exception:
- print(
- "Error trying to import pynput. Switching to headless mode. "
- "As a result, the video stream from the cameras won't be shown, "
- "and you won't be able to change the control flow with keyboards. "
- "For more info, see traceback below.\n"
- )
- traceback.print_exc()
- print()
- return True
-
-
-def predict_action(observation, policy, device, use_amp):
- observation = copy(observation)
- with (
- torch.inference_mode(),
- torch.autocast(device_type=device.type) if device.type == "cuda" and use_amp else nullcontext(),
- ):
- # Convert to pytorch format: channel first and float32 in [0,1] with batch dimension
- for name in observation:
- # Skip all observations that are not tensors (e.g. text)
- if not isinstance(observation[name], torch.Tensor):
- continue
-
- if "image" in name:
- observation[name] = observation[name].type(torch.float32) / 255
- observation[name] = observation[name].permute(2, 0, 1).contiguous()
- observation[name] = observation[name].unsqueeze(0)
- observation[name] = observation[name].to(device)
-
- # Compute the next action with the policy
- # based on the current observation
- action = policy.select_action(observation)
-
- # Remove batch dimension
- action = action.squeeze(0)
-
- # Move to cpu, if not already the case
- action = action.to("cpu")
-
- return action
-
-
-def init_keyboard_listener():
- # Allow to exit early while recording an episode or resetting the environment,
- # by tapping the right arrow key '->'. This might require a sudo permission
- # to allow your terminal to monitor keyboard events.
- events = {}
- events["exit_early"] = False
- events["rerecord_episode"] = False
- events["stop_recording"] = False
-
- if is_headless():
- logging.warning(
- "Headless environment detected. On-screen cameras display and keyboard inputs will not be available."
- )
- listener = None
- return listener, events
-
- # Only import pynput if not in a headless environment
- from pynput import keyboard
-
- def on_press(key):
- try:
- if key == keyboard.Key.right:
- print("Right arrow key pressed. Exiting loop...")
- events["exit_early"] = True
- elif key == keyboard.Key.left:
- print("Left arrow key pressed. Exiting loop and rerecord the last episode...")
- events["rerecord_episode"] = True
- events["exit_early"] = True
- elif key == keyboard.Key.esc:
- print("Escape key pressed. Stopping data recording...")
- events["stop_recording"] = True
- events["exit_early"] = True
- except Exception as e:
- print(f"Error handling key press: {e}")
-
- listener = keyboard.Listener(on_press=on_press)
- listener.start()
-
- return listener, events
-
-
-def warmup_record(
- robot,
- events,
- enable_teleoperation,
- warmup_time_s,
- display_data,
- fps,
-):
- control_loop(
- robot=robot,
- control_time_s=warmup_time_s,
- display_data=display_data,
- events=events,
- fps=fps,
- teleoperate=enable_teleoperation,
- )
-
-
-def record_episode(
- robot,
- dataset,
- events,
- episode_time_s,
- display_data,
- policy,
- fps,
- single_task,
-):
- control_loop(
- robot=robot,
- control_time_s=episode_time_s,
- display_data=display_data,
- dataset=dataset,
- events=events,
- policy=policy,
- fps=fps,
- teleoperate=policy is None,
- single_task=single_task,
- )
-
-
-@safe_stop_image_writer
-def control_loop(
- robot,
- control_time_s=None,
- teleoperate=False,
- display_data=False,
- dataset: LeRobotDataset | None = None,
- events=None,
- policy: PreTrainedPolicy = None,
- fps: int | None = None,
- single_task: str | None = None,
-):
- # TODO(rcadene): Add option to record logs
- if not robot.is_connected:
- robot.connect()
-
- if events is None:
- events = {"exit_early": False}
-
- if control_time_s is None:
- control_time_s = float("inf")
-
- if teleoperate and policy is not None:
- raise ValueError("When `teleoperate` is True, `policy` should be None.")
-
- if dataset is not None and single_task is None:
- raise ValueError("You need to provide a task as argument in `single_task`.")
-
- if dataset is not None and fps is not None and dataset.fps != fps:
- raise ValueError(f"The dataset fps should be equal to requested fps ({dataset['fps']} != {fps}).")
-
- timestamp = 0
- start_episode_t = time.perf_counter()
-
- # Control starts here; if a policy is given, reset its internal state first.
- if policy is not None:
- policy.reset()
-
- while timestamp < control_time_s:
- start_loop_t = time.perf_counter()
-
- if teleoperate:
- observation, action = robot.teleop_step(record_data=True)
- else:
- observation = robot.capture_observation()
- action = None
- observation["task"] = [single_task]
- observation["robot_type"] = [policy.robot_type] if hasattr(policy, "robot_type") else [""]
- if policy is not None:
- pred_action = predict_action(
- observation, policy, get_safe_torch_device(policy.config.device), policy.config.use_amp
- )
- # Action can eventually be clipped using `max_relative_target`,
- # so action actually sent is saved in the dataset.
- action = robot.send_action(pred_action)
- action = {"action": action}
-
- if dataset is not None:
- observation = {k: v for k, v in observation.items() if k not in ["task", "robot_type"]}
- frame = {**observation, **action, "task": single_task}
- dataset.add_frame(frame)
-
-        # TODO(Steven): This should be more general (handle RemoteRobot instead of checking the name), but it will change soon anyway
- if (display_data and not is_headless()) or (display_data and robot.robot_type.startswith("lekiwi")):
- if action is not None:
- for k, v in action.items():
- for i, vv in enumerate(v):
- rr.log(f"sent_{k}_{i}", rr.Scalar(vv.numpy()))
-
- image_keys = [key for key in observation if "image" in key]
- for key in image_keys:
- rr.log(key, rr.Image(observation[key].numpy()), static=True)
-
- if fps is not None:
- dt_s = time.perf_counter() - start_loop_t
- busy_wait(1 / fps - dt_s)
-
- dt_s = time.perf_counter() - start_loop_t
- log_control_info(robot, dt_s, fps=fps)
-
- timestamp = time.perf_counter() - start_episode_t
- if events["exit_early"]:
- events["exit_early"] = False
- break
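
A note on the timing logic of the loop above: `busy_wait` sleeps only the remainder of each period so the loop stays close to the requested rate. A minimal standalone sketch of the same idea, with a plain `time.sleep` standing in for `busy_wait`:

```python
import time

fps = 30
period = 1 / fps

start_loop_t = time.perf_counter()
# ... one iteration of work (teleop step or observation capture) ...
dt_s = time.perf_counter() - start_loop_t

# Sleep only the time left in this period so the overall loop runs at ~fps.
time.sleep(max(0.0, period - dt_s))
```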
-
-
-def reset_environment(robot, events, reset_time_s, fps):
- # TODO(rcadene): refactor warmup_record and reset_environment
- if has_method(robot, "teleop_safety_stop"):
- robot.teleop_safety_stop()
-
- control_loop(
- robot=robot,
- control_time_s=reset_time_s,
- events=events,
- fps=fps,
- teleoperate=True,
- )
-
-
-def stop_recording(robot, listener, display_data):
- robot.disconnect()
-
- if not is_headless() and listener is not None:
- listener.stop()
-
-
-def sanity_check_dataset_name(repo_id, policy_cfg):
- _, dataset_name = repo_id.split("/")
-    # Either repo_id doesn't start with "eval_" and there is no policy,
-    # or repo_id starts with "eval_" and there is a policy.
-
- # Check if dataset_name starts with "eval_" but policy is missing
- if dataset_name.startswith("eval_") and policy_cfg is None:
- raise ValueError(
-            f"Your dataset name begins with 'eval_' ({dataset_name}), but no policy is provided."
- )
-
- # Check if dataset_name does not start with "eval_" but policy is provided
- if not dataset_name.startswith("eval_") and policy_cfg is not None:
- raise ValueError(
- f"Your dataset name does not begin with 'eval_' ({dataset_name}), but a policy is provided ({policy_cfg.type})."
- )
-
-
-def sanity_check_dataset_robot_compatibility(
- dataset: LeRobotDataset, robot: Robot, fps: int, use_videos: bool
-) -> None:
- fields = [
- ("robot_type", dataset.meta.robot_type, robot.robot_type),
- ("fps", dataset.fps, fps),
- ("features", dataset.features, get_features_from_robot(robot, use_videos)),
- ]
-
- mismatches = []
- for field, dataset_value, present_value in fields:
- diff = DeepDiff(dataset_value, present_value, exclude_regex_paths=[r".*\['info'\]$"])
- if diff:
- mismatches.append(f"{field}: expected {present_value}, got {dataset_value}")
-
- if mismatches:
- raise ValueError(
- "Dataset metadata compatibility check failed with mismatches:\n" + "\n".join(mismatches)
- )
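
The compatibility check above relies on `DeepDiff` returning an empty (falsy) result when two structures match. A small illustration with made-up metadata values, assuming the `deepdiff` package is installed:

```python
from deepdiff import DeepDiff

recorded = {"fps": 30, "robot_type": "so100"}
requested = {"fps": 60, "robot_type": "so100"}

diff = DeepDiff(recorded, requested)
if diff:  # a non-empty DeepDiff result is truthy, which is what triggers the mismatch error above
    print("Mismatch:", diff)
```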
diff --git a/lerobot/common/robot_devices/motors/dynamixel.py b/lerobot/common/robot_devices/motors/dynamixel.py
deleted file mode 100644
index 6096ceb5..00000000
--- a/lerobot/common/robot_devices/motors/dynamixel.py
+++ /dev/null
@@ -1,873 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import enum
-import logging
-import math
-import time
-import traceback
-from copy import deepcopy
-
-import numpy as np
-import tqdm
-
-from lerobot.common.robot_devices.motors.configs import DynamixelMotorsBusConfig
-from lerobot.common.robot_devices.utils import RobotDeviceAlreadyConnectedError, RobotDeviceNotConnectedError
-from lerobot.common.utils.utils import capture_timestamp_utc
-
-PROTOCOL_VERSION = 2.0
-BAUDRATE = 1_000_000
-TIMEOUT_MS = 1000
-
-MAX_ID_RANGE = 252
-
-# The following bounds define the lower and upper joints range (after calibration).
-# For joints in degree (i.e. revolute joints), their nominal range is [-180, 180] degrees
-# which corresponds to a half rotation on the left and half rotation on the right.
-# Some joints might require higher range, so we allow up to [-270, 270] degrees until
-# an error is raised.
-LOWER_BOUND_DEGREE = -270
-UPPER_BOUND_DEGREE = 270
-# For joints in percentage (i.e. joints that move linearly like the prismatic joint of a gripper),
-# their nominal range is [0, 100] %. For instance, for Aloha gripper, 0% is fully
-# closed, and 100% is fully open. To account for slight calibration issue, we allow up to
-# [-10, 110] until an error is raised.
-LOWER_BOUND_LINEAR = -10
-UPPER_BOUND_LINEAR = 110
-
-HALF_TURN_DEGREE = 180
-
-# https://emanual.robotis.com/docs/en/dxl/x/xl330-m077
-# https://emanual.robotis.com/docs/en/dxl/x/xl330-m288
-# https://emanual.robotis.com/docs/en/dxl/x/xl430-w250
-# https://emanual.robotis.com/docs/en/dxl/x/xm430-w350
-# https://emanual.robotis.com/docs/en/dxl/x/xm540-w270
-# https://emanual.robotis.com/docs/en/dxl/x/xc430-w150
-
-# data_name: (address, size_byte)
-X_SERIES_CONTROL_TABLE = {
- "Model_Number": (0, 2),
- "Model_Information": (2, 4),
- "Firmware_Version": (6, 1),
- "ID": (7, 1),
- "Baud_Rate": (8, 1),
- "Return_Delay_Time": (9, 1),
- "Drive_Mode": (10, 1),
- "Operating_Mode": (11, 1),
- "Secondary_ID": (12, 1),
- "Protocol_Type": (13, 1),
- "Homing_Offset": (20, 4),
- "Moving_Threshold": (24, 4),
- "Temperature_Limit": (31, 1),
- "Max_Voltage_Limit": (32, 2),
- "Min_Voltage_Limit": (34, 2),
- "PWM_Limit": (36, 2),
- "Current_Limit": (38, 2),
- "Acceleration_Limit": (40, 4),
- "Velocity_Limit": (44, 4),
- "Max_Position_Limit": (48, 4),
- "Min_Position_Limit": (52, 4),
- "Shutdown": (63, 1),
- "Torque_Enable": (64, 1),
- "LED": (65, 1),
- "Status_Return_Level": (68, 1),
- "Registered_Instruction": (69, 1),
- "Hardware_Error_Status": (70, 1),
- "Velocity_I_Gain": (76, 2),
- "Velocity_P_Gain": (78, 2),
- "Position_D_Gain": (80, 2),
- "Position_I_Gain": (82, 2),
- "Position_P_Gain": (84, 2),
- "Feedforward_2nd_Gain": (88, 2),
- "Feedforward_1st_Gain": (90, 2),
- "Bus_Watchdog": (98, 1),
- "Goal_PWM": (100, 2),
- "Goal_Current": (102, 2),
- "Goal_Velocity": (104, 4),
- "Profile_Acceleration": (108, 4),
- "Profile_Velocity": (112, 4),
- "Goal_Position": (116, 4),
- "Realtime_Tick": (120, 2),
- "Moving": (122, 1),
- "Moving_Status": (123, 1),
- "Present_PWM": (124, 2),
- "Present_Current": (126, 2),
- "Present_Velocity": (128, 4),
- "Present_Position": (132, 4),
- "Velocity_Trajectory": (136, 4),
- "Position_Trajectory": (140, 4),
- "Present_Input_Voltage": (144, 2),
- "Present_Temperature": (146, 1),
-}
-
-X_SERIES_BAUDRATE_TABLE = {
- 0: 9_600,
- 1: 57_600,
- 2: 115_200,
- 3: 1_000_000,
- 4: 2_000_000,
- 5: 3_000_000,
- 6: 4_000_000,
-}
-
-CALIBRATION_REQUIRED = ["Goal_Position", "Present_Position"]
-CONVERT_UINT32_TO_INT32_REQUIRED = ["Goal_Position", "Present_Position"]
-
-MODEL_CONTROL_TABLE = {
- "x_series": X_SERIES_CONTROL_TABLE,
- "xl330-m077": X_SERIES_CONTROL_TABLE,
- "xl330-m288": X_SERIES_CONTROL_TABLE,
- "xl430-w250": X_SERIES_CONTROL_TABLE,
- "xm430-w350": X_SERIES_CONTROL_TABLE,
- "xm540-w270": X_SERIES_CONTROL_TABLE,
- "xc430-w150": X_SERIES_CONTROL_TABLE,
-}
-
-MODEL_RESOLUTION = {
- "x_series": 4096,
- "xl330-m077": 4096,
- "xl330-m288": 4096,
- "xl430-w250": 4096,
- "xm430-w350": 4096,
- "xm540-w270": 4096,
- "xc430-w150": 4096,
-}
-
-MODEL_BAUDRATE_TABLE = {
- "x_series": X_SERIES_BAUDRATE_TABLE,
- "xl330-m077": X_SERIES_BAUDRATE_TABLE,
- "xl330-m288": X_SERIES_BAUDRATE_TABLE,
- "xl430-w250": X_SERIES_BAUDRATE_TABLE,
- "xm430-w350": X_SERIES_BAUDRATE_TABLE,
- "xm540-w270": X_SERIES_BAUDRATE_TABLE,
- "xc430-w150": X_SERIES_BAUDRATE_TABLE,
-}
-
-NUM_READ_RETRY = 10
-NUM_WRITE_RETRY = 10
-
-
-def convert_degrees_to_steps(degrees: float | np.ndarray, models: str | list[str]) -> np.ndarray:
- """This function converts the degree range to the step range for indicating motors rotation.
- It assumes a motor achieves a full rotation by going from -180 degree position to +180.
- The motor resolution (e.g. 4096) corresponds to the number of steps needed to achieve a full rotation.
- """
- resolutions = [MODEL_RESOLUTION[model] for model in models]
- steps = degrees / 180 * np.array(resolutions) / 2
- steps = steps.astype(int)
- return steps
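
As a quick numeric check of the conversion above (illustrative values, recomputing the same formula standalone): with the 4096-step resolution shared by the X-series models listed above, 90 degrees corresponds to a quarter turn.

```python
resolution = 4096  # steps per full turn for the X-series models listed above
degrees = 90.0

# Same formula as convert_degrees_to_steps: degrees / 180 * resolution / 2
steps = int(degrees / 180 * resolution / 2)
print(steps)  # 1024, i.e. a quarter turn
```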
-
-
-def convert_to_bytes(value, bytes, mock=False):
- if mock:
- return value
-
- import dynamixel_sdk as dxl
-
- # Note: No need to convert back into unsigned int, since this byte preprocessing
- # already handles it for us.
- if bytes == 1:
- data = [
- dxl.DXL_LOBYTE(dxl.DXL_LOWORD(value)),
- ]
- elif bytes == 2:
- data = [
- dxl.DXL_LOBYTE(dxl.DXL_LOWORD(value)),
- dxl.DXL_HIBYTE(dxl.DXL_LOWORD(value)),
- ]
- elif bytes == 4:
- data = [
- dxl.DXL_LOBYTE(dxl.DXL_LOWORD(value)),
- dxl.DXL_HIBYTE(dxl.DXL_LOWORD(value)),
- dxl.DXL_LOBYTE(dxl.DXL_HIWORD(value)),
- dxl.DXL_HIBYTE(dxl.DXL_HIWORD(value)),
- ]
- else:
- raise NotImplementedError(
- f"Value of the number of bytes to be sent is expected to be in [1, 2, 4], but "
- f"{bytes} is provided instead."
- )
- return data
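
The `DXL_LOBYTE`/`DXL_HIBYTE`/`DXL_LOWORD`/`DXL_HIWORD` composition above amounts to a little-endian byte split for non-negative values. A dependency-free sketch of the same layout (illustrative, not the SDK call):

```python
def split_little_endian(value: int, num_bytes: int) -> list[int]:
    # Same byte order as the LOBYTE/HIBYTE/LOWORD/HIWORD composition above (non-negative values).
    return list(value.to_bytes(num_bytes, byteorder="little"))

print(split_little_endian(2048, 4))  # [0, 8, 0, 0] -> 0x0800 sent low byte first
```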
-
-
-def get_group_sync_key(data_name, motor_names):
- group_key = f"{data_name}_" + "_".join(motor_names)
- return group_key
-
-
-def get_result_name(fn_name, data_name, motor_names):
- group_key = get_group_sync_key(data_name, motor_names)
- rslt_name = f"{fn_name}_{group_key}"
- return rslt_name
-
-
-def get_queue_name(fn_name, data_name, motor_names):
- group_key = get_group_sync_key(data_name, motor_names)
- queue_name = f"{fn_name}_{group_key}"
- return queue_name
-
-
-def get_log_name(var_name, fn_name, data_name, motor_names):
- group_key = get_group_sync_key(data_name, motor_names)
- log_name = f"{var_name}_{fn_name}_{group_key}"
- return log_name
-
-
-def assert_same_address(model_ctrl_table, motor_models, data_name):
- all_addr = []
- all_bytes = []
- for model in motor_models:
- addr, bytes = model_ctrl_table[model][data_name]
- all_addr.append(addr)
- all_bytes.append(bytes)
-
- if len(set(all_addr)) != 1:
- raise NotImplementedError(
- f"At least two motor models use a different address for `data_name`='{data_name}' ({list(zip(motor_models, all_addr, strict=False))}). Contact a LeRobot maintainer."
- )
-
- if len(set(all_bytes)) != 1:
- raise NotImplementedError(
- f"At least two motor models use a different bytes representation for `data_name`='{data_name}' ({list(zip(motor_models, all_bytes, strict=False))}). Contact a LeRobot maintainer."
- )
-
-
-class TorqueMode(enum.Enum):
- ENABLED = 1
- DISABLED = 0
-
-
-class DriveMode(enum.Enum):
- NON_INVERTED = 0
- INVERTED = 1
-
-
-class CalibrationMode(enum.Enum):
- # Joints with rotational motions are expressed in degrees in nominal range of [-180, 180]
- DEGREE = 0
- # Joints with linear motions (like gripper of Aloha) are expressed in nominal range of [0, 100]
- LINEAR = 1
-
-
-class JointOutOfRangeError(Exception):
- def __init__(self, message="Joint is out of range"):
- self.message = message
- super().__init__(self.message)
-
-
-class DynamixelMotorsBus:
- """
-    The DynamixelMotorsBus class allows efficient reading from and writing to the motors attached to the bus. It relies on
-    the Python Dynamixel SDK to communicate with the motors. For more info, see the [Dynamixel SDK Documentation](https://emanual.robotis.com/docs/en/software/dynamixel/dynamixel_sdk/sample_code/python_read_write_protocol_2_0/#python-read-write-protocol-20).
-
- A DynamixelMotorsBus instance requires a port (e.g. `DynamixelMotorsBus(port="/dev/tty.usbmodem575E0031751"`)).
- To find the port, you can run our utility script:
- ```bash
- python lerobot/scripts/find_motors_bus_port.py
- >>> Finding all available ports for the MotorBus.
- >>> ['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
- >>> Remove the usb cable from your DynamixelMotorsBus and press Enter when done.
- >>> The port of this DynamixelMotorsBus is /dev/tty.usbmodem575E0031751.
- >>> Reconnect the usb cable.
- ```
-
- Example of usage for 1 motor connected to the bus:
- ```python
- motor_name = "gripper"
- motor_index = 6
- motor_model = "xl330-m288"
-
- config = DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem575E0031751",
- motors={motor_name: (motor_index, motor_model)},
- )
- motors_bus = DynamixelMotorsBus(config)
- motors_bus.connect()
-
- position = motors_bus.read("Present_Position")
-
-    # move by a few motor steps as an example
- few_steps = 30
- motors_bus.write("Goal_Position", position + few_steps)
-
- # when done, consider disconnecting
- motors_bus.disconnect()
- ```
- """
-
- def __init__(
- self,
- config: DynamixelMotorsBusConfig,
- ):
- self.port = config.port
- self.motors = config.motors
- self.mock = config.mock
-
- self.model_ctrl_table = deepcopy(MODEL_CONTROL_TABLE)
- self.model_resolution = deepcopy(MODEL_RESOLUTION)
-
- self.port_handler = None
- self.packet_handler = None
- self.calibration = None
- self.is_connected = False
- self.group_readers = {}
- self.group_writers = {}
- self.logs = {}
-
- def connect(self):
- if self.is_connected:
- raise RobotDeviceAlreadyConnectedError(
- f"DynamixelMotorsBus({self.port}) is already connected. Do not call `motors_bus.connect()` twice."
- )
-
- if self.mock:
- import tests.motors.mock_dynamixel_sdk as dxl
- else:
- import dynamixel_sdk as dxl
-
- self.port_handler = dxl.PortHandler(self.port)
- self.packet_handler = dxl.PacketHandler(PROTOCOL_VERSION)
-
- try:
- if not self.port_handler.openPort():
- raise OSError(f"Failed to open port '{self.port}'.")
- except Exception:
- traceback.print_exc()
- print(
- "\nTry running `python lerobot/scripts/find_motors_bus_port.py` to make sure you are using the correct port.\n"
- )
- raise
-
-        # Allow reading and writing
- self.is_connected = True
-
- self.port_handler.setPacketTimeoutMillis(TIMEOUT_MS)
-
- def reconnect(self):
- if self.mock:
- import tests.motors.mock_dynamixel_sdk as dxl
- else:
- import dynamixel_sdk as dxl
-
- self.port_handler = dxl.PortHandler(self.port)
- self.packet_handler = dxl.PacketHandler(PROTOCOL_VERSION)
-
- if not self.port_handler.openPort():
- raise OSError(f"Failed to open port '{self.port}'.")
-
- self.is_connected = True
-
- def are_motors_configured(self):
- # Only check the motor indices and not baudrate, since if the motor baudrates are incorrect,
- # a ConnectionError will be raised anyway.
- try:
- return (self.motor_indices == self.read("ID")).all()
- except ConnectionError as e:
- print(e)
- return False
-
- def find_motor_indices(self, possible_ids=None, num_retry=2):
- if possible_ids is None:
- possible_ids = range(MAX_ID_RANGE)
-
- indices = []
- for idx in tqdm.tqdm(possible_ids):
- try:
- present_idx = self.read_with_motor_ids(self.motor_models, [idx], "ID", num_retry=num_retry)[0]
- except ConnectionError:
- continue
-
- if idx != present_idx:
- # sanity check
- raise OSError(
- "Motor index used to communicate through the bus is not the same as the one present in the motor memory. The motor memory might be damaged."
- )
- indices.append(idx)
-
- return indices
-
- def set_bus_baudrate(self, baudrate):
- present_bus_baudrate = self.port_handler.getBaudRate()
- if present_bus_baudrate != baudrate:
- print(f"Setting bus baud rate to {baudrate}. Previously {present_bus_baudrate}.")
- self.port_handler.setBaudRate(baudrate)
-
- if self.port_handler.getBaudRate() != baudrate:
- raise OSError("Failed to write bus baud rate.")
-
- @property
- def motor_names(self) -> list[str]:
- return list(self.motors.keys())
-
- @property
- def motor_models(self) -> list[str]:
- return [model for _, model in self.motors.values()]
-
- @property
- def motor_indices(self) -> list[int]:
- return [idx for idx, _ in self.motors.values()]
-
- def set_calibration(self, calibration: dict[str, list]):
- self.calibration = calibration
-
- def apply_calibration_autocorrect(self, values: np.ndarray | list, motor_names: list[str] | None):
-        """This function applies the calibration, automatically detects out-of-range motor values, and attempts to correct them.
-
- For more info, see docstring of `apply_calibration` and `autocorrect_calibration`.
- """
- try:
- values = self.apply_calibration(values, motor_names)
- except JointOutOfRangeError as e:
- print(e)
- self.autocorrect_calibration(values, motor_names)
- values = self.apply_calibration(values, motor_names)
- return values
-
- def apply_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
- """Convert from unsigned int32 joint position range [0, 2**32[ to the universal float32 nominal degree range ]-180.0, 180.0[ with
- a "zero position" at 0 degree.
-
-        Note: We say "nominal degree range" since the motors can take values outside this range. For instance, 190 degrees if the motor
-        rotates more than half a turn from the zero position. However, most motors can't rotate more than 180 degrees and will stay in this range.
-
-        Joint values are originally in [0, 2**32[ (unsigned int32). Each motor is expected to complete a full rotation
-        when given a goal position that is + or - its resolution. For instance, dynamixel xl330-m077 motors have a resolution of 4096, and
- at any position in their original range, let's say the position 56734, they complete a full rotation clockwise by moving to 60830,
- or anticlockwise by moving to 52638. The position in the original range is arbitrary and might change a lot between each motor.
- To harmonize between motors of the same model, different robots, or even models of different brands, we propose to work
- in the centered nominal degree range ]-180, 180[.
- """
- if motor_names is None:
- motor_names = self.motor_names
-
- # Convert from unsigned int32 original range [0, 2**32] to signed float32 range
- values = values.astype(np.float32)
-
- for i, name in enumerate(motor_names):
- calib_idx = self.calibration["motor_names"].index(name)
- calib_mode = self.calibration["calib_mode"][calib_idx]
-
- if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
- drive_mode = self.calibration["drive_mode"][calib_idx]
- homing_offset = self.calibration["homing_offset"][calib_idx]
- _, model = self.motors[name]
- resolution = self.model_resolution[model]
-
- # Update direction of rotation of the motor to match between leader and follower.
- # In fact, the motor of the leader for a given joint can be assembled in an
- # opposite direction in term of rotation than the motor of the follower on the same joint.
- if drive_mode:
- values[i] *= -1
-
- # Convert from range [-2**31, 2**31] to
- # nominal range [-resolution//2, resolution//2] (e.g. [-2048, 2048])
- values[i] += homing_offset
-
- # Convert from range [-resolution//2, resolution//2] to
- # universal float32 centered degree range [-180, 180]
- # (e.g. 2048 / (4096 // 2) * 180 = 180)
- values[i] = values[i] / (resolution // 2) * HALF_TURN_DEGREE
-
- if (values[i] < LOWER_BOUND_DEGREE) or (values[i] > UPPER_BOUND_DEGREE):
- raise JointOutOfRangeError(
- f"Wrong motor position range detected for {name}. "
- f"Expected to be in nominal range of [-{HALF_TURN_DEGREE}, {HALF_TURN_DEGREE}] degrees (a full rotation), "
- f"with a maximum range of [{LOWER_BOUND_DEGREE}, {UPPER_BOUND_DEGREE}] degrees to account for joints that can rotate a bit more, "
- f"but present value is {values[i]} degree. "
- "This might be due to a cable connection issue creating an artificial 360 degrees jump in motor values. "
- "You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
- )
-
- elif CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
- start_pos = self.calibration["start_pos"][calib_idx]
- end_pos = self.calibration["end_pos"][calib_idx]
-
- # Rescale the present position to a nominal range [0, 100] %,
- # useful for joints with linear motions like Aloha gripper
- values[i] = (values[i] - start_pos) / (end_pos - start_pos) * 100
-
- if (values[i] < LOWER_BOUND_LINEAR) or (values[i] > UPPER_BOUND_LINEAR):
- raise JointOutOfRangeError(
- f"Wrong motor position range detected for {name}. "
- f"Expected to be in nominal range of [0, 100] % (a full linear translation), "
- f"with a maximum range of [{LOWER_BOUND_LINEAR}, {UPPER_BOUND_LINEAR}] % to account for some imprecision during calibration, "
- f"but present value is {values[i]} %. "
- "This might be due to a cable connection issue creating an artificial jump in motor values. "
- "You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
- )
-
- return values
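
To make the DEGREE conversion above concrete, a worked example with illustrative numbers: with `resolution = 4096` and `homing_offset = -2048`, a raw `Present_Position` reading of 2560 maps to +45 degrees.

```python
resolution = 4096
homing_offset = -2048  # illustrative calibration value
raw = 2560             # illustrative raw Present_Position reading

# Same two steps as apply_calibration in DEGREE mode:
centered = raw + homing_offset                # 512, inside [-2048, 2048]
degrees = centered / (resolution // 2) * 180  # 45.0 degrees
print(degrees)
```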
-
- def autocorrect_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
-        """This function automatically detects issues with motor values after calibration, and corrects for these issues.
-
- Some motors might have values outside of expected maximum bounds after calibration.
- For instance, for a joint in degree, its value can be outside [-270, 270] degrees, which is totally unexpected given
- a nominal range of [-180, 180] degrees, which represents half a turn to the left or right starting from zero position.
-
- Known issues:
-        #1: Motor value randomly shifts by a full turn, caused by hardware/connection errors.
-        #2: Motor internal homing offset is shifted by a full turn, caused by using default calibration (e.g. Aloha).
-        #3: Motor internal homing offset is shifted by less or more than a full turn, caused by using default calibration
- or by human error during manual calibration.
-
- Issues #1 and #2 can be solved by shifting the calibration homing offset by a full turn.
- Issue #3 will be visually detected by user and potentially captured by the safety feature `max_relative_target`,
- that will slow down the motor, raise an error asking to recalibrate. Manual recalibrating will solve the issue.
-
- Note: A full turn corresponds to 360 degrees but also to 4096 steps for a motor resolution of 4096.
- """
- if motor_names is None:
- motor_names = self.motor_names
-
- # Convert from unsigned int32 original range [0, 2**32] to signed float32 range
- values = values.astype(np.float32)
-
- for i, name in enumerate(motor_names):
- calib_idx = self.calibration["motor_names"].index(name)
- calib_mode = self.calibration["calib_mode"][calib_idx]
-
- if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
- drive_mode = self.calibration["drive_mode"][calib_idx]
- homing_offset = self.calibration["homing_offset"][calib_idx]
- _, model = self.motors[name]
- resolution = self.model_resolution[model]
-
- # Update direction of rotation of the motor to match between leader and follower.
- # In fact, the motor of the leader for a given joint can be assembled in an
- # opposite direction in term of rotation than the motor of the follower on the same joint.
- if drive_mode:
- values[i] *= -1
-
- # Convert from initial range to range [-180, 180] degrees
- calib_val = (values[i] + homing_offset) / (resolution // 2) * HALF_TURN_DEGREE
- in_range = (calib_val > LOWER_BOUND_DEGREE) and (calib_val < UPPER_BOUND_DEGREE)
-
- # Solve this inequality to find the factor to shift the range into [-180, 180] degrees
- # values[i] = (values[i] + homing_offset + resolution * factor) / (resolution // 2) * HALF_TURN_DEGREE
- # - HALF_TURN_DEGREE <= (values[i] + homing_offset + resolution * factor) / (resolution // 2) * HALF_TURN_DEGREE <= HALF_TURN_DEGREE
- # (- (resolution // 2) - values[i] - homing_offset) / resolution <= factor <= ((resolution // 2) - values[i] - homing_offset) / resolution
- low_factor = (-(resolution // 2) - values[i] - homing_offset) / resolution
- upp_factor = ((resolution // 2) - values[i] - homing_offset) / resolution
-
- elif CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
- start_pos = self.calibration["start_pos"][calib_idx]
- end_pos = self.calibration["end_pos"][calib_idx]
-
- # Convert from initial range to range [0, 100] in %
- calib_val = (values[i] - start_pos) / (end_pos - start_pos) * 100
- in_range = (calib_val > LOWER_BOUND_LINEAR) and (calib_val < UPPER_BOUND_LINEAR)
-
- # Solve this inequality to find the factor to shift the range into [0, 100] %
- # values[i] = (values[i] - start_pos + resolution * factor) / (end_pos + resolution * factor - start_pos - resolution * factor) * 100
- # values[i] = (values[i] - start_pos + resolution * factor) / (end_pos - start_pos) * 100
- # 0 <= (values[i] - start_pos + resolution * factor) / (end_pos - start_pos) * 100 <= 100
- # (start_pos - values[i]) / resolution <= factor <= (end_pos - values[i]) / resolution
- low_factor = (start_pos - values[i]) / resolution
- upp_factor = (end_pos - values[i]) / resolution
-
- if not in_range:
- # Get first integer between the two bounds
- if low_factor < upp_factor:
- factor = math.ceil(low_factor)
-
- if factor > upp_factor:
- raise ValueError(f"No integer found between bounds [{low_factor=}, {upp_factor=}]")
- else:
- factor = math.ceil(upp_factor)
-
- if factor > low_factor:
- raise ValueError(f"No integer found between bounds [{low_factor=}, {upp_factor=}]")
-
- if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
- out_of_range_str = f"{LOWER_BOUND_DEGREE} < {calib_val} < {UPPER_BOUND_DEGREE} degrees"
- in_range_str = f"{LOWER_BOUND_DEGREE} < {calib_val} < {UPPER_BOUND_DEGREE} degrees"
- elif CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
- out_of_range_str = f"{LOWER_BOUND_LINEAR} < {calib_val} < {UPPER_BOUND_LINEAR} %"
- in_range_str = f"{LOWER_BOUND_LINEAR} < {calib_val} < {UPPER_BOUND_LINEAR} %"
-
- logging.warning(
- f"Auto-correct calibration of motor '{name}' by shifting value by {abs(factor)} full turns, "
- f"from '{out_of_range_str}' to '{in_range_str}'."
- )
-
- # A full turn corresponds to 360 degrees but also to 4096 steps for a motor resolution of 4096.
- self.calibration["homing_offset"][calib_idx] += resolution * factor
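
A worked example of the shift computed above, continuing the illustrative numbers from the `apply_calibration` example: a reading that is one spurious full turn too high lands at 405 degrees, and shifting the homing offset by one turn brings it back to 45 degrees.

```python
import math

resolution = 4096
homing_offset = -2048
raw = 2560 + 4096  # same joint as before, shifted by one spurious full turn

calib_val = (raw + homing_offset) / (resolution // 2) * 180  # 405 degrees, outside [-270, 270]

# Bounds on the integer number of turns to add back, as in autocorrect_calibration:
low_factor = (-(resolution // 2) - raw - homing_offset) / resolution  # -1.625
upp_factor = ((resolution // 2) - raw - homing_offset) / resolution   # -0.625
factor = math.ceil(low_factor)                                        # -1

homing_offset += resolution * factor                                  # -6144
print((raw + homing_offset) / (resolution // 2) * 180)                # back to 45.0 degrees
```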
-
- def revert_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
- """Inverse of `apply_calibration`."""
- if motor_names is None:
- motor_names = self.motor_names
-
- for i, name in enumerate(motor_names):
- calib_idx = self.calibration["motor_names"].index(name)
- calib_mode = self.calibration["calib_mode"][calib_idx]
-
- if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
- drive_mode = self.calibration["drive_mode"][calib_idx]
- homing_offset = self.calibration["homing_offset"][calib_idx]
- _, model = self.motors[name]
- resolution = self.model_resolution[model]
-
- # Convert from nominal 0-centered degree range [-180, 180] to
- # 0-centered resolution range (e.g. [-2048, 2048] for resolution=4096)
- values[i] = values[i] / HALF_TURN_DEGREE * (resolution // 2)
-
- # Subtract the homing offsets to come back to actual motor range of values
- # which can be arbitrary.
- values[i] -= homing_offset
-
- # Remove drive mode, which is the rotation direction of the motor, to come back to
- # actual motor rotation direction which can be arbitrary.
- if drive_mode:
- values[i] *= -1
-
- elif CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
- start_pos = self.calibration["start_pos"][calib_idx]
- end_pos = self.calibration["end_pos"][calib_idx]
-
-                # Convert from nominal linear range of [0, 100] % to
- # actual motor range of values which can be arbitrary.
- values[i] = values[i] / 100 * (end_pos - start_pos) + start_pos
-
- values = np.round(values).astype(np.int32)
- return values
-
- def read_with_motor_ids(self, motor_models, motor_ids, data_name, num_retry=NUM_READ_RETRY):
- if self.mock:
- import tests.motors.mock_dynamixel_sdk as dxl
- else:
- import dynamixel_sdk as dxl
-
- return_list = True
- if not isinstance(motor_ids, list):
- return_list = False
- motor_ids = [motor_ids]
-
- assert_same_address(self.model_ctrl_table, self.motor_models, data_name)
- addr, bytes = self.model_ctrl_table[motor_models[0]][data_name]
- group = dxl.GroupSyncRead(self.port_handler, self.packet_handler, addr, bytes)
- for idx in motor_ids:
- group.addParam(idx)
-
- for _ in range(num_retry):
- comm = group.txRxPacket()
- if comm == dxl.COMM_SUCCESS:
- break
-
- if comm != dxl.COMM_SUCCESS:
- raise ConnectionError(
- f"Read failed due to communication error on port {self.port_handler.port_name} for indices {motor_ids}: "
- f"{self.packet_handler.getTxRxResult(comm)}"
- )
-
- values = []
- for idx in motor_ids:
- value = group.getData(idx, addr, bytes)
- values.append(value)
-
- if return_list:
- return values
- else:
- return values[0]
-
- def read(self, data_name, motor_names: str | list[str] | None = None):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"DynamixelMotorsBus({self.port}) is not connected. You need to run `motors_bus.connect()`."
- )
-
- start_time = time.perf_counter()
-
- if self.mock:
- import tests.motors.mock_dynamixel_sdk as dxl
- else:
- import dynamixel_sdk as dxl
-
- if motor_names is None:
- motor_names = self.motor_names
-
- if isinstance(motor_names, str):
- motor_names = [motor_names]
-
- motor_ids = []
- models = []
- for name in motor_names:
- motor_idx, model = self.motors[name]
- motor_ids.append(motor_idx)
- models.append(model)
-
- assert_same_address(self.model_ctrl_table, models, data_name)
- addr, bytes = self.model_ctrl_table[model][data_name]
- group_key = get_group_sync_key(data_name, motor_names)
-
-        if group_key not in self.group_readers:
- # create new group reader
- self.group_readers[group_key] = dxl.GroupSyncRead(
- self.port_handler, self.packet_handler, addr, bytes
- )
- for idx in motor_ids:
- self.group_readers[group_key].addParam(idx)
-
- for _ in range(NUM_READ_RETRY):
- comm = self.group_readers[group_key].txRxPacket()
- if comm == dxl.COMM_SUCCESS:
- break
-
- if comm != dxl.COMM_SUCCESS:
- raise ConnectionError(
- f"Read failed due to communication error on port {self.port} for group_key {group_key}: "
- f"{self.packet_handler.getTxRxResult(comm)}"
- )
-
- values = []
- for idx in motor_ids:
- value = self.group_readers[group_key].getData(idx, addr, bytes)
- values.append(value)
-
- values = np.array(values)
-
- # Convert to signed int to use range [-2048, 2048] for our motor positions.
- if data_name in CONVERT_UINT32_TO_INT32_REQUIRED:
- values = values.astype(np.int32)
-
- if data_name in CALIBRATION_REQUIRED and self.calibration is not None:
- values = self.apply_calibration_autocorrect(values, motor_names)
-
- # log the number of seconds it took to read the data from the motors
- delta_ts_name = get_log_name("delta_timestamp_s", "read", data_name, motor_names)
- self.logs[delta_ts_name] = time.perf_counter() - start_time
-
- # log the utc time at which the data was received
- ts_utc_name = get_log_name("timestamp_utc", "read", data_name, motor_names)
- self.logs[ts_utc_name] = capture_timestamp_utc()
-
- return values
-
- def write_with_motor_ids(self, motor_models, motor_ids, data_name, values, num_retry=NUM_WRITE_RETRY):
- if self.mock:
- import tests.motors.mock_dynamixel_sdk as dxl
- else:
- import dynamixel_sdk as dxl
-
- if not isinstance(motor_ids, list):
- motor_ids = [motor_ids]
- if not isinstance(values, list):
- values = [values]
-
- assert_same_address(self.model_ctrl_table, motor_models, data_name)
- addr, bytes = self.model_ctrl_table[motor_models[0]][data_name]
- group = dxl.GroupSyncWrite(self.port_handler, self.packet_handler, addr, bytes)
- for idx, value in zip(motor_ids, values, strict=True):
- data = convert_to_bytes(value, bytes, self.mock)
- group.addParam(idx, data)
-
- for _ in range(num_retry):
- comm = group.txPacket()
- if comm == dxl.COMM_SUCCESS:
- break
-
- if comm != dxl.COMM_SUCCESS:
- raise ConnectionError(
- f"Write failed due to communication error on port {self.port_handler.port_name} for indices {motor_ids}: "
- f"{self.packet_handler.getTxRxResult(comm)}"
- )
-
- def write(self, data_name, values: int | float | np.ndarray, motor_names: str | list[str] | None = None):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"DynamixelMotorsBus({self.port}) is not connected. You need to run `motors_bus.connect()`."
- )
-
- start_time = time.perf_counter()
-
- if self.mock:
- import tests.motors.mock_dynamixel_sdk as dxl
- else:
- import dynamixel_sdk as dxl
-
- if motor_names is None:
- motor_names = self.motor_names
-
- if isinstance(motor_names, str):
- motor_names = [motor_names]
-
- if isinstance(values, (int, float, np.integer)):
- values = [int(values)] * len(motor_names)
-
- values = np.array(values)
-
- motor_ids = []
- models = []
- for name in motor_names:
- motor_idx, model = self.motors[name]
- motor_ids.append(motor_idx)
- models.append(model)
-
- if data_name in CALIBRATION_REQUIRED and self.calibration is not None:
- values = self.revert_calibration(values, motor_names)
-
- values = values.tolist()
-
- assert_same_address(self.model_ctrl_table, models, data_name)
- addr, bytes = self.model_ctrl_table[model][data_name]
- group_key = get_group_sync_key(data_name, motor_names)
-
-        init_group = group_key not in self.group_writers
- if init_group:
- self.group_writers[group_key] = dxl.GroupSyncWrite(
- self.port_handler, self.packet_handler, addr, bytes
- )
-
- for idx, value in zip(motor_ids, values, strict=True):
- data = convert_to_bytes(value, bytes, self.mock)
- if init_group:
- self.group_writers[group_key].addParam(idx, data)
- else:
- self.group_writers[group_key].changeParam(idx, data)
-
- comm = self.group_writers[group_key].txPacket()
- if comm != dxl.COMM_SUCCESS:
- raise ConnectionError(
- f"Write failed due to communication error on port {self.port} for group_key {group_key}: "
- f"{self.packet_handler.getTxRxResult(comm)}"
- )
-
- # log the number of seconds it took to write the data to the motors
- delta_ts_name = get_log_name("delta_timestamp_s", "write", data_name, motor_names)
- self.logs[delta_ts_name] = time.perf_counter() - start_time
-
- # TODO(rcadene): should we log the time before sending the write command?
- # log the utc time when the write has been completed
- ts_utc_name = get_log_name("timestamp_utc", "write", data_name, motor_names)
- self.logs[ts_utc_name] = capture_timestamp_utc()
-
- def disconnect(self):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"DynamixelMotorsBus({self.port}) is not connected. Try running `motors_bus.connect()` first."
- )
-
- if self.port_handler is not None:
- self.port_handler.closePort()
- self.port_handler = None
-
- self.packet_handler = None
- self.group_readers = {}
- self.group_writers = {}
- self.is_connected = False
-
- def __del__(self):
- if getattr(self, "is_connected", False):
- self.disconnect()
diff --git a/lerobot/common/robot_devices/motors/feetech.py b/lerobot/common/robot_devices/motors/feetech.py
deleted file mode 100644
index 64c7f413..00000000
--- a/lerobot/common/robot_devices/motors/feetech.py
+++ /dev/null
@@ -1,898 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import enum
-import logging
-import math
-import time
-import traceback
-from copy import deepcopy
-
-import numpy as np
-import tqdm
-
-from lerobot.common.robot_devices.motors.configs import FeetechMotorsBusConfig
-from lerobot.common.robot_devices.utils import RobotDeviceAlreadyConnectedError, RobotDeviceNotConnectedError
-from lerobot.common.utils.utils import capture_timestamp_utc
-
-PROTOCOL_VERSION = 0
-BAUDRATE = 1_000_000
-TIMEOUT_MS = 1000
-
-MAX_ID_RANGE = 252
-
-# The following bounds define the lower and upper joints range (after calibration).
-# For joints in degree (i.e. revolute joints), their nominal range is [-180, 180] degrees
-# which corresponds to a half rotation on the left and half rotation on the right.
-# Some joints might require higher range, so we allow up to [-270, 270] degrees until
-# an error is raised.
-LOWER_BOUND_DEGREE = -270
-UPPER_BOUND_DEGREE = 270
-# For joints in percentage (i.e. joints that move linearly like the prismatic joint of a gripper),
-# their nominal range is [0, 100] %. For instance, for Aloha gripper, 0% is fully
-# closed, and 100% is fully open. To account for slight calibration issue, we allow up to
-# [-10, 110] until an error is raised.
-LOWER_BOUND_LINEAR = -10
-UPPER_BOUND_LINEAR = 110
-
-HALF_TURN_DEGREE = 180
-
-
-# See this link for STS3215 Memory Table:
-# https://docs.google.com/spreadsheets/d/1GVs7W1VS1PqdhA1nW-abeyAHhTUxKUdR/edit?usp=sharing&ouid=116566590112741600240&rtpof=true&sd=true
-# data_name: (address, size_byte)
-SCS_SERIES_CONTROL_TABLE = {
- "Model": (3, 2),
- "ID": (5, 1),
- "Baud_Rate": (6, 1),
- "Return_Delay": (7, 1),
- "Response_Status_Level": (8, 1),
- "Min_Angle_Limit": (9, 2),
- "Max_Angle_Limit": (11, 2),
- "Max_Temperature_Limit": (13, 1),
- "Max_Voltage_Limit": (14, 1),
- "Min_Voltage_Limit": (15, 1),
- "Max_Torque_Limit": (16, 2),
- "Phase": (18, 1),
- "Unloading_Condition": (19, 1),
- "LED_Alarm_Condition": (20, 1),
- "P_Coefficient": (21, 1),
- "D_Coefficient": (22, 1),
- "I_Coefficient": (23, 1),
- "Minimum_Startup_Force": (24, 2),
- "CW_Dead_Zone": (26, 1),
- "CCW_Dead_Zone": (27, 1),
- "Protection_Current": (28, 2),
- "Angular_Resolution": (30, 1),
- "Offset": (31, 2),
- "Mode": (33, 1),
- "Protective_Torque": (34, 1),
- "Protection_Time": (35, 1),
- "Overload_Torque": (36, 1),
- "Speed_closed_loop_P_proportional_coefficient": (37, 1),
- "Over_Current_Protection_Time": (38, 1),
- "Velocity_closed_loop_I_integral_coefficient": (39, 1),
- "Torque_Enable": (40, 1),
- "Acceleration": (41, 1),
- "Goal_Position": (42, 2),
- "Goal_Time": (44, 2),
- "Goal_Speed": (46, 2),
- "Torque_Limit": (48, 2),
- "Lock": (55, 1),
- "Present_Position": (56, 2),
- "Present_Speed": (58, 2),
- "Present_Load": (60, 2),
- "Present_Voltage": (62, 1),
- "Present_Temperature": (63, 1),
- "Status": (65, 1),
- "Moving": (66, 1),
- "Present_Current": (69, 2),
- # Not in the Memory Table
- "Maximum_Acceleration": (85, 2),
-}
-
-SCS_SERIES_BAUDRATE_TABLE = {
- 0: 1_000_000,
- 1: 500_000,
- 2: 250_000,
- 3: 128_000,
- 4: 115_200,
- 5: 57_600,
- 6: 38_400,
- 7: 19_200,
-}
-
-CALIBRATION_REQUIRED = ["Goal_Position", "Present_Position"]
-CONVERT_UINT32_TO_INT32_REQUIRED = ["Goal_Position", "Present_Position"]
-
-
-MODEL_CONTROL_TABLE = {
- "scs_series": SCS_SERIES_CONTROL_TABLE,
- "sts3215": SCS_SERIES_CONTROL_TABLE,
-}
-
-MODEL_RESOLUTION = {
- "scs_series": 4096,
- "sts3215": 4096,
-}
-
-MODEL_BAUDRATE_TABLE = {
- "scs_series": SCS_SERIES_BAUDRATE_TABLE,
- "sts3215": SCS_SERIES_BAUDRATE_TABLE,
-}
-
-# High number of retries is needed for feetech compared to dynamixel motors.
-NUM_READ_RETRY = 20
-NUM_WRITE_RETRY = 20
-
-
-def convert_degrees_to_steps(degrees: float | np.ndarray, models: str | list[str]) -> np.ndarray:
- """This function converts the degree range to the step range for indicating motors rotation.
- It assumes a motor achieves a full rotation by going from -180 degree position to +180.
- The motor resolution (e.g. 4096) corresponds to the number of steps needed to achieve a full rotation.
- """
- resolutions = [MODEL_RESOLUTION[model] for model in models]
- steps = degrees / 180 * np.array(resolutions) / 2
- steps = steps.astype(int)
- return steps
-
-
-def convert_to_bytes(value, bytes, mock=False):
- if mock:
- return value
-
- import scservo_sdk as scs
-
- # Note: No need to convert back into unsigned int, since this byte preprocessing
- # already handles it for us.
- if bytes == 1:
- data = [
- scs.SCS_LOBYTE(scs.SCS_LOWORD(value)),
- ]
- elif bytes == 2:
- data = [
- scs.SCS_LOBYTE(scs.SCS_LOWORD(value)),
- scs.SCS_HIBYTE(scs.SCS_LOWORD(value)),
- ]
- elif bytes == 4:
- data = [
- scs.SCS_LOBYTE(scs.SCS_LOWORD(value)),
- scs.SCS_HIBYTE(scs.SCS_LOWORD(value)),
- scs.SCS_LOBYTE(scs.SCS_HIWORD(value)),
- scs.SCS_HIBYTE(scs.SCS_HIWORD(value)),
- ]
- else:
- raise NotImplementedError(
- f"Value of the number of bytes to be sent is expected to be in [1, 2, 4], but "
- f"{bytes} is provided instead."
- )
- return data
-
-
-def get_group_sync_key(data_name, motor_names):
- group_key = f"{data_name}_" + "_".join(motor_names)
- return group_key
-
-
-def get_result_name(fn_name, data_name, motor_names):
- group_key = get_group_sync_key(data_name, motor_names)
- rslt_name = f"{fn_name}_{group_key}"
- return rslt_name
-
-
-def get_queue_name(fn_name, data_name, motor_names):
- group_key = get_group_sync_key(data_name, motor_names)
- queue_name = f"{fn_name}_{group_key}"
- return queue_name
-
-
-def get_log_name(var_name, fn_name, data_name, motor_names):
- group_key = get_group_sync_key(data_name, motor_names)
- log_name = f"{var_name}_{fn_name}_{group_key}"
- return log_name
-
-
-def assert_same_address(model_ctrl_table, motor_models, data_name):
- all_addr = []
- all_bytes = []
- for model in motor_models:
- addr, bytes = model_ctrl_table[model][data_name]
- all_addr.append(addr)
- all_bytes.append(bytes)
-
- if len(set(all_addr)) != 1:
- raise NotImplementedError(
- f"At least two motor models use a different address for `data_name`='{data_name}' ({list(zip(motor_models, all_addr, strict=False))}). Contact a LeRobot maintainer."
- )
-
- if len(set(all_bytes)) != 1:
- raise NotImplementedError(
- f"At least two motor models use a different bytes representation for `data_name`='{data_name}' ({list(zip(motor_models, all_bytes, strict=False))}). Contact a LeRobot maintainer."
- )
-
-
-class TorqueMode(enum.Enum):
- ENABLED = 1
- DISABLED = 0
-
-
-class DriveMode(enum.Enum):
- NON_INVERTED = 0
- INVERTED = 1
-
-
-class CalibrationMode(enum.Enum):
- # Joints with rotational motions are expressed in degrees in nominal range of [-180, 180]
- DEGREE = 0
- # Joints with linear motions (like gripper of Aloha) are expressed in nominal range of [0, 100]
- LINEAR = 1
-
-
-class JointOutOfRangeError(Exception):
- def __init__(self, message="Joint is out of range"):
- self.message = message
- super().__init__(self.message)
-
-
-class FeetechMotorsBus:
- """
-    The FeetechMotorsBus class allows efficient reading from and writing to the motors attached to the bus. It relies on
-    the Python Feetech SDK to communicate with the motors. For more info, see the [feetech SDK Documentation](https://emanual.robotis.com/docs/en/software/feetech/feetech_sdk/sample_code/python_read_write_protocol_2_0/#python-read-write-protocol-20).
-
- A FeetechMotorsBus instance requires a port (e.g. `FeetechMotorsBus(port="/dev/tty.usbmodem575E0031751"`)).
- To find the port, you can run our utility script:
- ```bash
- python lerobot/scripts/find_motors_bus_port.py
- >>> Finding all available ports for the MotorsBus.
- >>> ['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
- >>> Remove the usb cable from your FeetechMotorsBus and press Enter when done.
- >>> The port of this FeetechMotorsBus is /dev/tty.usbmodem575E0031751.
- >>> Reconnect the usb cable.
- ```
-
- Example of usage for 1 motor connected to the bus:
- ```python
- motor_name = "gripper"
- motor_index = 6
- motor_model = "sts3215"
-
- config = FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem575E0031751",
- motors={motor_name: (motor_index, motor_model)},
- )
- motors_bus = FeetechMotorsBus(config)
- motors_bus.connect()
-
- position = motors_bus.read("Present_Position")
-
-    # move by a few motor steps as an example
- few_steps = 30
- motors_bus.write("Goal_Position", position + few_steps)
-
- # when done, consider disconnecting
- motors_bus.disconnect()
- ```
- """
-
- def __init__(
- self,
- config: FeetechMotorsBusConfig,
- ):
- self.port = config.port
- self.motors = config.motors
- self.mock = config.mock
-
- self.model_ctrl_table = deepcopy(MODEL_CONTROL_TABLE)
- self.model_resolution = deepcopy(MODEL_RESOLUTION)
-
- self.port_handler = None
- self.packet_handler = None
- self.calibration = None
- self.is_connected = False
- self.group_readers = {}
- self.group_writers = {}
- self.logs = {}
-
- self.track_positions = {}
-
- def connect(self):
- if self.is_connected:
- raise RobotDeviceAlreadyConnectedError(
- f"FeetechMotorsBus({self.port}) is already connected. Do not call `motors_bus.connect()` twice."
- )
-
- if self.mock:
- import tests.motors.mock_scservo_sdk as scs
- else:
- import scservo_sdk as scs
-
- self.port_handler = scs.PortHandler(self.port)
- self.packet_handler = scs.PacketHandler(PROTOCOL_VERSION)
-
- try:
- if not self.port_handler.openPort():
- raise OSError(f"Failed to open port '{self.port}'.")
- except Exception:
- traceback.print_exc()
- print(
- "\nTry running `python lerobot/scripts/find_motors_bus_port.py` to make sure you are using the correct port.\n"
- )
- raise
-
-        # Allow reading and writing
- self.is_connected = True
-
- self.port_handler.setPacketTimeoutMillis(TIMEOUT_MS)
-
- def reconnect(self):
- if self.mock:
- import tests.motors.mock_scservo_sdk as scs
- else:
- import scservo_sdk as scs
-
- self.port_handler = scs.PortHandler(self.port)
- self.packet_handler = scs.PacketHandler(PROTOCOL_VERSION)
-
- if not self.port_handler.openPort():
- raise OSError(f"Failed to open port '{self.port}'.")
-
- self.is_connected = True
-
- def are_motors_configured(self):
- # Only check the motor indices and not baudrate, since if the motor baudrates are incorrect,
- # a ConnectionError will be raised anyway.
- try:
- return (self.motor_indices == self.read("ID")).all()
- except ConnectionError as e:
- print(e)
- return False
-
- def find_motor_indices(self, possible_ids=None, num_retry=2):
- if possible_ids is None:
- possible_ids = range(MAX_ID_RANGE)
-
- indices = []
- for idx in tqdm.tqdm(possible_ids):
- try:
- present_idx = self.read_with_motor_ids(self.motor_models, [idx], "ID", num_retry=num_retry)[0]
- except ConnectionError:
- continue
-
- if idx != present_idx:
- # sanity check
- raise OSError(
- "Motor index used to communicate through the bus is not the same as the one present in the motor memory. The motor memory might be damaged."
- )
- indices.append(idx)
-
- return indices
-
- def set_bus_baudrate(self, baudrate):
- present_bus_baudrate = self.port_handler.getBaudRate()
- if present_bus_baudrate != baudrate:
- print(f"Setting bus baud rate to {baudrate}. Previously {present_bus_baudrate}.")
- self.port_handler.setBaudRate(baudrate)
-
- if self.port_handler.getBaudRate() != baudrate:
- raise OSError("Failed to write bus baud rate.")
-
- @property
- def motor_names(self) -> list[str]:
- return list(self.motors.keys())
-
- @property
- def motor_models(self) -> list[str]:
- return [model for _, model in self.motors.values()]
-
- @property
- def motor_indices(self) -> list[int]:
- return [idx for idx, _ in self.motors.values()]
-
- def set_calibration(self, calibration: dict[str, list]):
- self.calibration = calibration
-
- def apply_calibration_autocorrect(self, values: np.ndarray | list, motor_names: list[str] | None):
-        """This function applies the calibration, automatically detects out-of-range motor values, and attempts to correct them.
-
- For more info, see docstring of `apply_calibration` and `autocorrect_calibration`.
- """
- try:
- values = self.apply_calibration(values, motor_names)
- except JointOutOfRangeError as e:
- print(e)
- self.autocorrect_calibration(values, motor_names)
- values = self.apply_calibration(values, motor_names)
- return values
-
- def apply_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
- """Convert from unsigned int32 joint position range [0, 2**32[ to the universal float32 nominal degree range ]-180.0, 180.0[ with
- a "zero position" at 0 degree.
-
-        Note: We say "nominal degree range" since the motors can take values outside this range. For instance, 190 degrees if the motor
-        rotates more than half a turn from the zero position. However, most motors can't rotate more than 180 degrees and will stay in this range.
-
-        Joint values are originally in [0, 2**32[ (unsigned int32). Each motor is expected to complete a full rotation
-        when given a goal position that is + or - its resolution. For instance, feetech sts3215 motors have a resolution of 4096, and
- at any position in their original range, let's say the position 56734, they complete a full rotation clockwise by moving to 60830,
- or anticlockwise by moving to 52638. The position in the original range is arbitrary and might change a lot between each motor.
- To harmonize between motors of the same model, different robots, or even models of different brands, we propose to work
- in the centered nominal degree range ]-180, 180[.
- """
- if motor_names is None:
- motor_names = self.motor_names
-
- # Convert from unsigned int32 original range [0, 2**32] to signed float32 range
- values = values.astype(np.float32)
-
- for i, name in enumerate(motor_names):
- calib_idx = self.calibration["motor_names"].index(name)
- calib_mode = self.calibration["calib_mode"][calib_idx]
-
- if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
- drive_mode = self.calibration["drive_mode"][calib_idx]
- homing_offset = self.calibration["homing_offset"][calib_idx]
- _, model = self.motors[name]
- resolution = self.model_resolution[model]
-
- # Update direction of rotation of the motor to match between leader and follower.
- # In fact, the motor of the leader for a given joint can be assembled in an
- # opposite direction in term of rotation than the motor of the follower on the same joint.
- if drive_mode:
- values[i] *= -1
-
- # Convert from range [-2**31, 2**31[ to
- # nominal range ]-resolution, resolution[ (e.g. ]-2048, 2048[)
- values[i] += homing_offset
-
- # Convert from range ]-resolution, resolution[ to
- # universal float32 centered degree range ]-180, 180[
- values[i] = values[i] / (resolution // 2) * HALF_TURN_DEGREE
-
- if (values[i] < LOWER_BOUND_DEGREE) or (values[i] > UPPER_BOUND_DEGREE):
- raise JointOutOfRangeError(
- f"Wrong motor position range detected for {name}. "
- f"Expected to be in nominal range of [-{HALF_TURN_DEGREE}, {HALF_TURN_DEGREE}] degrees (a full rotation), "
- f"with a maximum range of [{LOWER_BOUND_DEGREE}, {UPPER_BOUND_DEGREE}] degrees to account for joints that can rotate a bit more, "
- f"but present value is {values[i]} degree. "
- "This might be due to a cable connection issue creating an artificial 360 degrees jump in motor values. "
- "You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
- )
-
- elif CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
- start_pos = self.calibration["start_pos"][calib_idx]
- end_pos = self.calibration["end_pos"][calib_idx]
-
- # Rescale the present position to a nominal range [0, 100] %,
- # useful for joints with linear motions like Aloha gripper
- values[i] = (values[i] - start_pos) / (end_pos - start_pos) * 100
-
- if (values[i] < LOWER_BOUND_LINEAR) or (values[i] > UPPER_BOUND_LINEAR):
- raise JointOutOfRangeError(
- f"Wrong motor position range detected for {name}. "
- f"Expected to be in nominal range of [0, 100] % (a full linear translation), "
- f"with a maximum range of [{LOWER_BOUND_LINEAR}, {UPPER_BOUND_LINEAR}] % to account for some imprecision during calibration, "
- f"but present value is {values[i]} %. "
- "This might be due to a cable connection issue creating an artificial jump in motor values. "
- "You need to recalibrate by running: `python lerobot/scripts/control_robot.py calibrate`"
- )
-
- return values
-
- def autocorrect_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
-        """This function automatically detects issues with motor values after calibration, and corrects for these issues.
-
- Some motors might have values outside of expected maximum bounds after calibration.
- For instance, for a joint in degree, its value can be outside [-270, 270] degrees, which is totally unexpected given
- a nominal range of [-180, 180] degrees, which represents half a turn to the left or right starting from zero position.
-
- Known issues:
-        #1: Motor value randomly shifts by a full turn, caused by hardware/connection errors.
-        #2: Motor internal homing offset is shifted by a full turn, caused by using default calibration (e.g. Aloha).
-        #3: Motor internal homing offset is shifted by less or more than a full turn, caused by using default calibration
- or by human error during manual calibration.
-
- Issues #1 and #2 can be solved by shifting the calibration homing offset by a full turn.
- Issue #3 will be visually detected by user and potentially captured by the safety feature `max_relative_target`,
- that will slow down the motor, raise an error asking to recalibrate. Manual recalibrating will solve the issue.
-
- Note: A full turn corresponds to 360 degrees but also to 4096 steps for a motor resolution of 4096.
- """
- if motor_names is None:
- motor_names = self.motor_names
-
- # Convert from unsigned int32 original range [0, 2**32] to signed float32 range
- values = values.astype(np.float32)
-
- for i, name in enumerate(motor_names):
- calib_idx = self.calibration["motor_names"].index(name)
- calib_mode = self.calibration["calib_mode"][calib_idx]
-
- if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
- drive_mode = self.calibration["drive_mode"][calib_idx]
- homing_offset = self.calibration["homing_offset"][calib_idx]
- _, model = self.motors[name]
- resolution = self.model_resolution[model]
-
- if drive_mode:
- values[i] *= -1
-
- # Convert from initial range to range [-180, 180] degrees
- calib_val = (values[i] + homing_offset) / (resolution // 2) * HALF_TURN_DEGREE
- in_range = (calib_val > LOWER_BOUND_DEGREE) and (calib_val < UPPER_BOUND_DEGREE)
-
- # Solve this inequality to find the factor to shift the range into [-180, 180] degrees
- # values[i] = (values[i] + homing_offset + resolution * factor) / (resolution // 2) * HALF_TURN_DEGREE
- # - HALF_TURN_DEGREE <= (values[i] + homing_offset + resolution * factor) / (resolution // 2) * HALF_TURN_DEGREE <= HALF_TURN_DEGREE
- # (- HALF_TURN_DEGREE / HALF_TURN_DEGREE * (resolution // 2) - values[i] - homing_offset) / resolution <= factor <= (HALF_TURN_DEGREE / 180 * (resolution // 2) - values[i] - homing_offset) / resolution
- low_factor = (
- -HALF_TURN_DEGREE / HALF_TURN_DEGREE * (resolution // 2) - values[i] - homing_offset
- ) / resolution
- upp_factor = (
- HALF_TURN_DEGREE / HALF_TURN_DEGREE * (resolution // 2) - values[i] - homing_offset
- ) / resolution
-
- elif CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
- start_pos = self.calibration["start_pos"][calib_idx]
- end_pos = self.calibration["end_pos"][calib_idx]
-
- # Convert from initial range to range [0, 100] in %
- calib_val = (values[i] - start_pos) / (end_pos - start_pos) * 100
- in_range = (calib_val > LOWER_BOUND_LINEAR) and (calib_val < UPPER_BOUND_LINEAR)
-
- # Solve this inequality to find the factor to shift the range into [0, 100] %
- # values[i] = (values[i] - start_pos + resolution * factor) / (end_pos + resolution * factor - start_pos - resolution * factor) * 100
- # values[i] = (values[i] - start_pos + resolution * factor) / (end_pos - start_pos) * 100
- # 0 <= (values[i] - start_pos + resolution * factor) / (end_pos - start_pos) * 100 <= 100
- # (start_pos - values[i]) / resolution <= factor <= (end_pos - values[i]) / resolution
- low_factor = (start_pos - values[i]) / resolution
- upp_factor = (end_pos - values[i]) / resolution
-
- if not in_range:
- # Get first integer between the two bounds
- if low_factor < upp_factor:
- factor = math.ceil(low_factor)
-
- if factor > upp_factor:
- raise ValueError(f"No integer found between bounds [{low_factor=}, {upp_factor=}]")
- else:
- factor = math.ceil(upp_factor)
-
- if factor > low_factor:
- raise ValueError(f"No integer found between bounds [{low_factor=}, {upp_factor=}]")
-
- if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
- out_of_range_str = f"{LOWER_BOUND_DEGREE} < {calib_val} < {UPPER_BOUND_DEGREE} degrees"
- in_range_str = f"{LOWER_BOUND_DEGREE} < {calib_val} < {UPPER_BOUND_DEGREE} degrees"
- elif CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
- out_of_range_str = f"{LOWER_BOUND_LINEAR} < {calib_val} < {UPPER_BOUND_LINEAR} %"
- in_range_str = f"{LOWER_BOUND_LINEAR} < {calib_val} < {UPPER_BOUND_LINEAR} %"
-
- logging.warning(
- f"Auto-correct calibration of motor '{name}' by shifting value by {abs(factor)} full turns, "
- f"from '{out_of_range_str}' to '{in_range_str}'."
- )
-
- # A full turn corresponds to 360 degrees but also to 4096 steps for a motor resolution of 4096.
- self.calibration["homing_offset"][calib_idx] += resolution * factor
-
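For readers skimming this removal, here is a minimal standalone sketch of the math in `apply_calibration_autocorrect`. The readings below are made up; only `HALF_TURN_DEGREE` and the 4096-step resolution mirror the code above.

```python
import math

HALF_TURN_DEGREE = 180
RESOLUTION = 4096  # steps per full turn for an STS3215-class motor

def steps_to_degrees(raw, homing_offset, resolution=RESOLUTION):
    # Same formula as above: shift by the homing offset, then rescale
    # half a turn (resolution // 2 steps) to 180 degrees.
    return (raw + homing_offset) / (resolution // 2) * HALF_TURN_DEGREE

raw, homing_offset = 3000, 1500
print(steps_to_degrees(raw, homing_offset))  # ~395.5 degrees, outside ]-180, 180[

# Find the integer number of full turns to add to the homing offset so the
# value falls back into the nominal range (the inequality solved above).
low = (-(RESOLUTION // 2) - raw - homing_offset) / RESOLUTION
upp = ((RESOLUTION // 2) - raw - homing_offset) / RESOLUTION
factor = math.ceil(low)
assert factor <= upp, "no integer factor between bounds"

corrected_offset = homing_offset + RESOLUTION * factor
print(steps_to_degrees(raw, corrected_offset))  # ~35.5 degrees, back in range
```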
- def revert_calibration(self, values: np.ndarray | list, motor_names: list[str] | None):
- """Inverse of `apply_calibration`."""
- if motor_names is None:
- motor_names = self.motor_names
-
- for i, name in enumerate(motor_names):
- calib_idx = self.calibration["motor_names"].index(name)
- calib_mode = self.calibration["calib_mode"][calib_idx]
-
- if CalibrationMode[calib_mode] == CalibrationMode.DEGREE:
- drive_mode = self.calibration["drive_mode"][calib_idx]
- homing_offset = self.calibration["homing_offset"][calib_idx]
- _, model = self.motors[name]
- resolution = self.model_resolution[model]
-
- # Convert from nominal 0-centered degree range [-180, 180] to
- # 0-centered resolution range (e.g. [-2048, 2048] for resolution=4096)
- values[i] = values[i] / HALF_TURN_DEGREE * (resolution // 2)
-
- # Subtract the homing offsets to come back to actual motor range of values
- # which can be arbitrary.
- values[i] -= homing_offset
-
- # Remove drive mode, which is the rotation direction of the motor, to come back to
- # actual motor rotation direction which can be arbitrary.
- if drive_mode:
- values[i] *= -1
-
- elif CalibrationMode[calib_mode] == CalibrationMode.LINEAR:
- start_pos = self.calibration["start_pos"][calib_idx]
- end_pos = self.calibration["end_pos"][calib_idx]
-
- # Convert from the nominal linear range of [0, 100] % to the
- # actual motor range of values, which can be arbitrary.
- values[i] = values[i] / 100 * (end_pos - start_pos) + start_pos
-
- values = np.round(values).astype(np.int32)
- return values
-
- def avoid_rotation_reset(self, values, motor_names, data_name):
- if data_name not in self.track_positions:
- self.track_positions[data_name] = {
- "prev": [None] * len(self.motor_names),
- # Assume False at initialization
- "below_zero": [False] * len(self.motor_names),
- "above_max": [False] * len(self.motor_names),
- }
-
- track = self.track_positions[data_name]
-
- if motor_names is None:
- motor_names = self.motor_names
-
- for i, name in enumerate(motor_names):
- idx = self.motor_names.index(name)
-
- if track["prev"][idx] is None:
- track["prev"][idx] = values[i]
- continue
-
- # Detect a full rotation occurred
- if abs(track["prev"][idx] - values[i]) > 2048:
- # Position went below 0 and got reset to 4095
- if track["prev"][idx] < values[i]:
- # So we set negative value by adding a full rotation
- values[i] -= 4096
-
- # Position went above 4095 and got reset to 0
- elif track["prev"][idx] > values[i]:
- # So we add a full rotation
- values[i] += 4096
-
- track["prev"][idx] = values[i]
-
- return values
-
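The wrap-around handling in `avoid_rotation_reset` can be summarized with a small standalone sketch. The values are illustrative and a 4096-step resolution is assumed.

```python
def unwrap(prev, current, resolution=4096):
    """Mirror of the per-motor logic above: if a reading jumps by more than
    half a turn, assume the encoder wrapped around and shift the new value
    by a full turn so the tracked position stays continuous."""
    if prev is None:
        return current
    if abs(prev - current) > resolution // 2:
        if prev < current:
            current -= resolution  # position went below 0 and reset near 4095
        else:
            current += resolution  # position went above 4095 and reset near 0
    return current

# Readings taken while the joint crosses the 0/4095 boundary going "down":
prev = 10
print(unwrap(prev, 4090))  # -6 instead of 4090
```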
- def read_with_motor_ids(self, motor_models, motor_ids, data_name, num_retry=NUM_READ_RETRY):
- if self.mock:
- import tests.motors.mock_scservo_sdk as scs
- else:
- import scservo_sdk as scs
-
- return_list = True
- if not isinstance(motor_ids, list):
- return_list = False
- motor_ids = [motor_ids]
-
- assert_same_address(self.model_ctrl_table, self.motor_models, data_name)
- addr, bytes = self.model_ctrl_table[motor_models[0]][data_name]
- group = scs.GroupSyncRead(self.port_handler, self.packet_handler, addr, bytes)
- for idx in motor_ids:
- group.addParam(idx)
-
- for _ in range(num_retry):
- comm = group.txRxPacket()
- if comm == scs.COMM_SUCCESS:
- break
-
- if comm != scs.COMM_SUCCESS:
- raise ConnectionError(
- f"Read failed due to communication error on port {self.port_handler.port_name} for indices {motor_ids}: "
- f"{self.packet_handler.getTxRxResult(comm)}"
- )
-
- values = []
- for idx in motor_ids:
- value = group.getData(idx, addr, bytes)
- values.append(value)
-
- if return_list:
- return values
- else:
- return values[0]
-
- def read(self, data_name, motor_names: str | list[str] | None = None):
- if self.mock:
- import tests.motors.mock_scservo_sdk as scs
- else:
- import scservo_sdk as scs
-
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"FeetechMotorsBus({self.port}) is not connected. You need to run `motors_bus.connect()`."
- )
-
- start_time = time.perf_counter()
-
- if motor_names is None:
- motor_names = self.motor_names
-
- if isinstance(motor_names, str):
- motor_names = [motor_names]
-
- motor_ids = []
- models = []
- for name in motor_names:
- motor_idx, model = self.motors[name]
- motor_ids.append(motor_idx)
- models.append(model)
-
- assert_same_address(self.model_ctrl_table, models, data_name)
- addr, bytes = self.model_ctrl_table[model][data_name]
- group_key = get_group_sync_key(data_name, motor_names)
-
- if group_key not in self.group_readers:
- # Very Important to flush the buffer!
- self.port_handler.ser.reset_output_buffer()
- self.port_handler.ser.reset_input_buffer()
-
- # create new group reader
- self.group_readers[group_key] = scs.GroupSyncRead(
- self.port_handler, self.packet_handler, addr, bytes
- )
- for idx in motor_ids:
- self.group_readers[group_key].addParam(idx)
-
- for _ in range(NUM_READ_RETRY):
- comm = self.group_readers[group_key].txRxPacket()
- if comm == scs.COMM_SUCCESS:
- break
-
- if comm != scs.COMM_SUCCESS:
- raise ConnectionError(
- f"Read failed due to communication error on port {self.port} for group_key {group_key}: "
- f"{self.packet_handler.getTxRxResult(comm)}"
- )
-
- values = []
- for idx in motor_ids:
- value = self.group_readers[group_key].getData(idx, addr, bytes)
- values.append(value)
-
- values = np.array(values)
-
- # Convert to signed int to use range [-2048, 2048] for our motor positions.
- if data_name in CONVERT_UINT32_TO_INT32_REQUIRED:
- values = values.astype(np.int32)
-
- if data_name in CALIBRATION_REQUIRED:
- values = self.avoid_rotation_reset(values, motor_names, data_name)
-
- if data_name in CALIBRATION_REQUIRED and self.calibration is not None:
- values = self.apply_calibration_autocorrect(values, motor_names)
-
- # log the number of seconds it took to read the data from the motors
- delta_ts_name = get_log_name("delta_timestamp_s", "read", data_name, motor_names)
- self.logs[delta_ts_name] = time.perf_counter() - start_time
-
- # log the utc time at which the data was received
- ts_utc_name = get_log_name("timestamp_utc", "read", data_name, motor_names)
- self.logs[ts_utc_name] = capture_timestamp_utc()
-
- return values
-
- def write_with_motor_ids(self, motor_models, motor_ids, data_name, values, num_retry=NUM_WRITE_RETRY):
- if self.mock:
- import tests.motors.mock_scservo_sdk as scs
- else:
- import scservo_sdk as scs
-
- if not isinstance(motor_ids, list):
- motor_ids = [motor_ids]
- if not isinstance(values, list):
- values = [values]
-
- assert_same_address(self.model_ctrl_table, motor_models, data_name)
- addr, bytes = self.model_ctrl_table[motor_models[0]][data_name]
- group = scs.GroupSyncWrite(self.port_handler, self.packet_handler, addr, bytes)
- for idx, value in zip(motor_ids, values, strict=True):
- data = convert_to_bytes(value, bytes, self.mock)
- group.addParam(idx, data)
-
- for _ in range(num_retry):
- comm = group.txPacket()
- if comm == scs.COMM_SUCCESS:
- break
-
- if comm != scs.COMM_SUCCESS:
- raise ConnectionError(
- f"Write failed due to communication error on port {self.port_handler.port_name} for indices {motor_ids}: "
- f"{self.packet_handler.getTxRxResult(comm)}"
- )
-
- def write(self, data_name, values: int | float | np.ndarray, motor_names: str | list[str] | None = None):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"FeetechMotorsBus({self.port}) is not connected. You need to run `motors_bus.connect()`."
- )
-
- start_time = time.perf_counter()
-
- if self.mock:
- import tests.motors.mock_scservo_sdk as scs
- else:
- import scservo_sdk as scs
-
- if motor_names is None:
- motor_names = self.motor_names
-
- if isinstance(motor_names, str):
- motor_names = [motor_names]
-
- if isinstance(values, (int, float, np.integer)):
- values = [int(values)] * len(motor_names)
-
- values = np.array(values)
-
- motor_ids = []
- models = []
- for name in motor_names:
- motor_idx, model = self.motors[name]
- motor_ids.append(motor_idx)
- models.append(model)
-
- if data_name in CALIBRATION_REQUIRED and self.calibration is not None:
- values = self.revert_calibration(values, motor_names)
-
- values = values.tolist()
-
- assert_same_address(self.model_ctrl_table, models, data_name)
- addr, bytes = self.model_ctrl_table[model][data_name]
- group_key = get_group_sync_key(data_name, motor_names)
-
- init_group = group_key not in self.group_writers
- if init_group:
- self.group_writers[group_key] = scs.GroupSyncWrite(
- self.port_handler, self.packet_handler, addr, bytes
- )
-
- for idx, value in zip(motor_ids, values, strict=True):
- data = convert_to_bytes(value, bytes, self.mock)
- if init_group:
- self.group_writers[group_key].addParam(idx, data)
- else:
- self.group_writers[group_key].changeParam(idx, data)
-
- comm = self.group_writers[group_key].txPacket()
- if comm != scs.COMM_SUCCESS:
- raise ConnectionError(
- f"Write failed due to communication error on port {self.port} for group_key {group_key}: "
- f"{self.packet_handler.getTxRxResult(comm)}"
- )
-
- # log the number of seconds it took to write the data to the motors
- delta_ts_name = get_log_name("delta_timestamp_s", "write", data_name, motor_names)
- self.logs[delta_ts_name] = time.perf_counter() - start_time
-
- # TODO(rcadene): should we log the time before sending the write command?
- # log the utc time when the write has been completed
- ts_utc_name = get_log_name("timestamp_utc", "write", data_name, motor_names)
- self.logs[ts_utc_name] = capture_timestamp_utc()
-
- def disconnect(self):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- f"FeetechMotorsBus({self.port}) is not connected. Try running `motors_bus.connect()` first."
- )
-
- if self.port_handler is not None:
- self.port_handler.closePort()
- self.port_handler = None
-
- self.packet_handler = None
- self.group_readers = {}
- self.group_writers = {}
- self.is_connected = False
-
- def __del__(self):
- if getattr(self, "is_connected", False):
- self.disconnect()
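For reference, this is roughly how the deleted `FeetechMotorsBus` was used before this removal. The port and motor table are placeholders for a specific setup, and the snippet assumes the import paths as they existed prior to this diff.

```python
from lerobot.common.robot_devices.motors.configs import FeetechMotorsBusConfig
from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus

config = FeetechMotorsBusConfig(
    port="/dev/ttyACM0",  # placeholder, use your own serial port
    motors={"shoulder_pan": [1, "sts3215"], "gripper": [6, "sts3215"]},
)
bus = FeetechMotorsBus(config)
bus.connect()

# Reads go through a cached GroupSyncRead; with a calibration loaded,
# positions come back in degrees (or % for LINEAR joints).
positions = bus.read("Present_Position")
print(dict(zip(bus.motor_names, positions, strict=True)))

# Writes accept a scalar (broadcast to all motors) or one value per motor.
bus.write("Torque_Enable", 1)
bus.write("Goal_Position", positions)

bus.disconnect()
```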
diff --git a/lerobot/common/robot_devices/motors/utils.py b/lerobot/common/robot_devices/motors/utils.py
deleted file mode 100644
index 07d404d7..00000000
--- a/lerobot/common/robot_devices/motors/utils.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import Protocol
-
-from lerobot.common.robot_devices.motors.configs import (
- DynamixelMotorsBusConfig,
- FeetechMotorsBusConfig,
- MotorsBusConfig,
-)
-
-
-class MotorsBus(Protocol):
- def motor_names(self): ...
- def set_calibration(self): ...
- def apply_calibration(self): ...
- def revert_calibration(self): ...
- def read(self): ...
- def write(self): ...
-
-
-def make_motors_buses_from_configs(motors_bus_configs: dict[str, MotorsBusConfig]) -> dict[str, MotorsBus]:
- motors_buses = {}
-
- for key, cfg in motors_bus_configs.items():
- if cfg.type == "dynamixel":
- from lerobot.common.robot_devices.motors.dynamixel import DynamixelMotorsBus
-
- motors_buses[key] = DynamixelMotorsBus(cfg)
-
- elif cfg.type == "feetech":
- from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus
-
- motors_buses[key] = FeetechMotorsBus(cfg)
-
- elif cfg.type == "realman":
- from lerobot.common.robot_devices.motors.realman import RealmanMotorsBus
-
- motors_buses[key] = RealmanMotorsBus(cfg)
-
- elif cfg.type == "realman_dual":
- from lerobot.common.robot_devices.motors.realman_dual import RealmanDualMotorsBus
-
- motors_buses[key] = RealmanDualMotorsBus(cfg)
- else:
- raise ValueError(f"The motor type '{cfg.type}' is not valid.")
-
- return motors_buses
-
-
-def make_motors_bus(motor_type: str, **kwargs) -> MotorsBus:
- if motor_type == "dynamixel":
- from lerobot.common.robot_devices.motors.dynamixel import DynamixelMotorsBus
-
- config = DynamixelMotorsBusConfig(**kwargs)
- return DynamixelMotorsBus(config)
-
- elif motor_type == "feetech":
- from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus
-
- config = FeetechMotorsBusConfig(**kwargs)
- return FeetechMotorsBus(config)
-
- else:
- raise ValueError(f"The motor type '{motor_type}' is not valid.")
-
-
-def get_motor_names(arms: dict[str, MotorsBus]) -> list[str]:
- return [f"{arm}_{motor}" for arm, bus in arms.items() for motor in bus.motors]
\ No newline at end of file
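Likewise, a short usage sketch of the removed factory helpers, again with a placeholder port and motor table and assuming the pre-removal import paths.

```python
from lerobot.common.robot_devices.motors.configs import FeetechMotorsBusConfig
from lerobot.common.robot_devices.motors.utils import (
    get_motor_names,
    make_motors_buses_from_configs,
)

# One bus per arm, keyed by arm name, as consumed by the manipulator robot classes.
arms = make_motors_buses_from_configs(
    {
        "main": FeetechMotorsBusConfig(
            port="/dev/ttyACM0",  # placeholder
            motors={"shoulder_pan": [1, "sts3215"], "gripper": [6, "sts3215"]},
        )
    }
)

# Flattened "<arm>_<motor>" names, handy for building feature keys.
print(get_motor_names(arms))  # ['main_shoulder_pan', 'main_gripper']
```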
diff --git a/lerobot/common/robot_devices/robots/configs.py b/lerobot/common/robot_devices/robots/configs.py
deleted file mode 100644
index bfd9edf3..00000000
--- a/lerobot/common/robot_devices/robots/configs.py
+++ /dev/null
@@ -1,822 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-from dataclasses import dataclass, field
-from typing import Sequence
-
-import draccus
-
-from lerobot.common.robot_devices.cameras.configs import (
- CameraConfig,
- IntelRealSenseCameraConfig,
- OpenCVCameraConfig,
-)
-from lerobot.common.robot_devices.motors.configs import (
- DynamixelMotorsBusConfig,
- FeetechMotorsBusConfig,
- MotorsBusConfig,
- RealmanMotorsBusConfig,
- RealmanDualMotorsBusConfig
-)
-
-
-@dataclass
-class RobotConfig(draccus.ChoiceRegistry, abc.ABC):
- @property
- def type(self) -> str:
- return self.get_choice_name(self.__class__)
-
-
-# TODO(rcadene, aliberts): remove ManipulatorRobotConfig abstraction
-@dataclass
-class ManipulatorRobotConfig(RobotConfig):
- leader_arms: dict[str, MotorsBusConfig] = field(default_factory=lambda: {})
- follower_arms: dict[str, MotorsBusConfig] = field(default_factory=lambda: {})
- cameras: dict[str, CameraConfig] = field(default_factory=lambda: {})
-
- # Optionally limit the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length
- # as the number of motors in your follower arms (assumes all follower arms have the same number of
- # motors).
- max_relative_target: list[float] | float | None = None
-
- # Optionally set the leader arm in torque mode with the gripper motor set to this angle. This makes it
- # possible to squeeze the gripper and have it spring back to an open position on its own. If None, the
- # gripper is not put in torque mode.
- gripper_open_degree: float | None = None
-
- mock: bool = False
-
- def __post_init__(self):
- if self.mock:
- for arm in self.leader_arms.values():
- if not arm.mock:
- arm.mock = True
- for arm in self.follower_arms.values():
- if not arm.mock:
- arm.mock = True
- for cam in self.cameras.values():
- if not cam.mock:
- cam.mock = True
-
- if self.max_relative_target is not None and isinstance(self.max_relative_target, Sequence):
- for name in self.follower_arms:
- if len(self.follower_arms[name].motors) != len(self.max_relative_target):
- raise ValueError(
- f"len(max_relative_target)={len(self.max_relative_target)} but the follower arm with name {name} has "
- f"{len(self.follower_arms[name].motors)} motors. Please make sure that the "
- f"`max_relative_target` list has as many parameters as there are motors per arm. "
- "Note: This feature does not yet work with robots where different follower arms have "
- "different numbers of motors."
- )
-
-
-@RobotConfig.register_subclass("aloha")
-@dataclass
-class AlohaRobotConfig(ManipulatorRobotConfig):
- # Specific to Aloha, LeRobot comes with default calibration files. Assuming the motors have been
- # properly assembled, no manual calibration step is expected. If you need to run manual calibration,
- # simply update this path to ".cache/calibration/aloha"
- calibration_dir: str = ".cache/calibration/aloha_default"
-
- # /!\ FOR SAFETY, READ THIS /!\
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- # For Aloha, for every goal position request, motor rotations are capped at 5 degrees by default.
- # When you feel more confident with teleoperation or running the policy, you can extend
- # this safety limit or even remove it by setting it to `null`.
- # Also, everything is expected to work safely out-of-the-box, but we highly advise
- # first teleoperating the grippers only (by commenting out the rest of the motors in this config),
- # then gradually adding more motors (by uncommenting) until you can teleoperate both arms fully.
- max_relative_target: int | None = 5
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "left": DynamixelMotorsBusConfig(
- # window_x
- port="/dev/ttyDXL_leader_left",
- motors={
- # name: (index, model)
- "waist": [1, "xm430-w350"],
- "shoulder": [2, "xm430-w350"],
- "shoulder_shadow": [3, "xm430-w350"],
- "elbow": [4, "xm430-w350"],
- "elbow_shadow": [5, "xm430-w350"],
- "forearm_roll": [6, "xm430-w350"],
- "wrist_angle": [7, "xm430-w350"],
- "wrist_rotate": [8, "xl430-w250"],
- "gripper": [9, "xc430-w150"],
- },
- ),
- "right": DynamixelMotorsBusConfig(
- # window_x
- port="/dev/ttyDXL_leader_right",
- motors={
- # name: (index, model)
- "waist": [1, "xm430-w350"],
- "shoulder": [2, "xm430-w350"],
- "shoulder_shadow": [3, "xm430-w350"],
- "elbow": [4, "xm430-w350"],
- "elbow_shadow": [5, "xm430-w350"],
- "forearm_roll": [6, "xm430-w350"],
- "wrist_angle": [7, "xm430-w350"],
- "wrist_rotate": [8, "xl430-w250"],
- "gripper": [9, "xc430-w150"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "left": DynamixelMotorsBusConfig(
- port="/dev/ttyDXL_follower_left",
- motors={
- # name: (index, model)
- "waist": [1, "xm540-w270"],
- "shoulder": [2, "xm540-w270"],
- "shoulder_shadow": [3, "xm540-w270"],
- "elbow": [4, "xm540-w270"],
- "elbow_shadow": [5, "xm540-w270"],
- "forearm_roll": [6, "xm540-w270"],
- "wrist_angle": [7, "xm540-w270"],
- "wrist_rotate": [8, "xm430-w350"],
- "gripper": [9, "xm430-w350"],
- },
- ),
- "right": DynamixelMotorsBusConfig(
- port="/dev/ttyDXL_follower_right",
- motors={
- # name: (index, model)
- "waist": [1, "xm540-w270"],
- "shoulder": [2, "xm540-w270"],
- "shoulder_shadow": [3, "xm540-w270"],
- "elbow": [4, "xm540-w270"],
- "elbow_shadow": [5, "xm540-w270"],
- "forearm_roll": [6, "xm540-w270"],
- "wrist_angle": [7, "xm540-w270"],
- "wrist_rotate": [8, "xm430-w350"],
- "gripper": [9, "xm430-w350"],
- },
- ),
- }
- )
-
- # Troubleshooting: If one of your IntelRealSense cameras freeze during
- # data recording due to bandwidth limit, you might need to plug the camera
- # on another USB hub or PCIe card.
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "cam_high": IntelRealSenseCameraConfig(
- serial_number=128422271347,
- fps=30,
- width=640,
- height=480,
- ),
- "cam_low": IntelRealSenseCameraConfig(
- serial_number=130322270656,
- fps=30,
- width=640,
- height=480,
- ),
- "cam_left_wrist": IntelRealSenseCameraConfig(
- serial_number=218622272670,
- fps=30,
- width=640,
- height=480,
- ),
- "cam_right_wrist": IntelRealSenseCameraConfig(
- serial_number=130322272300,
- fps=30,
- width=640,
- height=480,
- ),
- }
- )
-
- mock: bool = False
-
-
-@RobotConfig.register_subclass("koch")
-@dataclass
-class KochRobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/koch"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem585A0085511",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "xl330-m077"],
- "shoulder_lift": [2, "xl330-m077"],
- "elbow_flex": [3, "xl330-m077"],
- "wrist_flex": [4, "xl330-m077"],
- "wrist_roll": [5, "xl330-m077"],
- "gripper": [6, "xl330-m077"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem585A0076891",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "xl430-w250"],
- "shoulder_lift": [2, "xl430-w250"],
- "elbow_flex": [3, "xl330-m288"],
- "wrist_flex": [4, "xl330-m288"],
- "wrist_roll": [5, "xl330-m288"],
- "gripper": [6, "xl330-m288"],
- },
- ),
- }
- )
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "laptop": OpenCVCameraConfig(
- camera_index=0,
- fps=30,
- width=640,
- height=480,
- ),
- "phone": OpenCVCameraConfig(
- camera_index=1,
- fps=30,
- width=640,
- height=480,
- ),
- }
- )
-
- # ~ Koch specific settings ~
- # Sets the leader arm in torque mode with the gripper motor set to this angle. This makes it possible
- # to squeeze the gripper and have it spring back to an open position on its own.
- gripper_open_degree: float = 35.156
-
- mock: bool = False
-
-
-@RobotConfig.register_subclass("koch_bimanual")
-@dataclass
-class KochBimanualRobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/koch_bimanual"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "left": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem585A0085511",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "xl330-m077"],
- "shoulder_lift": [2, "xl330-m077"],
- "elbow_flex": [3, "xl330-m077"],
- "wrist_flex": [4, "xl330-m077"],
- "wrist_roll": [5, "xl330-m077"],
- "gripper": [6, "xl330-m077"],
- },
- ),
- "right": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem575E0031751",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "xl330-m077"],
- "shoulder_lift": [2, "xl330-m077"],
- "elbow_flex": [3, "xl330-m077"],
- "wrist_flex": [4, "xl330-m077"],
- "wrist_roll": [5, "xl330-m077"],
- "gripper": [6, "xl330-m077"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "left": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem585A0076891",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "xl430-w250"],
- "shoulder_lift": [2, "xl430-w250"],
- "elbow_flex": [3, "xl330-m288"],
- "wrist_flex": [4, "xl330-m288"],
- "wrist_roll": [5, "xl330-m288"],
- "gripper": [6, "xl330-m288"],
- },
- ),
- "right": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem575E0032081",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "xl430-w250"],
- "shoulder_lift": [2, "xl430-w250"],
- "elbow_flex": [3, "xl330-m288"],
- "wrist_flex": [4, "xl330-m288"],
- "wrist_roll": [5, "xl330-m288"],
- "gripper": [6, "xl330-m288"],
- },
- ),
- }
- )
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "laptop": OpenCVCameraConfig(
- camera_index=0,
- fps=30,
- width=640,
- height=480,
- ),
- "phone": OpenCVCameraConfig(
- camera_index=1,
- fps=30,
- width=640,
- height=480,
- ),
- }
- )
-
- # ~ Koch specific settings ~
- # Sets the leader arm in torque mode with the gripper motor set to this angle. This makes it possible
- # to squeeze the gripper and have it spring back to an open position on its own.
- gripper_open_degree: float = 35.156
-
- mock: bool = False
-
-
-@RobotConfig.register_subclass("moss")
-@dataclass
-class MossRobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/moss"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem58760431091",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem585A0076891",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "laptop": OpenCVCameraConfig(
- camera_index=0,
- fps=30,
- width=640,
- height=480,
- ),
- "phone": OpenCVCameraConfig(
- camera_index=1,
- fps=30,
- width=640,
- height=480,
- ),
- }
- )
-
- mock: bool = False
-
-
-@RobotConfig.register_subclass("so101")
-@dataclass
-class So101RobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/so101"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem58760431091",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem585A0076891",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "laptop": OpenCVCameraConfig(
- camera_index=0,
- fps=30,
- width=640,
- height=480,
- ),
- "phone": OpenCVCameraConfig(
- camera_index=1,
- fps=30,
- width=640,
- height=480,
- ),
- }
- )
-
- mock: bool = False
-
-
-@RobotConfig.register_subclass("so100")
-@dataclass
-class So100RobotConfig(ManipulatorRobotConfig):
- calibration_dir: str = ".cache/calibration/so100"
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem58760431091",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem585A0076891",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "laptop": OpenCVCameraConfig(
- camera_index=0,
- fps=30,
- width=640,
- height=480,
- ),
- "phone": OpenCVCameraConfig(
- camera_index=1,
- fps=30,
- width=640,
- height=480,
- ),
- }
- )
-
- mock: bool = False
-
-
-@RobotConfig.register_subclass("stretch")
-@dataclass
-class StretchRobotConfig(RobotConfig):
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "navigation": OpenCVCameraConfig(
- camera_index="/dev/hello-nav-head-camera",
- fps=10,
- width=1280,
- height=720,
- rotation=-90,
- ),
- "head": IntelRealSenseCameraConfig(
- name="Intel RealSense D435I",
- fps=30,
- width=640,
- height=480,
- rotation=90,
- ),
- "wrist": IntelRealSenseCameraConfig(
- name="Intel RealSense D405",
- fps=30,
- width=640,
- height=480,
- ),
- }
- )
-
- mock: bool = False
-
-
-@RobotConfig.register_subclass("lekiwi")
-@dataclass
-class LeKiwiRobotConfig(RobotConfig):
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
- # the number of motors in your follower arms.
- max_relative_target: int | None = None
-
- # Network Configuration
- ip: str = "192.168.0.193"
- port: int = 5555
- video_port: int = 5556
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "front": OpenCVCameraConfig(
- camera_index="/dev/video0", fps=30, width=640, height=480, rotation=90
- ),
- "wrist": OpenCVCameraConfig(
- camera_index="/dev/video2", fps=30, width=640, height=480, rotation=180
- ),
- }
- )
-
- calibration_dir: str = ".cache/calibration/lekiwi"
-
- leader_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/tty.usbmodem585A0077581",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- },
- ),
- }
- )
-
- follower_arms: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": FeetechMotorsBusConfig(
- port="/dev/ttyACM0",
- motors={
- # name: (index, model)
- "shoulder_pan": [1, "sts3215"],
- "shoulder_lift": [2, "sts3215"],
- "elbow_flex": [3, "sts3215"],
- "wrist_flex": [4, "sts3215"],
- "wrist_roll": [5, "sts3215"],
- "gripper": [6, "sts3215"],
- "left_wheel": (7, "sts3215"),
- "back_wheel": (8, "sts3215"),
- "right_wheel": (9, "sts3215"),
- },
- ),
- }
- )
-
- teleop_keys: dict[str, str] = field(
- default_factory=lambda: {
- # Movement
- "forward": "w",
- "backward": "s",
- "left": "a",
- "right": "d",
- "rotate_left": "z",
- "rotate_right": "x",
- # Speed control
- "speed_up": "r",
- "speed_down": "f",
- # quit teleop
- "quit": "q",
- }
- )
-
- mock: bool = False
-
-
-@RobotConfig.register_subclass("realman")
-@dataclass
-class RealmanRobotConfig(RobotConfig):
- inference_time: bool = False
- max_gripper: int = 990
- min_gripper: int = 10
- servo_config_file: str = "/home/maic/LYT/lerobot/lerobot/common/robot_devices/teleop/servo_arm.yaml"
-
-
- left_follower_arm: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": RealmanMotorsBusConfig(
- ip = "192.168.3.18",
- port = 8080,
- motors={
- # name: (index, model)
- "joint_1": [1, "realman"],
- "joint_2": [2, "realman"],
- "joint_3": [3, "realman"],
- "joint_4": [4, "realman"],
- "joint_5": [5, "realman"],
- "joint_6": [6, "realman"],
- "gripper": [7, "realman"],
- },
- init_joint={"joint": [-90, 90, 90, 90, 90, -90, 10]},
- )
- }
- )
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- # "one": OpenCVCameraConfig(
- # camera_index=4,
- # fps=30,
- # width=640,
- # height=480,
- # ),
- "left": IntelRealSenseCameraConfig(
- serial_number="153122077516",
- fps=30,
- width=640,
- height=480,
- use_depth=False
- ),
- # "right": IntelRealSenseCameraConfig(
- # serial_number="405622075165",
- # fps=30,
- # width=640,
- # height=480,
- # use_depth=False
- # ),
- "front": IntelRealSenseCameraConfig(
- serial_number="145422072751",
- fps=30,
- width=640,
- height=480,
- use_depth=False
- ),
- "high": IntelRealSenseCameraConfig(
- serial_number="145422072193",
- fps=30,
- width=640,
- height=480,
- use_depth=False
- ),
- }
- )
-
-
-
-@RobotConfig.register_subclass("realman_dual")
-@dataclass
-class RealmanDualRobotConfig(RobotConfig):
- inference_time: bool = False
- max_gripper: int = 990
- min_gripper: int = 10
- servo_config_file: str = "/home/maic/LYT/lerobot/lerobot/common/robot_devices/teleop/servo_dual.yaml"
- left_end_control_guid: str = '0300b14bff1100003708000010010000'
- right_end_control_guid: str = '030003f05e0400008e02000010010000'
-
- follower_arm: dict[str, MotorsBusConfig] = field(
- default_factory=lambda: {
- "main": RealmanDualMotorsBusConfig(
- axis={"left_joint": 6, "left_gripper": 1, "right_joint": 6, "right_gripper": 1},
- left_ip="192.168.3.18",
- left_port=8080,
- right_ip="192.168.3.19",
- right_port=8080,
- init_joint={"joint": [15, 90, 10, 80, 110, 0, 10, -15, 90, 10, -80, 110, 0, 10]},
- motors={
- # name: (index, model)
- "left_joint_1": [1, "realman"],
- "left_joint_2": [2, "realman"],
- "left_joint_3": [3, "realman"],
- "left_joint_4": [4, "realman"],
- "left_joint_5": [5, "realman"],
- "left_joint_6": [6, "realman"],
- "left_gripper": [7, "realman"],
- "right_joint_1": [8, "realman"],
- "right_joint_2": [9, "realman"],
- "right_joint_3": [10, "realman"],
- "right_joint_4": [11, "realman"],
- "right_joint_5": [12, "realman"],
- "right_joint_6": [13, "realman"],
- "right_gripper": [14, "realman"]
- },
- )
- }
- )
-
- cameras: dict[str, CameraConfig] = field(
- default_factory=lambda: {
- "left": IntelRealSenseCameraConfig(
- serial_number="153122077516",
- fps=30,
- width=640,
- height=480,
- use_depth=False
- ),
- "right": IntelRealSenseCameraConfig(
- serial_number="405622075165",
- fps=30,
- width=640,
- height=480,
- use_depth=False
- ),
- # "front": IntelRealSenseCameraConfig(
- # serial_number="145422072751",
- # fps=30,
- # width=640,
- # height=480,
- # use_depth=False
- # ),
- # "high": IntelRealSenseCameraConfig(
- # serial_number="145422072193",
- # fps=30,
- # width=640,
- # height=480,
- # use_depth=False
- # ),
- }
- )
\ No newline at end of file
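Before this removal, a robot config was typically instantiated directly (or selected by its registered `type` string through draccus) along the lines of the sketch below. The serial port and camera index are placeholders; everything else mirrors the dataclasses above.

```python
from lerobot.common.robot_devices.cameras.configs import OpenCVCameraConfig
from lerobot.common.robot_devices.motors.configs import FeetechMotorsBusConfig
from lerobot.common.robot_devices.robots.configs import So101RobotConfig

# Defaults come from the dataclass; override only what differs on your machine.
config = So101RobotConfig(
    follower_arms={
        "main": FeetechMotorsBusConfig(
            port="/dev/ttyACM0",  # placeholder
            motors={
                "shoulder_pan": [1, "sts3215"],
                "shoulder_lift": [2, "sts3215"],
                "elbow_flex": [3, "sts3215"],
                "wrist_flex": [4, "sts3215"],
                "wrist_roll": [5, "sts3215"],
                "gripper": [6, "sts3215"],
            },
        )
    },
    cameras={"laptop": OpenCVCameraConfig(camera_index=0, fps=30, width=640, height=480)},
    max_relative_target=5,  # limit relative motions for safety while testing
)
print(config.type)  # "so101", the name registered via draccus.ChoiceRegistry
```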
diff --git a/lerobot/common/robot_devices/robots/dynamixel_calibration.py b/lerobot/common/robot_devices/robots/dynamixel_calibration.py
deleted file mode 100644
index 98fe8754..00000000
--- a/lerobot/common/robot_devices/robots/dynamixel_calibration.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Logic to calibrate a robot arm built with dynamixel motors"""
-# TODO(rcadene, aliberts): move this logic into the robot code when refactoring
-
-import numpy as np
-
-from lerobot.common.robot_devices.motors.dynamixel import (
- CalibrationMode,
- TorqueMode,
- convert_degrees_to_steps,
-)
-from lerobot.common.robot_devices.motors.utils import MotorsBus
-
-URL_TEMPLATE = (
- "https://raw.githubusercontent.com/huggingface/lerobot/main/media/{robot}/{arm}_{position}.webp"
-)
-
-# The following positions are provided in nominal degree range ]-180, +180[
-# For more info on these constants, see comments in the code where they get used.
-ZERO_POSITION_DEGREE = 0
-ROTATED_POSITION_DEGREE = 90
-
-
-def assert_drive_mode(drive_mode):
- # `drive_mode` is in [0, 1], where 0 means the original rotation direction for the motor and 1 means inverted.
- if not np.all(np.isin(drive_mode, [0, 1])):
- raise ValueError(f"`drive_mode` contains values other than 0 or 1: ({drive_mode})")
-
-
-def apply_drive_mode(position, drive_mode):
- assert_drive_mode(drive_mode)
- # Convert `drive_mode` from [0, 1], where 0 indicates the original rotation direction and 1 inverted,
- # to [-1, 1], where 1 indicates the original rotation direction and -1 inverted.
- signed_drive_mode = -(drive_mode * 2 - 1)
- position *= signed_drive_mode
- return position
-
-
-def compute_nearest_rounded_position(position, models):
- delta_turn = convert_degrees_to_steps(ROTATED_POSITION_DEGREE, models)
- nearest_pos = np.round(position.astype(float) / delta_turn) * delta_turn
- return nearest_pos.astype(position.dtype)
-
-
-def run_arm_calibration(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str):
- """This function ensures that a neural network trained on data collected on a given robot
- can work on another robot. For instance before calibration, setting a same goal position
- for each motor of two different robots will get two very different positions. But after calibration,
- the two robots will move to the same position.To this end, this function computes the homing offset
- and the drive mode for each motor of a given robot.
-
- Homing offset is used to shift the motor position to a ]-2048, +2048[ nominal range (when the motor uses 2048 steps
- to complete half a turn). This range is set around an arbitrary "zero position" corresponding to all motor positions
- being 0. During the calibration process, you will need to manually move the robot to this "zero position".
-
- Drive mode is used to invert the rotation direction of the motor. This is useful when some motors have been assembled
- in the opposite orientation for some robots. During the calibration process, you will need to manually move the robot
- to the "rotated position".
-
- After calibration, the homing offsets and drive modes are stored in a cache.
-
- Example of usage:
- ```python
- run_arm_calibration(arm, "koch", "left", "follower")
- ```
- """
- if (arm.read("Torque_Enable") != TorqueMode.DISABLED.value).any():
- raise ValueError("To run calibration, the torque must be disabled on all motors.")
-
- print(f"\nRunning calibration of {robot_type} {arm_name} {arm_type}...")
-
- print("\nMove arm to zero position")
- print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="zero"))
- input("Press Enter to continue...")
-
- # We arbitrarily chose our zero target position to be a straight horizontal position with the gripper upwards and closed.
- # It is easy to identify, and all motors are in a "quarter turn" position. Once calibration is done, this position will
- # correspond to every motor angle being 0. If you then set a Goal Position of 0 for every motor, the arm will move to this position.
- zero_target_pos = convert_degrees_to_steps(ZERO_POSITION_DEGREE, arm.motor_models)
-
- # Compute homing offset so that `present_position + homing_offset ~= target_position`.
- zero_pos = arm.read("Present_Position")
- zero_nearest_pos = compute_nearest_rounded_position(zero_pos, arm.motor_models)
- homing_offset = zero_target_pos - zero_nearest_pos
-
- # The rotated target position corresponds to a rotation of a quarter turn from the zero position.
- # This allows us to identify the rotation direction of each motor.
- # For instance, if a motor rotates by 90 degrees and its value is -90 after applying the homing offset, then we know its rotation
- # direction is inverted. However, for the calibration to be successful, every motor needs to follow the same target position.
- # Sometimes, there is only one possible rotation direction. For instance, if the gripper is closed, there is only one direction which
- # corresponds to opening the gripper. When the rotation direction is ambiguous, we arbitrarily rotate clockwise from the point of view
- # of the previous motor in the kinematic chain.
- print("\nMove arm to rotated target position")
- print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="rotated"))
- input("Press Enter to continue...")
-
- rotated_target_pos = convert_degrees_to_steps(ROTATED_POSITION_DEGREE, arm.motor_models)
-
- # Find drive mode by rotating each motor by a quarter of a turn.
- # Drive mode indicates if the motor rotation direction should be inverted (=1) or not (=0).
- rotated_pos = arm.read("Present_Position")
- drive_mode = (rotated_pos < zero_pos).astype(np.int32)
-
- # Re-compute homing offset to take into account drive mode
- rotated_drived_pos = apply_drive_mode(rotated_pos, drive_mode)
- rotated_nearest_pos = compute_nearest_rounded_position(rotated_drived_pos, arm.motor_models)
- homing_offset = rotated_target_pos - rotated_nearest_pos
-
- print("\nMove arm to rest position")
- print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="rest"))
- input("Press Enter to continue...")
- print()
-
- # Joints with rotational motions are expressed in degrees in nominal range of [-180, 180]
- calib_mode = [CalibrationMode.DEGREE.name] * len(arm.motor_names)
-
- # TODO(rcadene): make type of joints (DEGREE or LINEAR) configurable from yaml?
- if robot_type in ["aloha"] and "gripper" in arm.motor_names:
- # Joints with linear motions (like gripper of Aloha) are expressed in nominal range of [0, 100]
- calib_idx = arm.motor_names.index("gripper")
- calib_mode[calib_idx] = CalibrationMode.LINEAR.name
-
- calib_data = {
- "homing_offset": homing_offset.tolist(),
- "drive_mode": drive_mode.tolist(),
- "start_pos": zero_pos.tolist(),
- "end_pos": rotated_pos.tolist(),
- "calib_mode": calib_mode,
- "motor_names": arm.motor_names,
- }
- return calib_data
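To make the homing-offset computation above concrete, here is a small numeric sketch with made-up readings (4096-step resolution, so a quarter turn is 1024 steps). It mirrors `apply_drive_mode` and `compute_nearest_rounded_position` rather than importing the deleted module.

```python
import numpy as np

quarter_turn = 1024  # 90 degrees at a 4096-step resolution

# Raw readings with the arm held in the "zero" pose, then in the "rotated" pose.
zero_pos = np.array([2030, 2075])
rotated_pos = np.array([3060, 1040])

# Drive mode: a motor whose value *decreased* after the +90 degree rotation is inverted.
drive_mode = (rotated_pos < zero_pos).astype(np.int32)        # -> [0, 1]

# Round the direction-corrected rotated reading to the nearest quarter turn,
# then pick the homing offset so that reading + offset lands on the 1024-step target.
signed = rotated_pos * np.where(drive_mode == 1, -1, 1)       # -> [3060, -1040]
nearest = np.round(signed / quarter_turn) * quarter_turn      # -> [3072, -1024]
homing_offset = quarter_turn - nearest                        # -> [-2048, 2048]

print(drive_mode, homing_offset)
```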
diff --git a/lerobot/common/robot_devices/robots/feetech_calibration.py b/lerobot/common/robot_devices/robots/feetech_calibration.py
deleted file mode 100644
index 343a6a28..00000000
--- a/lerobot/common/robot_devices/robots/feetech_calibration.py
+++ /dev/null
@@ -1,506 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Logic to calibrate a robot arm built with feetech motors"""
-# TODO(rcadene, aliberts): move this logic into the robot code when refactoring
-
-import time
-
-import numpy as np
-
-from lerobot.common.robot_devices.motors.feetech import (
- CalibrationMode,
- TorqueMode,
- convert_degrees_to_steps,
-)
-from lerobot.common.robot_devices.motors.utils import MotorsBus
-
-URL_TEMPLATE = (
- "https://raw.githubusercontent.com/huggingface/lerobot/main/media/{robot}/{arm}_{position}.webp"
-)
-
-# The following positions are provided in nominal degree range ]-180, +180[
-# For more info on these constants, see comments in the code where they get used.
-ZERO_POSITION_DEGREE = 0
-ROTATED_POSITION_DEGREE = 90
-
-
-def reset_middle_positions(arm: MotorsBus):
- input("Please move the robot to the new middle position for calibration, then press Enter...")
- # Write 128 to Torque_Enable for all motors.
- arm.write("Torque_Enable", 128)
-
-
-def assert_drive_mode(drive_mode):
- # `drive_mode` is in [0, 1], where 0 means the original rotation direction for the motor and 1 means inverted.
- if not np.all(np.isin(drive_mode, [0, 1])):
- raise ValueError(f"`drive_mode` contains values other than 0 or 1: ({drive_mode})")
-
-
-def apply_drive_mode(position, drive_mode):
- assert_drive_mode(drive_mode)
- # Convert `drive_mode` from [0, 1], where 0 indicates the original rotation direction and 1 inverted,
- # to [-1, 1], where 1 indicates the original rotation direction and -1 inverted.
- signed_drive_mode = -(drive_mode * 2 - 1)
- position *= signed_drive_mode
- return position
-
-
-def move_until_block(arm, motor_name, positive_direction=True, while_move_hook=None):
- count = 0
- while True:
- present_pos = arm.read("Present_Position", motor_name)
- if positive_direction:
- # Move +100 steps every time. Lower the steps to lower the speed at which the arm moves.
- arm.write("Goal_Position", present_pos + 100, motor_name)
- else:
- arm.write("Goal_Position", present_pos - 100, motor_name)
-
- if while_move_hook is not None:
- while_move_hook()
-
- present_pos = arm.read("Present_Position", motor_name).item()
- present_speed = arm.read("Present_Speed", motor_name).item()
- present_current = arm.read("Present_Current", motor_name).item()
- # present_load = arm.read("Present_Load", motor_name).item()
- # present_voltage = arm.read("Present_Voltage", motor_name).item()
- # present_temperature = arm.read("Present_Temperature", motor_name).item()
-
- # print(f"{present_pos=}")
- # print(f"{present_speed=}")
- # print(f"{present_current=}")
- # print(f"{present_load=}")
- # print(f"{present_voltage=}")
- # print(f"{present_temperature=}")
-
- if present_speed == 0 and present_current > 40:
- count += 1
- if count > 100 or present_current > 300:
- return present_pos
- else:
- count = 0
-
-
-def move_to_calibrate(
- arm,
- motor_name,
- invert_drive_mode=False,
- positive_first=True,
- in_between_move_hook=None,
- while_move_hook=None,
-):
- initial_pos = arm.read("Present_Position", motor_name)
-
- if positive_first:
- p_present_pos = move_until_block(
- arm, motor_name, positive_direction=True, while_move_hook=while_move_hook
- )
- else:
- n_present_pos = move_until_block(
- arm, motor_name, positive_direction=False, while_move_hook=while_move_hook
- )
-
- if in_between_move_hook is not None:
- in_between_move_hook()
-
- if positive_first:
- n_present_pos = move_until_block(
- arm, motor_name, positive_direction=False, while_move_hook=while_move_hook
- )
- else:
- p_present_pos = move_until_block(
- arm, motor_name, positive_direction=True, while_move_hook=while_move_hook
- )
-
- zero_pos = (n_present_pos + p_present_pos) / 2
-
- calib_data = {
- "initial_pos": initial_pos,
- "homing_offset": zero_pos if invert_drive_mode else -zero_pos,
- "invert_drive_mode": invert_drive_mode,
- "drive_mode": -1 if invert_drive_mode else 0,
- "zero_pos": zero_pos,
- "start_pos": n_present_pos if invert_drive_mode else p_present_pos,
- "end_pos": p_present_pos if invert_drive_mode else n_present_pos,
- }
- return calib_data
-
-
-def apply_offset(calib, offset):
- calib["zero_pos"] += offset
- if calib["drive_mode"]:
- calib["homing_offset"] += offset
- else:
- calib["homing_offset"] -= offset
- return calib
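A toy walk-through of how `move_to_calibrate` and `apply_offset` combine; all step counts below are invented.

```python
# move_to_calibrate drives the joint until it blocks in each direction and
# takes the midpoint of the two mechanical end stops as the zero position.
p_present_pos = 3100   # end stop reached moving in the positive direction
n_present_pos = 900    # end stop reached moving in the negative direction

zero_pos = (n_present_pos + p_present_pos) / 2   # midpoint of the travel: 2000.0
calib = {
    "homing_offset": -zero_pos,   # sign flips when invert_drive_mode=True
    "drive_mode": 0,
    "zero_pos": zero_pos,
    "start_pos": p_present_pos,   # start/end swap when invert_drive_mode=True
    "end_pos": n_present_pos,
}

# apply_offset then nudges zero_pos and compensates the homing offset in the
# direction given by drive_mode (here drive_mode == 0, so the offset is subtracted).
calib["zero_pos"] += 80
calib["homing_offset"] -= 80
print(calib["zero_pos"], calib["homing_offset"])  # 2080.0 -2080.0
```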
-
-
-def run_arm_auto_calibration(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str):
- if robot_type == "so100":
- return run_arm_auto_calibration_so100(arm, robot_type, arm_name, arm_type)
- elif robot_type == "moss":
- return run_arm_auto_calibration_moss(arm, robot_type, arm_name, arm_type)
- else:
- raise ValueError(robot_type)
-
-
-def run_arm_auto_calibration_so100(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str):
- """All the offsets and magic numbers are hand tuned, and are unique to SO-100 follower arms"""
- if (arm.read("Torque_Enable") != TorqueMode.DISABLED.value).any():
- raise ValueError("To run calibration, the torque must be disabled on all motors.")
-
- if not (robot_type == "so100" and arm_type == "follower"):
- raise NotImplementedError("Auto calibration only supports the follower of so100 arms for now.")
-
- print(f"\nRunning calibration of {robot_type} {arm_name} {arm_type}...")
-
- print("\nMove arm to initial position")
- print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="initial"))
- input("Press Enter to continue...")
-
- # Lower the acceleration of the motors (in [0,254])
- initial_acceleration = arm.read("Acceleration")
- arm.write("Lock", 0)
- arm.write("Acceleration", 10)
- time.sleep(1)
-
- arm.write("Torque_Enable", TorqueMode.ENABLED.value)
-
- print(f'{arm.read("Present_Position", "elbow_flex")=}')
-
- calib = {}
-
- init_wf_pos = arm.read("Present_Position", "wrist_flex")
- init_sl_pos = arm.read("Present_Position", "shoulder_lift")
- init_ef_pos = arm.read("Present_Position", "elbow_flex")
- arm.write("Goal_Position", init_wf_pos - 800, "wrist_flex")
- arm.write("Goal_Position", init_sl_pos + 150 + 1024, "shoulder_lift")
- arm.write("Goal_Position", init_ef_pos - 2048, "elbow_flex")
- time.sleep(2)
-
- print("Calibrate shoulder_pan")
- calib["shoulder_pan"] = move_to_calibrate(arm, "shoulder_pan")
- arm.write("Goal_Position", calib["shoulder_pan"]["zero_pos"], "shoulder_pan")
- time.sleep(1)
-
- print("Calibrate gripper")
- calib["gripper"] = move_to_calibrate(arm, "gripper", invert_drive_mode=True)
- time.sleep(1)
-
- print("Calibrate wrist_flex")
- calib["wrist_flex"] = move_to_calibrate(arm, "wrist_flex")
- calib["wrist_flex"] = apply_offset(calib["wrist_flex"], offset=80)
-
- def in_between_move_hook():
- nonlocal arm, calib
- time.sleep(2)
- ef_pos = arm.read("Present_Position", "elbow_flex")
- sl_pos = arm.read("Present_Position", "shoulder_lift")
- arm.write("Goal_Position", ef_pos + 1024, "elbow_flex")
- arm.write("Goal_Position", sl_pos - 1024, "shoulder_lift")
- time.sleep(2)
-
- print("Calibrate elbow_flex")
- calib["elbow_flex"] = move_to_calibrate(
- arm, "elbow_flex", positive_first=False, in_between_move_hook=in_between_move_hook
- )
- calib["elbow_flex"] = apply_offset(calib["elbow_flex"], offset=80 - 1024)
-
- arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"] + 1024 + 512, "elbow_flex")
- time.sleep(1)
-
- def in_between_move_hook():
- nonlocal arm, calib
- arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"], "elbow_flex")
-
- print("Calibrate shoulder_lift")
- calib["shoulder_lift"] = move_to_calibrate(
- arm,
- "shoulder_lift",
- invert_drive_mode=True,
- positive_first=False,
- in_between_move_hook=in_between_move_hook,
- )
- # Add an offset (a quarter turn minus 50 steps) to align the arm with the body
- calib["shoulder_lift"] = apply_offset(calib["shoulder_lift"], offset=1024 - 50)
-
- def while_move_hook():
- nonlocal arm, calib
- positions = {
- "shoulder_lift": round(calib["shoulder_lift"]["zero_pos"] - 1600),
- "elbow_flex": round(calib["elbow_flex"]["zero_pos"] + 1700),
- "wrist_flex": round(calib["wrist_flex"]["zero_pos"] + 800),
- "gripper": round(calib["gripper"]["end_pos"]),
- }
- arm.write("Goal_Position", list(positions.values()), list(positions.keys()))
-
- arm.write("Goal_Position", round(calib["shoulder_lift"]["zero_pos"] - 1600), "shoulder_lift")
- time.sleep(2)
- arm.write("Goal_Position", round(calib["elbow_flex"]["zero_pos"] + 1700), "elbow_flex")
- time.sleep(2)
- arm.write("Goal_Position", round(calib["wrist_flex"]["zero_pos"] + 800), "wrist_flex")
- time.sleep(2)
- arm.write("Goal_Position", round(calib["gripper"]["end_pos"]), "gripper")
- time.sleep(2)
-
- print("Calibrate wrist_roll")
- calib["wrist_roll"] = move_to_calibrate(
- arm, "wrist_roll", invert_drive_mode=True, positive_first=False, while_move_hook=while_move_hook
- )
-
- arm.write("Goal_Position", calib["wrist_roll"]["zero_pos"], "wrist_roll")
- time.sleep(1)
- arm.write("Goal_Position", calib["gripper"]["start_pos"], "gripper")
- time.sleep(1)
- arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"], "wrist_flex")
- time.sleep(1)
- arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"] + 2048, "elbow_flex")
- arm.write("Goal_Position", calib["shoulder_lift"]["zero_pos"] - 2048, "shoulder_lift")
- time.sleep(1)
- arm.write("Goal_Position", calib["shoulder_pan"]["zero_pos"], "shoulder_pan")
- time.sleep(1)
-
- calib_modes = []
- for name in arm.motor_names:
- if name == "gripper":
- calib_modes.append(CalibrationMode.LINEAR.name)
- else:
- calib_modes.append(CalibrationMode.DEGREE.name)
-
- calib_dict = {
- "homing_offset": [calib[name]["homing_offset"] for name in arm.motor_names],
- "drive_mode": [calib[name]["drive_mode"] for name in arm.motor_names],
- "start_pos": [calib[name]["start_pos"] for name in arm.motor_names],
- "end_pos": [calib[name]["end_pos"] for name in arm.motor_names],
- "calib_mode": calib_modes,
- "motor_names": arm.motor_names,
- }
-
- # Restore the original acceleration
- arm.write("Lock", 0)
- arm.write("Acceleration", initial_acceleration)
- time.sleep(1)
-
- return calib_dict
-
-
-def run_arm_auto_calibration_moss(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str):
- """All the offsets and magic numbers are hand tuned, and are unique to SO-100 follower arms"""
- if (arm.read("Torque_Enable") != TorqueMode.DISABLED.value).any():
- raise ValueError("To run calibration, the torque must be disabled on all motors.")
-
- if not (robot_type == "moss" and arm_type == "follower"):
- raise NotImplementedError("Auto calibration only supports the follower of moss arms for now.")
-
- print(f"\nRunning calibration of {robot_type} {arm_name} {arm_type}...")
-
- print("\nMove arm to initial position")
- print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="initial"))
- input("Press Enter to continue...")
-
- # Lower the acceleration of the motors (in [0,254])
- initial_acceleration = arm.read("Acceleration")
- arm.write("Lock", 0)
- arm.write("Acceleration", 10)
- time.sleep(1)
-
- arm.write("Torque_Enable", TorqueMode.ENABLED.value)
-
- sl_pos = arm.read("Present_Position", "shoulder_lift")
- arm.write("Goal_Position", sl_pos - 1024 - 450, "shoulder_lift")
- ef_pos = arm.read("Present_Position", "elbow_flex")
- arm.write("Goal_Position", ef_pos + 1024 + 450, "elbow_flex")
- time.sleep(2)
-
- calib = {}
-
- print("Calibrate shoulder_pan")
- calib["shoulder_pan"] = move_to_calibrate(arm, "shoulder_pan")
- arm.write("Goal_Position", calib["shoulder_pan"]["zero_pos"], "shoulder_pan")
- time.sleep(1)
-
- print("Calibrate gripper")
- calib["gripper"] = move_to_calibrate(arm, "gripper", invert_drive_mode=True)
- time.sleep(1)
-
- print("Calibrate wrist_flex")
- calib["wrist_flex"] = move_to_calibrate(arm, "wrist_flex", invert_drive_mode=True)
- calib["wrist_flex"] = apply_offset(calib["wrist_flex"], offset=-210 + 1024)
-
- wr_pos = arm.read("Present_Position", "wrist_roll")
- arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 1024, "wrist_flex")
- time.sleep(1)
- arm.write("Goal_Position", wr_pos - 1024, "wrist_roll")
- time.sleep(1)
- arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 2048, "wrist_flex")
- time.sleep(1)
- arm.write("Goal_Position", calib["gripper"]["end_pos"], "gripper")
- time.sleep(1)
-
- print("Calibrate wrist_roll")
- calib["wrist_roll"] = move_to_calibrate(arm, "wrist_roll", invert_drive_mode=True)
- calib["wrist_roll"] = apply_offset(calib["wrist_roll"], offset=790)
-
- arm.write("Goal_Position", calib["wrist_roll"]["zero_pos"] - 1024, "wrist_roll")
- arm.write("Goal_Position", calib["gripper"]["start_pos"], "gripper")
- arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 1024, "wrist_flex")
- time.sleep(1)
- arm.write("Goal_Position", calib["wrist_roll"]["zero_pos"], "wrist_roll")
- arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 2048, "wrist_flex")
-
- def in_between_move_elbow_flex_hook():
- nonlocal arm, calib
- arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"], "wrist_flex")
-
- print("Calibrate elbow_flex")
- calib["elbow_flex"] = move_to_calibrate(
- arm,
- "elbow_flex",
- invert_drive_mode=True,
- in_between_move_hook=in_between_move_elbow_flex_hook,
- )
- arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 1024, "wrist_flex")
-
- def in_between_move_shoulder_lift_hook():
- nonlocal arm, calib
- sl = arm.read("Present_Position", "shoulder_lift")
- arm.write("Goal_Position", sl - 1500, "shoulder_lift")
- time.sleep(1)
- arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"] + 1536, "elbow_flex")
- time.sleep(1)
- arm.write("Goal_Position", calib["wrist_flex"]["start_pos"], "wrist_flex")
- time.sleep(1)
-
- print("Calibrate shoulder_lift")
- calib["shoulder_lift"] = move_to_calibrate(
- arm, "shoulder_lift", in_between_move_hook=in_between_move_shoulder_lift_hook
- )
- calib["shoulder_lift"] = apply_offset(calib["shoulder_lift"], offset=-1024)
-
- arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 1024, "wrist_flex")
- time.sleep(1)
- arm.write("Goal_Position", calib["shoulder_lift"]["zero_pos"] + 2048, "shoulder_lift")
- arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"] - 1024 - 400, "elbow_flex")
- time.sleep(2)
-
- calib_modes = []
- for name in arm.motor_names:
- if name == "gripper":
- calib_modes.append(CalibrationMode.LINEAR.name)
- else:
- calib_modes.append(CalibrationMode.DEGREE.name)
-
- calib_dict = {
- "homing_offset": [calib[name]["homing_offset"] for name in arm.motor_names],
- "drive_mode": [calib[name]["drive_mode"] for name in arm.motor_names],
- "start_pos": [calib[name]["start_pos"] for name in arm.motor_names],
- "end_pos": [calib[name]["end_pos"] for name in arm.motor_names],
- "calib_mode": calib_modes,
- "motor_names": arm.motor_names,
- }
-
-    # Re-enable the original acceleration
- arm.write("Lock", 0)
- arm.write("Acceleration", initial_acceleration)
- time.sleep(1)
-
- return calib_dict
-
-
-def run_arm_manual_calibration(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str):
- """This function ensures that a neural network trained on data collected on a given robot
- can work on another robot. For instance before calibration, setting a same goal position
- for each motor of two different robots will get two very different positions. But after calibration,
- the two robots will move to the same position.To this end, this function computes the homing offset
- and the drive mode for each motor of a given robot.
-
- Homing offset is used to shift the motor position to a ]-2048, +2048[ nominal range (when the motor uses 2048 steps
-    to complete half a turn). This range is set around an arbitrary "zero position" corresponding to all motor positions
- being 0. During the calibration process, you will need to manually move the robot to this "zero position".
-
- Drive mode is used to invert the rotation direction of the motor. This is useful when some motors have been assembled
- in the opposite orientation for some robots. During the calibration process, you will need to manually move the robot
- to the "rotated position".
-
- After calibration, the homing offsets and drive modes are stored in a cache.
-
- Example of usage:
- ```python
- run_arm_calibration(arm, "so100", "left", "follower")
- ```
- """
- if (arm.read("Torque_Enable") != TorqueMode.DISABLED.value).any():
- raise ValueError("To run calibration, the torque must be disabled on all motors.")
-
- print(f"\nRunning calibration of {robot_type} {arm_name} {arm_type}...")
-
- reset_middle_positions(arm)
-
- print("\nMove arm to zero position")
- print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="zero"))
- input("Press Enter to continue...")
-
- # We arbitrarily chose our zero target position to be a straight horizontal position with gripper upwards and closed.
- # It is easy to identify and all motors are in a "quarter turn" position. Once calibration is done, this position will
-    # correspond to every motor angle being 0. If you set 0 as the Goal Position for all motors, the arm will move to this position.
- zero_target_pos = convert_degrees_to_steps(ZERO_POSITION_DEGREE, arm.motor_models)
-
- # Compute homing offset so that `present_position + homing_offset ~= target_position`.
- zero_pos = arm.read("Present_Position")
- homing_offset = zero_target_pos - zero_pos
-
- # The rotated target position corresponds to a rotation of a quarter turn from the zero position.
- # This allows to identify the rotation direction of each motor.
-    # For instance, if the motor rotates 90 degrees and its value is -90 after applying the homing offset, then we know its rotation direction
-    # is inverted. However, for the calibration to be successful, every motor needs to follow the same target position.
-    # Sometimes, there is only one possible rotation direction. For instance, if the gripper is closed, there is only one direction which
-    # corresponds to opening the gripper. When the rotation direction is ambiguous, we arbitrarily rotate clockwise from the point of view
-    # of the previous motor in the kinematic chain.
- print("\nMove arm to rotated target position")
- print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="rotated"))
- input("Press Enter to continue...")
-
- rotated_target_pos = convert_degrees_to_steps(ROTATED_POSITION_DEGREE, arm.motor_models)
-
- # Find drive mode by rotating each motor by a quarter of a turn.
- # Drive mode indicates if the motor rotation direction should be inverted (=1) or not (=0).
- rotated_pos = arm.read("Present_Position")
- drive_mode = (rotated_pos < zero_pos).astype(np.int32)
-
- # Re-compute homing offset to take into account drive mode
- rotated_drived_pos = apply_drive_mode(rotated_pos, drive_mode)
- homing_offset = rotated_target_pos - rotated_drived_pos
-
- print("\nMove arm to rest position")
- print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="rest"))
- input("Press Enter to continue...")
- print()
-
- # Joints with rotational motions are expressed in degrees in nominal range of [-180, 180]
- calib_modes = []
- for name in arm.motor_names:
- if name == "gripper":
- calib_modes.append(CalibrationMode.LINEAR.name)
- else:
- calib_modes.append(CalibrationMode.DEGREE.name)
-
- calib_dict = {
- "homing_offset": homing_offset.tolist(),
- "drive_mode": drive_mode.tolist(),
- "start_pos": zero_pos.tolist(),
- "end_pos": rotated_pos.tolist(),
- "calib_mode": calib_modes,
- "motor_names": arm.motor_names,
- }
- return calib_dict
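
To make the manual procedure above concrete, here is a small numeric sketch of how `run_arm_manual_calibration` derives the drive mode and homing offset for a single motor. The readings are illustrative only, and `apply_drive_mode` is assumed to simply negate the positions of motors whose drive mode is inverted (matching how it is used above):

```python
import numpy as np

# Illustrative 4096-step motor: the "zero" target is a quarter turn (2048 steps = half turn),
# and the "rotated" target is one quarter turn further, mirroring convert_degrees_to_steps().
zero_target_pos = np.array([2048])
rotated_target_pos = np.array([3072])

# Hypothetical raw readings taken while the arm is held in the two target poses.
zero_pos = np.array([1900])    # Present_Position at the "zero" pose
rotated_pos = np.array([876])  # Present_Position at the "rotated" pose

# Drive mode: the motor moved opposite to the expected rotation direction.
drive_mode = (rotated_pos < zero_pos).astype(np.int32)  # -> [1]

# Assumed behaviour of apply_drive_mode(): negate positions of inverted motors.
rotated_drived_pos = np.where(drive_mode == 1, -rotated_pos, rotated_pos)  # -> [-876]

# Homing offset so that drive-mode-corrected positions land on the targets.
homing_offset = rotated_target_pos - rotated_drived_pos  # -> [3948]

# Sanity check: the "zero" pose also maps onto its target after correction.
print(-zero_pos + homing_offset)  # -> [2048]
```
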
diff --git a/lerobot/common/robot_devices/robots/lekiwi_remote.py b/lerobot/common/robot_devices/robots/lekiwi_remote.py
deleted file mode 100644
index 7bf52d21..00000000
--- a/lerobot/common/robot_devices/robots/lekiwi_remote.py
+++ /dev/null
@@ -1,224 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import base64
-import json
-import threading
-import time
-from pathlib import Path
-
-import cv2
-import zmq
-
-from lerobot.common.robot_devices.robots.mobile_manipulator import LeKiwi
-
-
-def setup_zmq_sockets(config):
- context = zmq.Context()
- cmd_socket = context.socket(zmq.PULL)
- cmd_socket.setsockopt(zmq.CONFLATE, 1)
- cmd_socket.bind(f"tcp://*:{config.port}")
-
- video_socket = context.socket(zmq.PUSH)
- video_socket.setsockopt(zmq.CONFLATE, 1)
- video_socket.bind(f"tcp://*:{config.video_port}")
-
- return context, cmd_socket, video_socket
-
-
-def run_camera_capture(cameras, images_lock, latest_images_dict, stop_event):
- while not stop_event.is_set():
- local_dict = {}
- for name, cam in cameras.items():
- frame = cam.async_read()
- ret, buffer = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 90])
- if ret:
- local_dict[name] = base64.b64encode(buffer).decode("utf-8")
- else:
- local_dict[name] = ""
- with images_lock:
- latest_images_dict.update(local_dict)
- time.sleep(0.01)
-
-
-def calibrate_follower_arm(motors_bus, calib_dir_str):
- """
- Calibrates the follower arm. Attempts to load an existing calibration file;
- if not found, runs manual calibration and saves the result.
- """
- calib_dir = Path(calib_dir_str)
- calib_dir.mkdir(parents=True, exist_ok=True)
- calib_file = calib_dir / "main_follower.json"
- try:
- from lerobot.common.robot_devices.robots.feetech_calibration import run_arm_manual_calibration
- except ImportError:
- print("[WARNING] Calibration function not available. Skipping calibration.")
- return
-
- if calib_file.exists():
- with open(calib_file) as f:
- calibration = json.load(f)
- print(f"[INFO] Loaded calibration from {calib_file}")
- else:
- print("[INFO] Calibration file not found. Running manual calibration...")
- calibration = run_arm_manual_calibration(motors_bus, "lekiwi", "follower_arm", "follower")
- print(f"[INFO] Calibration complete. Saving to {calib_file}")
- with open(calib_file, "w") as f:
- json.dump(calibration, f)
- try:
- motors_bus.set_calibration(calibration)
- print("[INFO] Applied calibration for follower arm.")
- except Exception as e:
- print(f"[WARNING] Could not apply calibration: {e}")
-
-
-def run_lekiwi(robot_config):
- """
- Runs the LeKiwi robot:
- - Sets up cameras and connects them.
- - Initializes the follower arm motors.
- - Calibrates the follower arm if necessary.
- - Creates ZeroMQ sockets for receiving commands and streaming observations.
- - Processes incoming commands (arm and wheel commands) and sends back sensor and camera data.
- """
- # Import helper functions and classes
- from lerobot.common.robot_devices.cameras.utils import make_cameras_from_configs
- from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus, TorqueMode
-
- # Initialize cameras from the robot configuration.
- cameras = make_cameras_from_configs(robot_config.cameras)
- for cam in cameras.values():
- cam.connect()
-
- # Initialize the motors bus using the follower arm configuration.
- motor_config = robot_config.follower_arms.get("main")
- if motor_config is None:
- print("[ERROR] Follower arm 'main' configuration not found.")
- return
- motors_bus = FeetechMotorsBus(motor_config)
- motors_bus.connect()
-
- # Calibrate the follower arm.
- calibrate_follower_arm(motors_bus, robot_config.calibration_dir)
-
- # Create the LeKiwi robot instance.
- robot = LeKiwi(motors_bus)
-
- # Define the expected arm motor IDs.
- arm_motor_ids = ["shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper"]
-
- # Disable torque for each arm motor.
- for motor in arm_motor_ids:
- motors_bus.write("Torque_Enable", TorqueMode.DISABLED.value, motor)
-
- # Set up ZeroMQ sockets.
- context, cmd_socket, video_socket = setup_zmq_sockets(robot_config)
-
- # Start the camera capture thread.
- latest_images_dict = {}
- images_lock = threading.Lock()
- stop_event = threading.Event()
- cam_thread = threading.Thread(
- target=run_camera_capture, args=(cameras, images_lock, latest_images_dict, stop_event), daemon=True
- )
- cam_thread.start()
-
- last_cmd_time = time.time()
- print("LeKiwi robot server started. Waiting for commands...")
-
- try:
- while True:
- loop_start_time = time.time()
-
- # Process incoming commands (non-blocking).
- while True:
- try:
- msg = cmd_socket.recv_string(zmq.NOBLOCK)
- except zmq.Again:
- break
- try:
- data = json.loads(msg)
- # Process arm position commands.
- if "arm_positions" in data:
- arm_positions = data["arm_positions"]
- if not isinstance(arm_positions, list):
- print(f"[ERROR] Invalid arm_positions: {arm_positions}")
- elif len(arm_positions) < len(arm_motor_ids):
- print(
- f"[WARNING] Received {len(arm_positions)} arm positions, expected {len(arm_motor_ids)}"
- )
- else:
- for motor, pos in zip(arm_motor_ids, arm_positions, strict=False):
- motors_bus.write("Goal_Position", pos, motor)
- # Process wheel (base) commands.
- if "raw_velocity" in data:
- raw_command = data["raw_velocity"]
- # Expect keys: "left_wheel", "back_wheel", "right_wheel".
- command_speeds = [
- int(raw_command.get("left_wheel", 0)),
- int(raw_command.get("back_wheel", 0)),
- int(raw_command.get("right_wheel", 0)),
- ]
- robot.set_velocity(command_speeds)
- last_cmd_time = time.time()
- except Exception as e:
- print(f"[ERROR] Parsing message failed: {e}")
-
- # Watchdog: stop the robot if no command is received for over 0.5 seconds.
- now = time.time()
- if now - last_cmd_time > 0.5:
- robot.stop()
- last_cmd_time = now
-
- # Read current wheel speeds from the robot.
- current_velocity = robot.read_velocity()
-
- # Read the follower arm state from the motors bus.
- follower_arm_state = []
- for motor in arm_motor_ids:
- try:
- pos = motors_bus.read("Present_Position", motor)
- # Convert the position to a float (or use as is if already numeric).
- follower_arm_state.append(float(pos) if not isinstance(pos, (int, float)) else pos)
- except Exception as e:
- print(f"[ERROR] Reading motor {motor} failed: {e}")
-
- # Get the latest camera images.
- with images_lock:
- images_dict_copy = dict(latest_images_dict)
-
- # Build the observation dictionary.
- observation = {
- "images": images_dict_copy,
- "present_speed": current_velocity,
- "follower_arm_state": follower_arm_state,
- }
- # Send the observation over the video socket.
- video_socket.send_string(json.dumps(observation))
-
- # Ensure a short sleep to avoid overloading the CPU.
- elapsed = time.time() - loop_start_time
- time.sleep(
- max(0.033 - elapsed, 0)
- ) # If robot jitters increase the sleep and monitor cpu load with `top` in cmd
- except KeyboardInterrupt:
- print("Shutting down LeKiwi server.")
- finally:
- stop_event.set()
- cam_thread.join()
- robot.stop()
- motors_bus.disconnect()
- cmd_socket.close()
- video_socket.close()
- context.term()
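
For reference, everything this deleted server exchanged over the wire was plain JSON on two ZeroMQ sockets. A minimal client sketch that would drive `run_lekiwi` could look like the following; the IP address and ports are placeholders, and in practice they come from `LeKiwiRobotConfig`:

```python
import json
import time

import zmq

# Placeholder connection details; the real values come from the robot configuration.
REMOTE_IP = "192.168.1.42"
CMD_PORT = 5555
VIDEO_PORT = 5556

context = zmq.Context()

# Commands go to the PULL socket bound by setup_zmq_sockets() on the robot.
cmd_socket = context.socket(zmq.PUSH)
cmd_socket.setsockopt(zmq.CONFLATE, 1)
cmd_socket.connect(f"tcp://{REMOTE_IP}:{CMD_PORT}")

# Observations (base64 JPEG frames, wheel speeds, arm state) stream back on the PUSH socket.
obs_socket = context.socket(zmq.PULL)
obs_socket.setsockopt(zmq.CONFLATE, 1)
obs_socket.connect(f"tcp://{REMOTE_IP}:{VIDEO_PORT}")

# One teleop tick: six arm goal positions plus raw wheel velocities, in the format parsed by run_lekiwi().
command = {
    "arm_positions": [2048, 2048, 2048, 2048, 2048, 2048],
    "raw_velocity": {"left_wheel": 0, "back_wheel": 0, "right_wheel": 0},
}
cmd_socket.send_string(json.dumps(command))

time.sleep(0.1)
observation = json.loads(obs_socket.recv_string())
print(observation.keys())  # dict_keys(['images', 'present_speed', 'follower_arm_state'])

cmd_socket.close()
obs_socket.close()
context.term()
```

This is essentially what the `MobileManipulator` class removed further below did on the laptop side, with the addition of calibration, keyboard teleoperation, and wheel kinematics.
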
diff --git a/lerobot/common/robot_devices/robots/manipulator.py b/lerobot/common/robot_devices/robots/manipulator.py
deleted file mode 100644
index ebf7c399..00000000
--- a/lerobot/common/robot_devices/robots/manipulator.py
+++ /dev/null
@@ -1,627 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Contains logic to instantiate a robot, read information from its motors and cameras,
-and send orders to its motors.
-"""
-# TODO(rcadene, aliberts): reorganize the codebase into one file per robot, with the associated
-# calibration procedure, to make it easy for people to add their own robot.
-
-import json
-import logging
-import time
-import warnings
-from pathlib import Path
-
-import numpy as np
-import torch
-
-from lerobot.common.robot_devices.cameras.utils import make_cameras_from_configs
-from lerobot.common.robot_devices.motors.utils import MotorsBus, make_motors_buses_from_configs
-from lerobot.common.robot_devices.robots.configs import ManipulatorRobotConfig
-from lerobot.common.robot_devices.robots.utils import get_arm_id
-from lerobot.common.robot_devices.utils import RobotDeviceAlreadyConnectedError, RobotDeviceNotConnectedError
-
-
-def ensure_safe_goal_position(
- goal_pos: torch.Tensor, present_pos: torch.Tensor, max_relative_target: float | list[float]
-):
- # Cap relative action target magnitude for safety.
- diff = goal_pos - present_pos
- max_relative_target = torch.tensor(max_relative_target)
- safe_diff = torch.minimum(diff, max_relative_target)
- safe_diff = torch.maximum(safe_diff, -max_relative_target)
- safe_goal_pos = present_pos + safe_diff
-
- if not torch.allclose(goal_pos, safe_goal_pos):
- logging.warning(
- "Relative goal position magnitude had to be clamped to be safe.\n"
- f" requested relative goal position target: {diff}\n"
- f" clamped relative goal position target: {safe_diff}"
- )
-
- return safe_goal_pos
-
-
-class ManipulatorRobot:
- # TODO(rcadene): Implement force feedback
- """This class allows to control any manipulator robot of various number of motors.
-
-    Non-exhaustive list of robots:
- - [Koch v1.0](https://github.com/AlexanderKoch-Koch/low_cost_robot), with and without the wrist-to-elbow expansion, developed
- by Alexander Koch from [Tau Robotics](https://tau-robotics.com)
- - [Koch v1.1](https://github.com/jess-moss/koch-v1-1) developed by Jess Moss
- - [Aloha](https://www.trossenrobotics.com/aloha-kits) developed by Trossen Robotics
-
- Example of instantiation, a pre-defined robot config is required:
- ```python
- robot = ManipulatorRobot(KochRobotConfig())
- ```
-
- Example of overwriting motors during instantiation:
- ```python
- # Defines how to communicate with the motors of the leader and follower arms
- leader_arms = {
- "main": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem575E0031751",
- motors={
- # name: (index, model)
- "shoulder_pan": (1, "xl330-m077"),
- "shoulder_lift": (2, "xl330-m077"),
- "elbow_flex": (3, "xl330-m077"),
- "wrist_flex": (4, "xl330-m077"),
- "wrist_roll": (5, "xl330-m077"),
- "gripper": (6, "xl330-m077"),
- },
- ),
- }
- follower_arms = {
- "main": DynamixelMotorsBusConfig(
- port="/dev/tty.usbmodem575E0032081",
- motors={
- # name: (index, model)
- "shoulder_pan": (1, "xl430-w250"),
- "shoulder_lift": (2, "xl430-w250"),
- "elbow_flex": (3, "xl330-m288"),
- "wrist_flex": (4, "xl330-m288"),
- "wrist_roll": (5, "xl330-m288"),
- "gripper": (6, "xl330-m288"),
- },
- ),
- }
- robot_config = KochRobotConfig(leader_arms=leader_arms, follower_arms=follower_arms)
- robot = ManipulatorRobot(robot_config)
- ```
-
- Example of overwriting cameras during instantiation:
- ```python
- # Defines how to communicate with 2 cameras connected to the computer.
- # Here, the webcam of the laptop and the phone (connected in USB to the laptop)
- # can be reached respectively using the camera indices 0 and 1. These indices can be
- # arbitrary. See the documentation of `OpenCVCamera` to find your own camera indices.
- cameras = {
- "laptop": OpenCVCamera(camera_index=0, fps=30, width=640, height=480),
- "phone": OpenCVCamera(camera_index=1, fps=30, width=640, height=480),
- }
- robot = ManipulatorRobot(KochRobotConfig(cameras=cameras))
- ```
-
- Once the robot is instantiated, connect motors buses and cameras if any (Required):
- ```python
- robot.connect()
- ```
-
- Example of highest frequency teleoperation, which doesn't require cameras:
- ```python
- while True:
- robot.teleop_step()
- ```
-
- Example of highest frequency data collection from motors and cameras (if any):
- ```python
- while True:
- observation, action = robot.teleop_step(record_data=True)
- ```
-
- Example of controlling the robot with a policy:
- ```python
- while True:
- # Uses the follower arms and cameras to capture an observation
- observation = robot.capture_observation()
-
- # Assumes a policy has been instantiated
- with torch.inference_mode():
- action = policy.select_action(observation)
-
- # Orders the robot to move
- robot.send_action(action)
- ```
-
-    Example of disconnecting, which is not mandatory since we disconnect when the object is deleted:
- ```python
- robot.disconnect()
- ```
- """
-
- def __init__(
- self,
- config: ManipulatorRobotConfig,
- ):
- self.config = config
- self.robot_type = self.config.type
- self.calibration_dir = Path(self.config.calibration_dir)
- self.leader_arms = make_motors_buses_from_configs(self.config.leader_arms)
- self.follower_arms = make_motors_buses_from_configs(self.config.follower_arms)
- self.cameras = make_cameras_from_configs(self.config.cameras)
- self.is_connected = False
- self.logs = {}
-
- def get_motor_names(self, arm: dict[str, MotorsBus]) -> list:
- return [f"{arm}_{motor}" for arm, bus in arm.items() for motor in bus.motors]
-
- @property
- def camera_features(self) -> dict:
- cam_ft = {}
- for cam_key, cam in self.cameras.items():
- key = f"observation.images.{cam_key}"
- cam_ft[key] = {
- "shape": (cam.height, cam.width, cam.channels),
- "names": ["height", "width", "channels"],
- "info": None,
- }
- return cam_ft
-
- @property
- def motor_features(self) -> dict:
- action_names = self.get_motor_names(self.leader_arms)
- state_names = self.get_motor_names(self.leader_arms)
- return {
- "action": {
- "dtype": "float32",
- "shape": (len(action_names),),
- "names": action_names,
- },
- "observation.state": {
- "dtype": "float32",
- "shape": (len(state_names),),
- "names": state_names,
- },
- }
-
- @property
- def features(self):
- return {**self.motor_features, **self.camera_features}
-
- @property
- def has_camera(self):
- return len(self.cameras) > 0
-
- @property
- def num_cameras(self):
- return len(self.cameras)
-
- @property
- def available_arms(self):
- available_arms = []
- for name in self.follower_arms:
- arm_id = get_arm_id(name, "follower")
- available_arms.append(arm_id)
- for name in self.leader_arms:
- arm_id = get_arm_id(name, "leader")
- available_arms.append(arm_id)
- return available_arms
-
- def connect(self):
- if self.is_connected:
- raise RobotDeviceAlreadyConnectedError(
- "ManipulatorRobot is already connected. Do not run `robot.connect()` twice."
- )
-
- if not self.leader_arms and not self.follower_arms and not self.cameras:
- raise ValueError(
- "ManipulatorRobot doesn't have any device to connect. See example of usage in docstring of the class."
- )
-
- # Connect the arms
- for name in self.follower_arms:
- print(f"Connecting {name} follower arm.")
- self.follower_arms[name].connect()
- for name in self.leader_arms:
- print(f"Connecting {name} leader arm.")
- self.leader_arms[name].connect()
-
- if self.robot_type in ["koch", "koch_bimanual", "aloha"]:
- from lerobot.common.robot_devices.motors.dynamixel import TorqueMode
- elif self.robot_type in ["so100", "so101", "moss", "lekiwi"]:
- from lerobot.common.robot_devices.motors.feetech import TorqueMode
-
- # We assume that at connection time, arms are in a rest position, and torque can
- # be safely disabled to run calibration and/or set robot preset configurations.
- for name in self.follower_arms:
- self.follower_arms[name].write("Torque_Enable", TorqueMode.DISABLED.value)
- for name in self.leader_arms:
- self.leader_arms[name].write("Torque_Enable", TorqueMode.DISABLED.value)
-
- self.activate_calibration()
-
- # Set robot preset (e.g. torque in leader gripper for Koch v1.1)
- if self.robot_type in ["koch", "koch_bimanual"]:
- self.set_koch_robot_preset()
- elif self.robot_type == "aloha":
- self.set_aloha_robot_preset()
- elif self.robot_type in ["so100", "so101", "moss", "lekiwi"]:
- self.set_so100_robot_preset()
-
- # Enable torque on all motors of the follower arms
- for name in self.follower_arms:
- print(f"Activating torque on {name} follower arm.")
- self.follower_arms[name].write("Torque_Enable", 1)
-
- if self.config.gripper_open_degree is not None:
- if self.robot_type not in ["koch", "koch_bimanual"]:
- raise NotImplementedError(
- f"{self.robot_type} does not support position AND current control in the handle, which is require to set the gripper open."
- )
- # Set the leader arm in torque mode with the gripper motor set to an angle. This makes it possible
- # to squeeze the gripper and have it spring back to an open position on its own.
- for name in self.leader_arms:
- self.leader_arms[name].write("Torque_Enable", 1, "gripper")
- self.leader_arms[name].write("Goal_Position", self.config.gripper_open_degree, "gripper")
-
- # Check both arms can be read
- for name in self.follower_arms:
- self.follower_arms[name].read("Present_Position")
- for name in self.leader_arms:
- self.leader_arms[name].read("Present_Position")
-
- # Connect the cameras
- for name in self.cameras:
- self.cameras[name].connect()
-
- self.is_connected = True
-
- def activate_calibration(self):
- """After calibration all motors function in human interpretable ranges.
- Rotations are expressed in degrees in nominal range of [-180, 180],
- and linear motions (like gripper of Aloha) in nominal range of [0, 100].
- """
-
- def load_or_run_calibration_(name, arm, arm_type):
- arm_id = get_arm_id(name, arm_type)
- arm_calib_path = self.calibration_dir / f"{arm_id}.json"
-
- if arm_calib_path.exists():
- with open(arm_calib_path) as f:
- calibration = json.load(f)
- else:
- # TODO(rcadene): display a warning in __init__ if calibration file not available
- print(f"Missing calibration file '{arm_calib_path}'")
-
- if self.robot_type in ["koch", "koch_bimanual", "aloha"]:
- from lerobot.common.robot_devices.robots.dynamixel_calibration import run_arm_calibration
-
- calibration = run_arm_calibration(arm, self.robot_type, name, arm_type)
-
- elif self.robot_type in ["so100", "so101", "moss", "lekiwi"]:
- from lerobot.common.robot_devices.robots.feetech_calibration import (
- run_arm_manual_calibration,
- )
-
- calibration = run_arm_manual_calibration(arm, self.robot_type, name, arm_type)
-
- print(f"Calibration is done! Saving calibration file '{arm_calib_path}'")
- arm_calib_path.parent.mkdir(parents=True, exist_ok=True)
- with open(arm_calib_path, "w") as f:
- json.dump(calibration, f)
-
- return calibration
-
- for name, arm in self.follower_arms.items():
- calibration = load_or_run_calibration_(name, arm, "follower")
- arm.set_calibration(calibration)
- for name, arm in self.leader_arms.items():
- calibration = load_or_run_calibration_(name, arm, "leader")
- arm.set_calibration(calibration)
-
- def set_koch_robot_preset(self):
- def set_operating_mode_(arm):
- from lerobot.common.robot_devices.motors.dynamixel import TorqueMode
-
- if (arm.read("Torque_Enable") != TorqueMode.DISABLED.value).any():
- raise ValueError("To run set robot preset, the torque must be disabled on all motors.")
-
-            # Use 'extended position mode' for all motors except the gripper, because in joint mode the servos can't
-            # rotate more than 360 degrees (from 0 to 4095). Some mistakes can happen while assembling the arm:
-            # you could end up with a servo at position 0 or 4095 at a crucial point. See
-            # https://emanual.robotis.com/docs/en/dxl/x/x_series/#operating-mode11
- all_motors_except_gripper = [name for name in arm.motor_names if name != "gripper"]
- if len(all_motors_except_gripper) > 0:
- # 4 corresponds to Extended Position on Koch motors
- arm.write("Operating_Mode", 4, all_motors_except_gripper)
-
-            # Use 'position control current based' for the gripper so that it is limited by the current limit.
-            # For the follower gripper, this means it can grasp an object without forcing too much, even though
-            # its goal position is a complete grasp (both gripper fingers are ordered to join and reach a touch).
-            # For the leader gripper, it means we can use it as a physical trigger, since we can force it to move
-            # with our finger, and it will move back to its original target position when we release the force.
-            # 5 corresponds to Current Controlled Position on Koch gripper motors "xl330-m077, xl330-m288"
- arm.write("Operating_Mode", 5, "gripper")
-
- for name in self.follower_arms:
- set_operating_mode_(self.follower_arms[name])
-
- # Set better PID values to close the gap between recorded states and actions
- # TODO(rcadene): Implement an automatic procedure to set optimal PID values for each motor
- self.follower_arms[name].write("Position_P_Gain", 1500, "elbow_flex")
- self.follower_arms[name].write("Position_I_Gain", 0, "elbow_flex")
- self.follower_arms[name].write("Position_D_Gain", 600, "elbow_flex")
-
- if self.config.gripper_open_degree is not None:
- for name in self.leader_arms:
- set_operating_mode_(self.leader_arms[name])
-
- # Enable torque on the gripper of the leader arms, and move it to 45 degrees,
- # so that we can use it as a trigger to close the gripper of the follower arms.
- self.leader_arms[name].write("Torque_Enable", 1, "gripper")
- self.leader_arms[name].write("Goal_Position", self.config.gripper_open_degree, "gripper")
-
- def set_aloha_robot_preset(self):
- def set_shadow_(arm):
- # Set secondary/shadow ID for shoulder and elbow. These joints have two motors.
- # As a result, if only one of them is required to move to a certain position,
- # the other will follow. This is to avoid breaking the motors.
- if "shoulder_shadow" in arm.motor_names:
- shoulder_idx = arm.read("ID", "shoulder")
- arm.write("Secondary_ID", shoulder_idx, "shoulder_shadow")
-
- if "elbow_shadow" in arm.motor_names:
- elbow_idx = arm.read("ID", "elbow")
- arm.write("Secondary_ID", elbow_idx, "elbow_shadow")
-
- for name in self.follower_arms:
- set_shadow_(self.follower_arms[name])
-
- for name in self.leader_arms:
- set_shadow_(self.leader_arms[name])
-
- for name in self.follower_arms:
- # Set a velocity limit of 131 as advised by Trossen Robotics
- self.follower_arms[name].write("Velocity_Limit", 131)
-
-            # Use 'extended position mode' for all motors except the gripper, because in joint mode the servos can't
-            # rotate more than 360 degrees (from 0 to 4095). Some mistakes can happen while assembling the arm:
-            # you could end up with a servo at position 0 or 4095 at a crucial point. See
-            # https://emanual.robotis.com/docs/en/dxl/x/x_series/#operating-mode11
- all_motors_except_gripper = [
- name for name in self.follower_arms[name].motor_names if name != "gripper"
- ]
- if len(all_motors_except_gripper) > 0:
- # 4 corresponds to Extended Position on Aloha motors
- self.follower_arms[name].write("Operating_Mode", 4, all_motors_except_gripper)
-
-            # Use 'position control current based' for the follower gripper so that it is limited by the current limit.
-            # It can grasp an object without forcing too much, even though its goal position is a complete grasp
-            # (both gripper fingers are ordered to join and reach a touch).
-            # 5 corresponds to Current Controlled Position on the Aloha follower gripper "xm430-w350"
- self.follower_arms[name].write("Operating_Mode", 5, "gripper")
-
- # Note: We can't enable torque on the leader gripper since "xc430-w150" doesn't have
- # a Current Controlled Position mode.
-
- if self.config.gripper_open_degree is not None:
- warnings.warn(
- f"`gripper_open_degree` is set to {self.config.gripper_open_degree}, but None is expected for Aloha instead",
- stacklevel=1,
- )
-
- def set_so100_robot_preset(self):
- for name in self.follower_arms:
- # Mode=0 for Position Control
- self.follower_arms[name].write("Mode", 0)
- # Set P_Coefficient to lower value to avoid shakiness (Default is 32)
- self.follower_arms[name].write("P_Coefficient", 16)
- # Set I_Coefficient and D_Coefficient to default value 0 and 32
- self.follower_arms[name].write("I_Coefficient", 0)
- self.follower_arms[name].write("D_Coefficient", 32)
- # Close the write lock so that Maximum_Acceleration gets written to EPROM address,
- # which is mandatory for Maximum_Acceleration to take effect after rebooting.
- self.follower_arms[name].write("Lock", 0)
- # Set Maximum_Acceleration to 254 to speedup acceleration and deceleration of
- # the motors. Note: this configuration is not in the official STS3215 Memory Table
- self.follower_arms[name].write("Maximum_Acceleration", 254)
- self.follower_arms[name].write("Acceleration", 254)
-
- def teleop_step(
- self, record_data=False
- ) -> None | tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]:
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- "ManipulatorRobot is not connected. You need to run `robot.connect()`."
- )
-
- # Prepare to assign the position of the leader to the follower
- leader_pos = {}
- for name in self.leader_arms:
- before_lread_t = time.perf_counter()
- leader_pos[name] = self.leader_arms[name].read("Present_Position")
- leader_pos[name] = torch.from_numpy(leader_pos[name])
- self.logs[f"read_leader_{name}_pos_dt_s"] = time.perf_counter() - before_lread_t
-
- # Send goal position to the follower
- follower_goal_pos = {}
- for name in self.follower_arms:
- before_fwrite_t = time.perf_counter()
- goal_pos = leader_pos[name]
-
- # Cap goal position when too far away from present position.
- # Slower fps expected due to reading from the follower.
- if self.config.max_relative_target is not None:
- present_pos = self.follower_arms[name].read("Present_Position")
- present_pos = torch.from_numpy(present_pos)
- goal_pos = ensure_safe_goal_position(goal_pos, present_pos, self.config.max_relative_target)
-
- # Used when record_data=True
- follower_goal_pos[name] = goal_pos
-
- goal_pos = goal_pos.numpy().astype(np.float32)
- self.follower_arms[name].write("Goal_Position", goal_pos)
- self.logs[f"write_follower_{name}_goal_pos_dt_s"] = time.perf_counter() - before_fwrite_t
-
- # Early exit when recording data is not requested
- if not record_data:
- return
-
- # TODO(rcadene): Add velocity and other info
- # Read follower position
- follower_pos = {}
- for name in self.follower_arms:
- before_fread_t = time.perf_counter()
- follower_pos[name] = self.follower_arms[name].read("Present_Position")
- follower_pos[name] = torch.from_numpy(follower_pos[name])
- self.logs[f"read_follower_{name}_pos_dt_s"] = time.perf_counter() - before_fread_t
-
- # Create state by concatenating follower current position
- state = []
- for name in self.follower_arms:
- if name in follower_pos:
- state.append(follower_pos[name])
- state = torch.cat(state)
-
- # Create action by concatenating follower goal position
- action = []
- for name in self.follower_arms:
- if name in follower_goal_pos:
- action.append(follower_goal_pos[name])
- action = torch.cat(action)
-
- # Capture images from cameras
- images = {}
- for name in self.cameras:
- before_camread_t = time.perf_counter()
- images[name] = self.cameras[name].async_read()
- images[name] = torch.from_numpy(images[name])
- self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
- self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t
-
- # Populate output dictionaries
- obs_dict, action_dict = {}, {}
- obs_dict["observation.state"] = state
- action_dict["action"] = action
- for name in self.cameras:
- obs_dict[f"observation.images.{name}"] = images[name]
-
- return obs_dict, action_dict
-
- def capture_observation(self):
- """The returned observations do not have a batch dimension."""
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- "ManipulatorRobot is not connected. You need to run `robot.connect()`."
- )
-
- # Read follower position
- follower_pos = {}
- for name in self.follower_arms:
- before_fread_t = time.perf_counter()
- follower_pos[name] = self.follower_arms[name].read("Present_Position")
- follower_pos[name] = torch.from_numpy(follower_pos[name])
- self.logs[f"read_follower_{name}_pos_dt_s"] = time.perf_counter() - before_fread_t
-
- # Create state by concatenating follower current position
- state = []
- for name in self.follower_arms:
- if name in follower_pos:
- state.append(follower_pos[name])
- state = torch.cat(state)
-
- # Capture images from cameras
- images = {}
- for name in self.cameras:
- before_camread_t = time.perf_counter()
- images[name] = self.cameras[name].async_read()
- images[name] = torch.from_numpy(images[name])
- self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
- self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t
-
- # Populate output dictionaries and format to pytorch
- obs_dict = {}
- obs_dict["observation.state"] = state
- for name in self.cameras:
- obs_dict[f"observation.images.{name}"] = images[name]
- return obs_dict
-
- def send_action(self, action: torch.Tensor) -> torch.Tensor:
- """Command the follower arms to move to a target joint configuration.
-
- The relative action magnitude may be clipped depending on the configuration parameter
- `max_relative_target`. In this case, the action sent differs from original action.
- Thus, this function always returns the action actually sent.
-
- Args:
- action: tensor containing the concatenated goal positions for the follower arms.
- """
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- "ManipulatorRobot is not connected. You need to run `robot.connect()`."
- )
-
- from_idx = 0
- to_idx = 0
- action_sent = []
- for name in self.follower_arms:
- # Get goal position of each follower arm by splitting the action vector
- to_idx += len(self.follower_arms[name].motor_names)
- goal_pos = action[from_idx:to_idx]
- from_idx = to_idx
-
- # Cap goal position when too far away from present position.
- # Slower fps expected due to reading from the follower.
- if self.config.max_relative_target is not None:
- present_pos = self.follower_arms[name].read("Present_Position")
- present_pos = torch.from_numpy(present_pos)
- goal_pos = ensure_safe_goal_position(goal_pos, present_pos, self.config.max_relative_target)
-
- # Save tensor to concat and return
- action_sent.append(goal_pos)
-
- # Send goal position to each follower
- goal_pos = goal_pos.numpy().astype(np.float32)
- self.follower_arms[name].write("Goal_Position", goal_pos)
-
- return torch.cat(action_sent)
-
- def print_logs(self):
- pass
- # TODO(aliberts): move robot-specific logs logic here
-
- def disconnect(self):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- "ManipulatorRobot is not connected. You need to run `robot.connect()` before disconnecting."
- )
-
- for name in self.follower_arms:
- self.follower_arms[name].disconnect()
-
- for name in self.leader_arms:
- self.leader_arms[name].disconnect()
-
- for name in self.cameras:
- self.cameras[name].disconnect()
-
- self.is_connected = False
-
- def __del__(self):
- if getattr(self, "is_connected", False):
- self.disconnect()
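
The `max_relative_target` safety clamp used by both `teleop_step` and `send_action` above is easiest to see with concrete numbers. A small sketch of the behaviour of `ensure_safe_goal_position`, with illustrative values only:

```python
import torch

# Illustrative present and requested goal positions for three motors (in motor steps).
present_pos = torch.tensor([2048.0, 1024.0, 3072.0])
goal_pos = torch.tensor([2300.0, 1000.0, 2500.0])
max_relative_target = 100.0  # allow at most 100 steps of motion per call

# Same clamping logic as ensure_safe_goal_position() above.
diff = goal_pos - present_pos  # [252., -24., -572.]
safe_diff = diff.clamp(-max_relative_target, max_relative_target)
safe_goal_pos = present_pos + safe_diff

print(safe_goal_pos)  # tensor([2148., 1000., 2972.])
```

The first and third goals are clamped to within 100 steps of the present position, while the second is left untouched.
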
diff --git a/lerobot/common/robot_devices/robots/mobile_manipulator.py b/lerobot/common/robot_devices/robots/mobile_manipulator.py
deleted file mode 100644
index 385e218b..00000000
--- a/lerobot/common/robot_devices/robots/mobile_manipulator.py
+++ /dev/null
@@ -1,703 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import base64
-import json
-import os
-import sys
-from pathlib import Path
-
-import cv2
-import numpy as np
-import torch
-import zmq
-
-from lerobot.common.robot_devices.cameras.utils import make_cameras_from_configs
-from lerobot.common.robot_devices.motors.feetech import TorqueMode
-from lerobot.common.robot_devices.motors.utils import MotorsBus, make_motors_buses_from_configs
-from lerobot.common.robot_devices.robots.configs import LeKiwiRobotConfig
-from lerobot.common.robot_devices.robots.feetech_calibration import run_arm_manual_calibration
-from lerobot.common.robot_devices.robots.utils import get_arm_id
-from lerobot.common.robot_devices.utils import RobotDeviceNotConnectedError
-
-PYNPUT_AVAILABLE = True
-try:
- # Only import if there's a valid X server or if we're not on a Pi
- if ("DISPLAY" not in os.environ) and ("linux" in sys.platform):
- print("No DISPLAY set. Skipping pynput import.")
- raise ImportError("pynput blocked intentionally due to no display.")
-
- from pynput import keyboard
-except ImportError:
- keyboard = None
- PYNPUT_AVAILABLE = False
-except Exception as e:
- keyboard = None
- PYNPUT_AVAILABLE = False
- print(f"Could not import pynput: {e}")
-
-
-class MobileManipulator:
- """
- MobileManipulator is a class for connecting to and controlling a remote mobile manipulator robot.
-    The robot includes a three-omniwheel mobile base and a remote follower arm.
- The leader arm is connected locally (on the laptop) and its joint positions are recorded and then
- forwarded to the remote follower arm (after applying a safety clamp).
- In parallel, keyboard teleoperation is used to generate raw velocity commands for the wheels.
- """
-
- def __init__(self, config: LeKiwiRobotConfig):
- """
- Expected keys in config:
- - ip, port, video_port for the remote connection.
- - calibration_dir, leader_arms, follower_arms, max_relative_target, etc.
- """
- self.robot_type = config.type
- self.config = config
- self.remote_ip = config.ip
- self.remote_port = config.port
- self.remote_port_video = config.video_port
- self.calibration_dir = Path(self.config.calibration_dir)
- self.logs = {}
-
- self.teleop_keys = self.config.teleop_keys
-
- # For teleoperation, the leader arm (local) is used to record the desired arm pose.
- self.leader_arms = make_motors_buses_from_configs(self.config.leader_arms)
-
- self.follower_arms = make_motors_buses_from_configs(self.config.follower_arms)
-
- self.cameras = make_cameras_from_configs(self.config.cameras)
-
- self.is_connected = False
-
- self.last_frames = {}
- self.last_present_speed = {}
- self.last_remote_arm_state = torch.zeros(6, dtype=torch.float32)
-
- # Define three speed levels and a current index
- self.speed_levels = [
- {"xy": 0.1, "theta": 30}, # slow
- {"xy": 0.2, "theta": 60}, # medium
- {"xy": 0.3, "theta": 90}, # fast
- ]
- self.speed_index = 0 # Start at slow
-
- # ZeroMQ context and sockets.
- self.context = None
- self.cmd_socket = None
- self.video_socket = None
-
- # Keyboard state for base teleoperation.
- self.running = True
- self.pressed_keys = {
- "forward": False,
- "backward": False,
- "left": False,
- "right": False,
- "rotate_left": False,
- "rotate_right": False,
- }
-
- if PYNPUT_AVAILABLE:
- print("pynput is available - enabling local keyboard listener.")
- self.listener = keyboard.Listener(
- on_press=self.on_press,
- on_release=self.on_release,
- )
- self.listener.start()
- else:
- print("pynput not available - skipping local keyboard listener.")
- self.listener = None
-
- def get_motor_names(self, arms: dict[str, MotorsBus]) -> list:
- return [f"{arm}_{motor}" for arm, bus in arms.items() for motor in bus.motors]
-
- @property
- def camera_features(self) -> dict:
- cam_ft = {}
- for cam_key, cam in self.cameras.items():
- key = f"observation.images.{cam_key}"
- cam_ft[key] = {
- "shape": (cam.height, cam.width, cam.channels),
- "names": ["height", "width", "channels"],
- "info": None,
- }
- return cam_ft
-
- @property
- def motor_features(self) -> dict:
- follower_arm_names = [
- "shoulder_pan",
- "shoulder_lift",
- "elbow_flex",
- "wrist_flex",
- "wrist_roll",
- "gripper",
- ]
- observations = ["x_mm", "y_mm", "theta"]
- combined_names = follower_arm_names + observations
- return {
- "action": {
- "dtype": "float32",
- "shape": (len(combined_names),),
- "names": combined_names,
- },
- "observation.state": {
- "dtype": "float32",
- "shape": (len(combined_names),),
- "names": combined_names,
- },
- }
-
- @property
- def features(self):
- return {**self.motor_features, **self.camera_features}
-
- @property
- def has_camera(self):
- return len(self.cameras) > 0
-
- @property
- def num_cameras(self):
- return len(self.cameras)
-
- @property
- def available_arms(self):
- available = []
- for name in self.leader_arms:
- available.append(get_arm_id(name, "leader"))
- for name in self.follower_arms:
- available.append(get_arm_id(name, "follower"))
- return available
-
- def on_press(self, key):
- try:
- # Movement
- if key.char == self.teleop_keys["forward"]:
- self.pressed_keys["forward"] = True
- elif key.char == self.teleop_keys["backward"]:
- self.pressed_keys["backward"] = True
- elif key.char == self.teleop_keys["left"]:
- self.pressed_keys["left"] = True
- elif key.char == self.teleop_keys["right"]:
- self.pressed_keys["right"] = True
- elif key.char == self.teleop_keys["rotate_left"]:
- self.pressed_keys["rotate_left"] = True
- elif key.char == self.teleop_keys["rotate_right"]:
- self.pressed_keys["rotate_right"] = True
-
- # Quit teleoperation
- elif key.char == self.teleop_keys["quit"]:
- self.running = False
- return False
-
- # Speed control
- elif key.char == self.teleop_keys["speed_up"]:
- self.speed_index = min(self.speed_index + 1, 2)
- print(f"Speed index increased to {self.speed_index}")
- elif key.char == self.teleop_keys["speed_down"]:
- self.speed_index = max(self.speed_index - 1, 0)
- print(f"Speed index decreased to {self.speed_index}")
-
- except AttributeError:
- # e.g., if key is special like Key.esc
- if key == keyboard.Key.esc:
- self.running = False
- return False
-
- def on_release(self, key):
- try:
- if hasattr(key, "char"):
- if key.char == self.teleop_keys["forward"]:
- self.pressed_keys["forward"] = False
- elif key.char == self.teleop_keys["backward"]:
- self.pressed_keys["backward"] = False
- elif key.char == self.teleop_keys["left"]:
- self.pressed_keys["left"] = False
- elif key.char == self.teleop_keys["right"]:
- self.pressed_keys["right"] = False
- elif key.char == self.teleop_keys["rotate_left"]:
- self.pressed_keys["rotate_left"] = False
- elif key.char == self.teleop_keys["rotate_right"]:
- self.pressed_keys["rotate_right"] = False
- except AttributeError:
- pass
-
- def connect(self):
- if not self.leader_arms:
- raise ValueError("MobileManipulator has no leader arm to connect.")
- for name in self.leader_arms:
- print(f"Connecting {name} leader arm.")
- self.calibrate_leader()
-
- # Set up ZeroMQ sockets to communicate with the remote mobile robot.
- self.context = zmq.Context()
- self.cmd_socket = self.context.socket(zmq.PUSH)
- connection_string = f"tcp://{self.remote_ip}:{self.remote_port}"
- self.cmd_socket.connect(connection_string)
- self.cmd_socket.setsockopt(zmq.CONFLATE, 1)
- self.video_socket = self.context.socket(zmq.PULL)
- video_connection = f"tcp://{self.remote_ip}:{self.remote_port_video}"
- self.video_socket.connect(video_connection)
- self.video_socket.setsockopt(zmq.CONFLATE, 1)
- print(
- f"[INFO] Connected to remote robot at {connection_string} and video stream at {video_connection}."
- )
- self.is_connected = True
-
- def load_or_run_calibration_(self, name, arm, arm_type):
- arm_id = get_arm_id(name, arm_type)
- arm_calib_path = self.calibration_dir / f"{arm_id}.json"
-
- if arm_calib_path.exists():
- with open(arm_calib_path) as f:
- calibration = json.load(f)
- else:
- print(f"Missing calibration file '{arm_calib_path}'")
- calibration = run_arm_manual_calibration(arm, self.robot_type, name, arm_type)
- print(f"Calibration is done! Saving calibration file '{arm_calib_path}'")
- arm_calib_path.parent.mkdir(parents=True, exist_ok=True)
- with open(arm_calib_path, "w") as f:
- json.dump(calibration, f)
-
- return calibration
-
- def calibrate_leader(self):
- for name, arm in self.leader_arms.items():
- # Connect the bus
- arm.connect()
-
- # Disable torque on all motors
- for motor_id in arm.motors:
- arm.write("Torque_Enable", TorqueMode.DISABLED.value, motor_id)
-
- # Now run calibration
- calibration = self.load_or_run_calibration_(name, arm, "leader")
- arm.set_calibration(calibration)
-
- def calibrate_follower(self):
- for name, bus in self.follower_arms.items():
- bus.connect()
-
- # Disable torque on all motors
- for motor_id in bus.motors:
- bus.write("Torque_Enable", 0, motor_id)
-
- # Then filter out wheels
- arm_only_dict = {k: v for k, v in bus.motors.items() if not k.startswith("wheel_")}
- if not arm_only_dict:
- continue
-
- original_motors = bus.motors
- bus.motors = arm_only_dict
-
- calibration = self.load_or_run_calibration_(name, bus, "follower")
- bus.set_calibration(calibration)
-
- bus.motors = original_motors
-
- def _get_data(self):
- """
- Polls the video socket for up to 15 ms. If data arrives, decode only
- the *latest* message, returning frames, speed, and arm state. If
- nothing arrives for any field, use the last known values.
- """
- frames = {}
- present_speed = {}
- remote_arm_state_tensor = torch.zeros(6, dtype=torch.float32)
-
- # Poll up to 15 ms
- poller = zmq.Poller()
- poller.register(self.video_socket, zmq.POLLIN)
- socks = dict(poller.poll(15))
- if self.video_socket not in socks or socks[self.video_socket] != zmq.POLLIN:
- # No new data arrived → reuse ALL old data
- return (self.last_frames, self.last_present_speed, self.last_remote_arm_state)
-
- # Drain all messages, keep only the last
- last_msg = None
- while True:
- try:
- obs_string = self.video_socket.recv_string(zmq.NOBLOCK)
- last_msg = obs_string
- except zmq.Again:
- break
-
- if not last_msg:
- # No new message → also reuse old
- return (self.last_frames, self.last_present_speed, self.last_remote_arm_state)
-
- # Decode only the final message
- try:
- observation = json.loads(last_msg)
-
- images_dict = observation.get("images", {})
- new_speed = observation.get("present_speed", {})
- new_arm_state = observation.get("follower_arm_state", None)
-
- # Convert images
- for cam_name, image_b64 in images_dict.items():
- if image_b64:
- jpg_data = base64.b64decode(image_b64)
- np_arr = np.frombuffer(jpg_data, dtype=np.uint8)
- frame_candidate = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
- if frame_candidate is not None:
- frames[cam_name] = frame_candidate
-
-            # If a new arm state (and frames) arrived, update the cached values; otherwise reuse the previous message's data.
- if new_arm_state is not None and frames is not None:
- self.last_frames = frames
-
- remote_arm_state_tensor = torch.tensor(new_arm_state, dtype=torch.float32)
- self.last_remote_arm_state = remote_arm_state_tensor
-
- present_speed = new_speed
- self.last_present_speed = new_speed
- else:
- frames = self.last_frames
-
- remote_arm_state_tensor = self.last_remote_arm_state
-
- present_speed = self.last_present_speed
-
- except Exception as e:
- print(f"[DEBUG] Error decoding video message: {e}")
- # If decode fails, fall back to old data
- return (self.last_frames, self.last_present_speed, self.last_remote_arm_state)
-
- return frames, present_speed, remote_arm_state_tensor
-
- def _process_present_speed(self, present_speed: dict) -> torch.Tensor:
- state_tensor = torch.zeros(3, dtype=torch.int32)
- if present_speed:
- decoded = {key: MobileManipulator.raw_to_degps(value) for key, value in present_speed.items()}
- if "1" in decoded:
- state_tensor[0] = decoded["1"]
- if "2" in decoded:
- state_tensor[1] = decoded["2"]
- if "3" in decoded:
- state_tensor[2] = decoded["3"]
- return state_tensor
-
- def teleop_step(
- self, record_data: bool = False
- ) -> None | tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]:
- if not self.is_connected:
- raise RobotDeviceNotConnectedError("MobileManipulator is not connected. Run `connect()` first.")
-
- speed_setting = self.speed_levels[self.speed_index]
- xy_speed = speed_setting["xy"] # e.g. 0.1, 0.25, or 0.4
- theta_speed = speed_setting["theta"] # e.g. 30, 60, or 90
-
- # Prepare to assign the position of the leader to the follower
- arm_positions = []
- for name in self.leader_arms:
- pos = self.leader_arms[name].read("Present_Position")
- pos_tensor = torch.from_numpy(pos).float()
- arm_positions.extend(pos_tensor.tolist())
-
- y_cmd = 0.0 # m/s forward/backward
- x_cmd = 0.0 # m/s lateral
- theta_cmd = 0.0 # deg/s rotation
- if self.pressed_keys["forward"]:
- y_cmd += xy_speed
- if self.pressed_keys["backward"]:
- y_cmd -= xy_speed
- if self.pressed_keys["left"]:
- x_cmd += xy_speed
- if self.pressed_keys["right"]:
- x_cmd -= xy_speed
- if self.pressed_keys["rotate_left"]:
- theta_cmd += theta_speed
- if self.pressed_keys["rotate_right"]:
- theta_cmd -= theta_speed
-
- wheel_commands = self.body_to_wheel_raw(x_cmd, y_cmd, theta_cmd)
-
- message = {"raw_velocity": wheel_commands, "arm_positions": arm_positions}
- self.cmd_socket.send_string(json.dumps(message))
-
- if not record_data:
- return
-
- obs_dict = self.capture_observation()
-
- arm_state_tensor = torch.tensor(arm_positions, dtype=torch.float32)
-
- wheel_velocity_tuple = self.wheel_raw_to_body(wheel_commands)
- wheel_velocity_mm = (
- wheel_velocity_tuple[0] * 1000.0,
- wheel_velocity_tuple[1] * 1000.0,
- wheel_velocity_tuple[2],
- )
- wheel_tensor = torch.tensor(wheel_velocity_mm, dtype=torch.float32)
- action_tensor = torch.cat([arm_state_tensor, wheel_tensor])
- action_dict = {"action": action_tensor}
-
- return obs_dict, action_dict
-
- def capture_observation(self) -> dict:
- """
- Capture observations from the remote robot: current follower arm positions,
- present wheel speeds (converted to body-frame velocities: x, y, theta),
- and a camera frame.
- """
- if not self.is_connected:
- raise RobotDeviceNotConnectedError("Not connected. Run `connect()` first.")
-
- frames, present_speed, remote_arm_state_tensor = self._get_data()
-
- body_state = self.wheel_raw_to_body(present_speed)
-
- body_state_mm = (body_state[0] * 1000.0, body_state[1] * 1000.0, body_state[2]) # Convert x,y to mm/s
- wheel_state_tensor = torch.tensor(body_state_mm, dtype=torch.float32)
- combined_state_tensor = torch.cat((remote_arm_state_tensor, wheel_state_tensor), dim=0)
-
- obs_dict = {"observation.state": combined_state_tensor}
-
- # Loop over each configured camera
- for cam_name, cam in self.cameras.items():
- frame = frames.get(cam_name, None)
- if frame is None:
- # Create a black image using the camera's configured width, height, and channels
- frame = np.zeros((cam.height, cam.width, cam.channels), dtype=np.uint8)
- obs_dict[f"observation.images.{cam_name}"] = torch.from_numpy(frame)
-
- return obs_dict
-
- def send_action(self, action: torch.Tensor) -> torch.Tensor:
- if not self.is_connected:
- raise RobotDeviceNotConnectedError("Not connected. Run `connect()` first.")
-
- # Ensure the action tensor has at least 9 elements:
- # - First 6: arm positions.
- # - Last 3: base commands.
- if action.numel() < 9:
- # Pad with zeros if there are not enough elements.
- padded = torch.zeros(9, dtype=action.dtype)
- padded[: action.numel()] = action
- action = padded
-
- # Extract arm and base actions.
- arm_actions = action[:6].flatten()
- base_actions = action[6:].flatten()
-
- x_cmd_mm = base_actions[0].item() # mm/s
- y_cmd_mm = base_actions[1].item() # mm/s
- theta_cmd = base_actions[2].item() # deg/s
-
- # Convert mm/s to m/s for the kinematics calculations.
- x_cmd = x_cmd_mm / 1000.0 # m/s
- y_cmd = y_cmd_mm / 1000.0 # m/s
-
- # Compute wheel commands from body commands.
- wheel_commands = self.body_to_wheel_raw(x_cmd, y_cmd, theta_cmd)
-
- arm_positions_list = arm_actions.tolist()
-
- message = {"raw_velocity": wheel_commands, "arm_positions": arm_positions_list}
- self.cmd_socket.send_string(json.dumps(message))
-
- return action
-
- def print_logs(self):
- pass
-
- def disconnect(self):
- if not self.is_connected:
- raise RobotDeviceNotConnectedError("Not connected.")
- if self.cmd_socket:
- stop_cmd = {
- "raw_velocity": {"left_wheel": 0, "back_wheel": 0, "right_wheel": 0},
- "arm_positions": {},
- }
- self.cmd_socket.send_string(json.dumps(stop_cmd))
- self.cmd_socket.close()
- if self.video_socket:
- self.video_socket.close()
- if self.context:
- self.context.term()
- if PYNPUT_AVAILABLE:
- self.listener.stop()
- self.is_connected = False
- print("[INFO] Disconnected from remote robot.")
-
- def __del__(self):
- if getattr(self, "is_connected", False):
- self.disconnect()
- if PYNPUT_AVAILABLE:
- self.listener.stop()
-
- @staticmethod
- def degps_to_raw(degps: float) -> int:
- steps_per_deg = 4096.0 / 360.0
- speed_in_steps = abs(degps) * steps_per_deg
- speed_int = int(round(speed_in_steps))
- if speed_int > 0x7FFF:
- speed_int = 0x7FFF
- if degps < 0:
- return speed_int | 0x8000
- else:
- return speed_int & 0x7FFF
-
- @staticmethod
- def raw_to_degps(raw_speed: int) -> float:
- steps_per_deg = 4096.0 / 360.0
- magnitude = raw_speed & 0x7FFF
- degps = magnitude / steps_per_deg
- if raw_speed & 0x8000:
- degps = -degps
- return degps
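-
-    # Example (derived from the encoding above): with 4096 ticks per revolution,
-    # degps_to_raw(-90.0) == 0x8400 (1024 ticks with the sign bit set) and
-    # raw_to_degps(0x8400) == -90.0, so the two helpers round-trip.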
-
- def body_to_wheel_raw(
- self,
- x_cmd: float,
- y_cmd: float,
- theta_cmd: float,
- wheel_radius: float = 0.05,
- base_radius: float = 0.125,
- max_raw: int = 3000,
- ) -> dict:
- """
- Convert desired body-frame velocities into wheel raw commands.
-
- Parameters:
- x_cmd : Linear velocity in x (m/s).
- y_cmd : Linear velocity in y (m/s).
- theta_cmd : Rotational velocity (deg/s).
- wheel_radius: Radius of each wheel (meters).
- base_radius : Distance from the center of rotation to each wheel (meters).
- max_raw : Maximum allowed raw command (ticks) per wheel.
-
- Returns:
- A dictionary with wheel raw commands:
- {"left_wheel": value, "back_wheel": value, "right_wheel": value}.
-
- Notes:
- - Internally, the method converts theta_cmd to rad/s for the kinematics.
-      - The raw command is computed from each wheel's angular speed in deg/s
- using degps_to_raw(). If any command exceeds max_raw, all commands
- are scaled down proportionally.
- """
- # Convert rotational velocity from deg/s to rad/s.
- theta_rad = theta_cmd * (np.pi / 180.0)
- # Create the body velocity vector [x, y, theta_rad].
- velocity_vector = np.array([x_cmd, y_cmd, theta_rad])
-
-        # Define the wheel mounting angles (measured clockwise from the y axis).
- angles = np.radians(np.array([300, 180, 60]))
- # Build the kinematic matrix: each row maps body velocities to a wheel’s linear speed.
- # The third column (base_radius) accounts for the effect of rotation.
- m = np.array([[np.cos(a), np.sin(a), base_radius] for a in angles])
-
- # Compute each wheel’s linear speed (m/s) and then its angular speed (rad/s).
- wheel_linear_speeds = m.dot(velocity_vector)
- wheel_angular_speeds = wheel_linear_speeds / wheel_radius
-
- # Convert wheel angular speeds from rad/s to deg/s.
- wheel_degps = wheel_angular_speeds * (180.0 / np.pi)
-
- # Scaling
- steps_per_deg = 4096.0 / 360.0
- raw_floats = [abs(degps) * steps_per_deg for degps in wheel_degps]
- max_raw_computed = max(raw_floats)
- if max_raw_computed > max_raw:
- scale = max_raw / max_raw_computed
- wheel_degps = wheel_degps * scale
-
- # Convert each wheel’s angular speed (deg/s) to a raw integer.
- wheel_raw = [MobileManipulator.degps_to_raw(deg) for deg in wheel_degps]
-
- return {"left_wheel": wheel_raw[0], "back_wheel": wheel_raw[1], "right_wheel": wheel_raw[2]}
-
- def wheel_raw_to_body(
- self, wheel_raw: dict, wheel_radius: float = 0.05, base_radius: float = 0.125
- ) -> tuple:
- """
- Convert wheel raw command feedback back into body-frame velocities.
-
- Parameters:
- wheel_raw : Dictionary with raw wheel commands (keys: "left_wheel", "back_wheel", "right_wheel").
- wheel_radius: Radius of each wheel (meters).
- base_radius : Distance from the robot center to each wheel (meters).
-
- Returns:
- A tuple (x_cmd, y_cmd, theta_cmd) where:
- x_cmd : Linear velocity in x (m/s).
- y_cmd : Linear velocity in y (m/s).
- theta_cmd : Rotational velocity in deg/s.
- """
- # Extract the raw values in order.
- raw_list = [
- int(wheel_raw.get("left_wheel", 0)),
- int(wheel_raw.get("back_wheel", 0)),
- int(wheel_raw.get("right_wheel", 0)),
- ]
-
- # Convert each raw command back to an angular speed in deg/s.
- wheel_degps = np.array([MobileManipulator.raw_to_degps(r) for r in raw_list])
- # Convert from deg/s to rad/s.
- wheel_radps = wheel_degps * (np.pi / 180.0)
- # Compute each wheel’s linear speed (m/s) from its angular speed.
- wheel_linear_speeds = wheel_radps * wheel_radius
-
-        # Define the wheel mounting angles (measured clockwise from the y axis).
- angles = np.radians(np.array([300, 180, 60]))
- m = np.array([[np.cos(a), np.sin(a), base_radius] for a in angles])
-
- # Solve the inverse kinematics: body_velocity = M⁻¹ · wheel_linear_speeds.
- m_inv = np.linalg.inv(m)
- velocity_vector = m_inv.dot(wheel_linear_speeds)
- x_cmd, y_cmd, theta_rad = velocity_vector
- theta_cmd = theta_rad * (180.0 / np.pi)
- return (x_cmd, y_cmd, theta_cmd)
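-
-    # Note: this uses the inverse of the same kinematic matrix as body_to_wheel_raw,
-    # so feeding its output back through body_to_wheel_raw recovers the raw commands
-    # up to the integer rounding in degps_to_raw.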
-
-
-class LeKiwi:
- def __init__(self, motor_bus):
- """
- Initializes the LeKiwi with Feetech motors bus.
- """
- self.motor_bus = motor_bus
- self.motor_ids = ["left_wheel", "back_wheel", "right_wheel"]
-
- # Initialize motors in velocity mode.
- self.motor_bus.write("Lock", 0)
- self.motor_bus.write("Mode", [1, 1, 1], self.motor_ids)
- self.motor_bus.write("Lock", 1)
- print("Motors set to velocity mode.")
-
- def read_velocity(self):
- """
-        Reads the raw speeds for all wheels and returns a dictionary keyed by motor name.
- """
- raw_speeds = self.motor_bus.read("Present_Speed", self.motor_ids)
- return {
- "left_wheel": int(raw_speeds[0]),
- "back_wheel": int(raw_speeds[1]),
- "right_wheel": int(raw_speeds[2]),
- }
-
- def set_velocity(self, command_speeds):
- """
- Sends raw velocity commands (16-bit encoded values) directly to the motor bus.
- The order of speeds must correspond to self.motor_ids.
- """
- self.motor_bus.write("Goal_Speed", command_speeds, self.motor_ids)
-
- def stop(self):
- """Stops the robot by setting all motor speeds to zero."""
- self.motor_bus.write("Goal_Speed", [0, 0, 0], self.motor_ids)
- print("Motors stopped.")
diff --git a/lerobot/common/robot_devices/robots/realman.py b/lerobot/common/robot_devices/robots/realman.py
deleted file mode 100644
index 2578179a..00000000
--- a/lerobot/common/robot_devices/robots/realman.py
+++ /dev/null
@@ -1,292 +0,0 @@
-"""
- Teleoperation Realman with a PS5 controller and
-"""
-
-import time
-import torch
-import numpy as np
-from dataclasses import dataclass, field, replace
-from collections import deque
-from lerobot.common.robot_devices.teleop.realman_single import HybridController
-from lerobot.common.robot_devices.motors.utils import get_motor_names, make_motors_buses_from_configs
-from lerobot.common.robot_devices.cameras.utils import make_cameras_from_configs
-from lerobot.common.robot_devices.utils import RobotDeviceAlreadyConnectedError, RobotDeviceNotConnectedError
-from lerobot.common.robot_devices.robots.configs import RealmanRobotConfig
-
-
-class RealmanRobot:
- def __init__(self, config: RealmanRobotConfig | None = None, **kwargs):
- if config is None:
- config = RealmanRobotConfig()
- # Overwrite config arguments using kwargs
- self.config = replace(config, **kwargs)
- self.robot_type = self.config.type
-        self.inference_time = self.config.inference_time # whether we are running inference (no teleoperation)
-
- # build cameras
- self.cameras = make_cameras_from_configs(self.config.cameras)
-
- # build realman motors
- self.piper_motors = make_motors_buses_from_configs(self.config.left_follower_arm)
- self.arm = self.piper_motors['main']
-
- # build init teleop info
- self.init_info = {
- 'init_joint': self.arm.init_joint_position,
- 'init_pose': self.arm.init_pose,
- 'max_gripper': config.max_gripper,
- 'min_gripper': config.min_gripper,
- 'servo_config_file': config.servo_config_file
- }
-
- # build state-action cache
- self.joint_queue = deque(maxlen=2)
- self.last_endpose = self.arm.init_pose
-
- # build gamepad teleop
- if not self.inference_time:
- self.teleop = HybridController(self.init_info)
- else:
- self.teleop = None
-
- self.logs = {}
- self.is_connected = False
-
- @property
- def camera_features(self) -> dict:
- cam_ft = {}
- for cam_key, cam in self.cameras.items():
- key = f"observation.images.{cam_key}"
- cam_ft[key] = {
- "shape": (cam.height, cam.width, cam.channels),
- "names": ["height", "width", "channels"],
- "info": None,
- }
- return cam_ft
-
-
- @property
- def motor_features(self) -> dict:
- action_names = get_motor_names(self.piper_motors)
- state_names = get_motor_names(self.piper_motors)
- return {
- "action": {
- "dtype": "float32",
- "shape": (len(action_names),),
- "names": action_names,
- },
- "observation.state": {
- "dtype": "float32",
- "shape": (len(state_names),),
- "names": state_names,
- },
- }
-
- @property
- def has_camera(self):
- return len(self.cameras) > 0
-
- @property
- def num_cameras(self):
- return len(self.cameras)
-
-
- def connect(self) -> None:
- """Connect RealmanArm and cameras"""
- if self.is_connected:
- raise RobotDeviceAlreadyConnectedError(
- "RealmanArm is already connected. Do not run `robot.connect()` twice."
- )
-
- # connect RealmanArm
- self.arm.connect(enable=True)
- print("RealmanArm conneted")
-
- # connect cameras
- for name in self.cameras:
- self.cameras[name].connect()
- self.is_connected = self.is_connected and self.cameras[name].is_connected
- print(f"camera {name} conneted")
-
- print("All connected")
- self.is_connected = True
-
- self.run_calibration()
-
-
- def disconnect(self) -> None:
- """move to home position, disenable piper and cameras"""
- # move piper to home position, disable
- if not self.inference_time:
- self.teleop.stop()
-
- # disconnect piper
- self.arm.safe_disconnect()
- print("RealmanArm disable after 5 seconds")
- time.sleep(5)
- self.arm.connect(enable=False)
-
- # disconnect cameras
- if len(self.cameras) > 0:
- for cam in self.cameras.values():
- cam.disconnect()
-
- self.is_connected = False
-
-
- def run_calibration(self):
- """move piper to the home position"""
- if not self.is_connected:
- raise ConnectionError()
-
- self.arm.apply_calibration()
- if not self.inference_time:
- self.teleop.reset()
-
-
- def teleop_step(
- self, record_data=False
- ) -> None | tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]:
- if not self.is_connected:
- raise ConnectionError()
-
- if self.teleop is None and self.inference_time:
- self.teleop = HybridController(self.init_info)
-
-        # read the current robot state and the teleop action
-        before_read_t = time.perf_counter()
-        state = self.arm.read() # read current joint positions from the robot
-        action = self.teleop.get_action() # target joint positions and end-effector pose from the gamepad
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
-
- if action['control_mode'] == 'joint':
-            # joint control mode (primary mode)
- current_pose = self.arm.read_current_arm_endpose_state()
- self.teleop.update_endpose_state(current_pose)
-
- target_joints = action['joint_angles'][:-1]
- self.arm.write_gripper(action['gripper'])
- print(action['gripper'])
- if action['master_controller_status']['infrared'] == 1:
- if action['master_controller_status']['button'] == 1:
- self.arm.write_joint_canfd(target_joints)
- else:
- self.arm.write_joint_slow(target_joints)
-
- # do action
- before_write_t = time.perf_counter()
- self.joint_queue.append(list(self.arm.read().values()))
- self.logs["write_pos_dt_s"] = time.perf_counter() - before_write_t
-
- else:
- target_pose = list(action['end_pose'])
- # do action
- before_write_t = time.perf_counter()
- if self.last_endpose != target_pose:
- self.arm.write_endpose_canfd(target_pose)
- self.last_endpose = target_pose
- self.arm.write_gripper(action['gripper'])
-
- target_joints = self.arm.read_current_arm_joint_state()
- self.joint_queue.append(list(self.arm.read().values()))
- self.teleop.update_joint_state(target_joints)
- self.logs["write_pos_dt_s"] = time.perf_counter() - before_write_t
-
- if not record_data:
- return
-
- state = torch.as_tensor(list(self.joint_queue[0]), dtype=torch.float32)
- action = torch.as_tensor(list(self.joint_queue[-1]), dtype=torch.float32)
-
- # Capture images from cameras
- images = {}
- for name in self.cameras:
- before_camread_t = time.perf_counter()
- images[name] = self.cameras[name].async_read()
- images[name] = torch.from_numpy(images[name])
- self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
- self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t
-
-        # Populate output dictionaries
- obs_dict, action_dict = {}, {}
- obs_dict["observation.state"] = state
- action_dict["action"] = action
- for name in self.cameras:
- obs_dict[f"observation.images.{name}"] = images[name]
-
- return obs_dict, action_dict
-
-
-
- def send_action(self, action: torch.Tensor) -> torch.Tensor:
- """Write the predicted actions from policy to the motors"""
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- "Piper is not connected. You need to run `robot.connect()`."
- )
-
- # send to motors, torch to list
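-        # The action appears to be normalized: joint values in [-1, 1] are scaled by
-        # 180 back to degrees, and the last element (gripper) in [-1, 1] is mapped
-        # back to a raw [0, 1000] range before being written to the arm.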
- target_joints = action.tolist()
- len_joint = len(target_joints) - 1
- target_joints = [target_joints[i]*180 for i in range(len_joint)] + [target_joints[-1]]
- target_joints[-1] = int(target_joints[-1]*500 + 500)
- self.arm.write(target_joints)
-
- return action
-
-
-
- def capture_observation(self) -> dict:
- """capture current images and joint positions"""
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- "Piper is not connected. You need to run `robot.connect()`."
- )
-
- # read current joint positions
- before_read_t = time.perf_counter()
- state = self.arm.read() # 6 joints + 1 gripper
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
-
- state = torch.as_tensor(list(state.values()), dtype=torch.float32)
-
- # read images from cameras
- images = {}
- for name in self.cameras:
- before_camread_t = time.perf_counter()
- images[name] = self.cameras[name].async_read()
- images[name] = torch.from_numpy(images[name])
- self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
- self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t
-
-        # Populate output dictionaries and format to pytorch
- obs_dict = {}
- obs_dict["observation.state"] = state
- for name in self.cameras:
- obs_dict[f"observation.images.{name}"] = images[name]
- return obs_dict
-
- def teleop_safety_stop(self):
- """ move to home position after record one episode """
- self.run_calibration()
-
-
- def __del__(self):
- if self.is_connected:
- self.disconnect()
- if not self.inference_time:
- self.teleop.stop()
-
-
-if __name__ == '__main__':
- robot = RealmanRobot()
- robot.connect()
- # robot.run_calibration()
- while True:
- robot.teleop_step(record_data=True)
-
- robot.capture_observation()
- dummy_action = torch.Tensor([-0.40586111280653214, 0.5522833506266276, 0.4998166826036241, -0.3539944542778863, -0.524433347913954, 0.9064999898274739, 0.482])
- robot.send_action(dummy_action)
- robot.disconnect()
- print('ok')
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/robots/realman_dual.py b/lerobot/common/robot_devices/robots/realman_dual.py
deleted file mode 100644
index 109dfc25..00000000
--- a/lerobot/common/robot_devices/robots/realman_dual.py
+++ /dev/null
@@ -1,322 +0,0 @@
-"""
- Teleoperation Realman with a PS5 controller and
-"""
-
-import time
-import torch
-import numpy as np
-import logging
-from typing import Optional, Tuple, Dict
-from dataclasses import dataclass, field, replace
-from collections import deque
-from lerobot.common.robot_devices.teleop.realman_aloha_dual import HybridController
-from lerobot.common.robot_devices.motors.utils import get_motor_names, make_motors_buses_from_configs
-from lerobot.common.robot_devices.cameras.utils import make_cameras_from_configs
-from lerobot.common.robot_devices.utils import RobotDeviceAlreadyConnectedError, RobotDeviceNotConnectedError
-from lerobot.common.robot_devices.robots.configs import RealmanDualRobotConfig
-from lerobot.common.robot_devices.robots.utils import ask_llm
-
-
-
-class RealmanDualRobot:
- def __init__(self, config: RealmanDualRobotConfig | None = None, **kwargs):
- if config is None:
- config = RealmanDualRobotConfig()
- # Overwrite config arguments using kwargs
- self.config = replace(config, **kwargs)
- self.robot_type = self.config.type
-        self.inference_time = self.config.inference_time # whether we are running inference (no teleoperation)
-
- # build cameras
- self.cameras = make_cameras_from_configs(self.config.cameras)
- # build realman motors
- self.piper_motors = make_motors_buses_from_configs(self.config.follower_arm)
- self.arm = self.piper_motors['main']
-
-        # initialize teleoperation
-        self._initialize_teleop()
-        # initialize state
- self._initialize_state()
-
- def _initialize_teleop(self):
- """初始化遥操作"""
- self.init_info = {
- 'init_joint': self.arm.init_joint_position,
- 'init_pose': self.arm.init_pose,
- 'max_gripper': self.config.max_gripper,
- 'min_gripper': self.config.min_gripper,
- 'servo_config_file': self.config.servo_config_file,
- 'end_control_info': {'left': self.config.left_end_control_guid , 'right': self.config.right_end_control_guid}
- }
-
- if not self.inference_time:
- self.teleop = HybridController(self.init_info)
- else:
- self.teleop = None
-
- def _initialize_state(self):
- """初始化状态"""
- self.joint_queue = deque(maxlen=2)
- self.last_endpose = self.arm.init_pose
- self.logs = {}
- self.is_connected = False
-
- def _read_robot_state(self) -> dict:
- """读取机器人状态"""
- before_read_t = time.perf_counter()
- from copy import deepcopy
- state = deepcopy(self.arm.read())
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
- return state
-
- def _execute_action(self, action: dict, state: dict):
- """执行动作"""
- before_write_t = time.perf_counter()
-
- if action['control_mode'] == 'joint':
- # self.arm.write_action(action, state)
- pass
- else:
- if list(action['pose'].values()) != list(state['pose'].values()):
- pose = list(action['pose'].values())
- self.arm.write_endpose_canfd(pose)
-
- elif list(action['joint'].values()) != list(state['joint'].values()):
- target_joint = list(action['joint'].values())
- self.arm.write(target_joint)
-
- self.logs["write_pos_dt_s"] = time.perf_counter() - before_write_t
-
- def _prepare_record_data(self) -> Tuple[Dict, Dict]:
- """准备记录数据"""
- if len(self.joint_queue) < 2:
- return {}, {}
-
- state = torch.as_tensor(list(self.joint_queue[0]), dtype=torch.float32)
- action = torch.as_tensor(list(self.joint_queue[-1]), dtype=torch.float32)
-        # capture images
-        images = self._capture_images()
-        # build the output dictionaries
- obs_dict = {
- "observation.state": state,
- **{f"observation.images.{name}": img for name, img in images.items()}
- }
- action_dict = {"action": action}
- return obs_dict, action_dict
-
- def _update_state_queue(self):
- """更新状态队列"""
- current_state = self.arm.read()['joint']
- current_state_lst = []
- for data in current_state:
- if "joint" in data:
- current_state_lst.append(current_state[data] / 180)
- elif "gripper" in data:
- current_state_lst.append((current_state[data]-500)/500)
- self.joint_queue.append(current_state_lst)
-
- def _capture_images(self) -> Dict[str, torch.Tensor]:
- """捕获图像"""
- images = {}
- for name, camera in self.cameras.items():
- before_camread_t = time.perf_counter()
- image = camera.async_read()
- images[name] = torch.from_numpy(image)
-
- self.logs[f"read_camera_{name}_dt_s"] = camera.logs["delta_timestamp_s"]
- self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t
- return images
-
- @property
- def camera_features(self) -> dict:
- cam_ft = {}
- for cam_key, cam in self.cameras.items():
- key = f"observation.images.{cam_key}"
- cam_ft[key] = {
- "shape": (cam.height, cam.width, cam.channels),
- "names": ["height", "width", "channels"],
- "info": None,
- }
- return cam_ft
-
-
- @property
- def motor_features(self) -> dict:
- action_names = get_motor_names(self.piper_motors)
- state_names = get_motor_names(self.piper_motors)
- return {
- "action": {
- "dtype": "float32",
- "shape": (len(action_names),),
- "names": action_names,
- },
- "observation.state": {
- "dtype": "float32",
- "shape": (len(state_names),),
- "names": state_names,
- },
- }
-
- @property
- def has_camera(self):
- return len(self.cameras) > 0
-
- @property
- def num_cameras(self):
- return len(self.cameras)
-
-
- def connect(self) -> None:
- """Connect RealmanArm and cameras"""
- if self.is_connected:
- raise RobotDeviceAlreadyConnectedError(
- "RealmanArm is already connected. Do not run `robot.connect()` twice."
- )
-
- # connect RealmanArm
- self.arm.connect(enable=True)
- print("RealmanArm conneted")
-
- # connect cameras
- for name in self.cameras:
- self.cameras[name].connect()
- self.is_connected = self.is_connected and self.cameras[name].is_connected
- print(f"camera {name} conneted")
-
- print("All connected")
- self.is_connected = True
-
- self.run_calibration()
-
-
- def disconnect(self) -> None:
- """move to home position, disenable piper and cameras"""
- # move piper to home position, disable
- if not self.inference_time:
- self.teleop.stop()
-
- # disconnect piper
- self.arm.safe_disconnect()
- print("RealmanArm disable after 5 seconds")
- time.sleep(5)
- self.arm.connect(enable=False)
-
- # disconnect cameras
- if len(self.cameras) > 0:
- for cam in self.cameras.values():
- cam.disconnect()
-
- self.is_connected = False
-
-
- def run_calibration(self):
- """move piper to the home position"""
- if not self.is_connected:
- raise ConnectionError()
-
- self.arm.apply_calibration()
- if not self.inference_time:
- self.teleop.reset()
-
- def teleop_step(
- self, record_data=False
- ) -> None | tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]:
- if not self.is_connected:
- raise ConnectionError()
-
- if self.teleop is None and self.inference_time:
- self.teleop = HybridController(self.init_info)
-
- try:
-            # read the current state
-            state = self._read_robot_state()
-            # get the teleop action
-            action = self.teleop.get_action(state)
-            self._execute_action(action, state)
-            # update the state queue
-            self._update_state_queue()
-            time.sleep(0.019) # ~50 Hz
-
- if record_data:
- data = self._prepare_record_data()
- if data[0]:
-                    # # ask_llm("Move the test tube from the rack at the lower-left of the ultrasound machine into the ultrasound machine", data[0])
- pass
- return data
-
- except Exception as e:
- logging.error(f"遥操作步骤失败: {e}")
- return None
-
-
- def send_action(self, action: torch.Tensor) -> torch.Tensor:
- """Write the predicted actions from policy to the motors"""
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- "Piper is not connected. You need to run `robot.connect()`."
- )
-
- # send to motors, torch to list
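-        # The action appears to be normalized: joint values in [-1, 1] are scaled by
-        # 180 back to degrees and the gripper command is mapped back to raw [0, 1000],
-        # mirroring the normalization applied in _update_state_queue when recording.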
- target_joints = action.tolist()
- len_joint = len(target_joints) - 1
- target_joints = [target_joints[i]*180 for i in range(len_joint)] + [target_joints[-1]]
- target_joints[-1] = int(target_joints[-1]*500 + 500)
- self.arm.write(target_joints)
-
- return action
-
-
- def capture_observation(self) -> dict:
- """capture current images and joint positions"""
- if not self.is_connected:
- raise RobotDeviceNotConnectedError(
- "Piper is not connected. You need to run `robot.connect()`."
- )
-
- # read current joint positions
- before_read_t = time.perf_counter()
- state = self.arm.read() # 6 joints + 1 gripper
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
-
- state = torch.as_tensor(list(state.values()), dtype=torch.float32)
-
- # read images from cameras
- images = {}
- for name in self.cameras:
- before_camread_t = time.perf_counter()
- images[name] = self.cameras[name].async_read()
- images[name] = torch.from_numpy(images[name])
- self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
- self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t
-
-        # Populate output dictionaries and format to pytorch
- obs_dict = {}
- obs_dict["observation.state"] = state
- for name in self.cameras:
- obs_dict[f"observation.images.{name}"] = images[name]
- return obs_dict
-
- def teleop_safety_stop(self):
- """ move to home position after record one episode """
- self.run_calibration()
-
-
- def __del__(self):
- if self.is_connected:
- self.disconnect()
- if not self.inference_time:
- self.teleop.stop()
-
-
-if __name__ == '__main__':
- robot = RealmanDualRobot()
- robot.connect()
- # robot.run_calibration()
- while True:
- robot.teleop_step(record_data=True)
-
- # robot.capture_observation()
- # dummy_action = torch.Tensor([-0.40586111280653214, 0.5522833506266276, 0.4998166826036241, -0.3539944542778863, -0.524433347913954, 0.9064999898274739, 0.482])
- # robot.send_action(dummy_action)
- # robot.disconnect()
- # print('ok')
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/robots/stretch.py b/lerobot/common/robot_devices/robots/stretch.py
deleted file mode 100644
index 9cfe6e49..00000000
--- a/lerobot/common/robot_devices/robots/stretch.py
+++ /dev/null
@@ -1,208 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import time
-from dataclasses import replace
-
-import torch
-from stretch_body.gamepad_teleop import GamePadTeleop
-from stretch_body.robot import Robot as StretchAPI
-from stretch_body.robot_params import RobotParams
-
-from lerobot.common.robot_devices.robots.configs import StretchRobotConfig
-
-
-class StretchRobot(StretchAPI):
- """Wrapper of stretch_body.robot.Robot"""
-
- def __init__(self, config: StretchRobotConfig | None = None, **kwargs):
- super().__init__()
- if config is None:
- self.config = StretchRobotConfig(**kwargs)
- else:
- # Overwrite config arguments using kwargs
- self.config = replace(config, **kwargs)
-
- self.robot_type = self.config.type
- self.cameras = self.config.cameras
- self.is_connected = False
- self.teleop = None
- self.logs = {}
-
- # TODO(aliberts): test this
- RobotParams.set_logging_level("WARNING")
- RobotParams.set_logging_formatter("brief_console_formatter")
-
- self.state_keys = None
- self.action_keys = None
-
- def connect(self) -> None:
- self.is_connected = self.startup()
- if not self.is_connected:
- print("Another process is already using Stretch. Try running 'stretch_free_robot_process.py'")
- raise ConnectionError()
-
- for name in self.cameras:
- self.cameras[name].connect()
- self.is_connected = self.is_connected and self.cameras[name].is_connected
-
- if not self.is_connected:
- print("Could not connect to the cameras, check that all cameras are plugged-in.")
- raise ConnectionError()
-
- self.run_calibration()
-
- def run_calibration(self) -> None:
- if not self.is_homed():
- self.home()
-
- def teleop_step(
- self, record_data=False
- ) -> None | tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]:
- # TODO(aliberts): return ndarrays instead of torch.Tensors
- if not self.is_connected:
- raise ConnectionError()
-
- if self.teleop is None:
- self.teleop = GamePadTeleop(robot_instance=False)
- self.teleop.startup(robot=self)
-
- before_read_t = time.perf_counter()
- state = self.get_state()
- action = self.teleop.gamepad_controller.get_state()
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
-
- before_write_t = time.perf_counter()
- self.teleop.do_motion(robot=self)
- self.push_command()
- self.logs["write_pos_dt_s"] = time.perf_counter() - before_write_t
-
- if self.state_keys is None:
- self.state_keys = list(state)
-
- if not record_data:
- return
-
- state = torch.as_tensor(list(state.values()))
- action = torch.as_tensor(list(action.values()))
-
- # Capture images from cameras
- images = {}
- for name in self.cameras:
- before_camread_t = time.perf_counter()
- images[name] = self.cameras[name].async_read()
- images[name] = torch.from_numpy(images[name])
- self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
- self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t
-
- # Populate output dictionaries
- obs_dict, action_dict = {}, {}
- obs_dict["observation.state"] = state
- action_dict["action"] = action
- for name in self.cameras:
- obs_dict[f"observation.images.{name}"] = images[name]
-
- return obs_dict, action_dict
-
- def get_state(self) -> dict:
- status = self.get_status()
- return {
- "head_pan.pos": status["head"]["head_pan"]["pos"],
- "head_tilt.pos": status["head"]["head_tilt"]["pos"],
- "lift.pos": status["lift"]["pos"],
- "arm.pos": status["arm"]["pos"],
- "wrist_pitch.pos": status["end_of_arm"]["wrist_pitch"]["pos"],
- "wrist_roll.pos": status["end_of_arm"]["wrist_roll"]["pos"],
- "wrist_yaw.pos": status["end_of_arm"]["wrist_yaw"]["pos"],
- "gripper.pos": status["end_of_arm"]["stretch_gripper"]["pos"],
- "base_x.vel": status["base"]["x_vel"],
- "base_y.vel": status["base"]["y_vel"],
- "base_theta.vel": status["base"]["theta_vel"],
- }
-
- def capture_observation(self) -> dict:
- # TODO(aliberts): return ndarrays instead of torch.Tensors
- before_read_t = time.perf_counter()
- state = self.get_state()
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
-
- if self.state_keys is None:
- self.state_keys = list(state)
-
- state = torch.as_tensor(list(state.values()))
-
- # Capture images from cameras
- images = {}
- for name in self.cameras:
- before_camread_t = time.perf_counter()
- images[name] = self.cameras[name].async_read()
- images[name] = torch.from_numpy(images[name])
- self.logs[f"read_camera_{name}_dt_s"] = self.cameras[name].logs["delta_timestamp_s"]
- self.logs[f"async_read_camera_{name}_dt_s"] = time.perf_counter() - before_camread_t
-
- # Populate output dictionaries
- obs_dict = {}
- obs_dict["observation.state"] = state
- for name in self.cameras:
- obs_dict[f"observation.images.{name}"] = images[name]
-
- return obs_dict
-
- def send_action(self, action: torch.Tensor) -> torch.Tensor:
- # TODO(aliberts): return ndarrays instead of torch.Tensors
- if not self.is_connected:
- raise ConnectionError()
-
- if self.teleop is None:
- self.teleop = GamePadTeleop(robot_instance=False)
- self.teleop.startup(robot=self)
-
- if self.action_keys is None:
- dummy_action = self.teleop.gamepad_controller.get_state()
- self.action_keys = list(dummy_action.keys())
-
- action_dict = dict(zip(self.action_keys, action.tolist(), strict=True))
-
- before_write_t = time.perf_counter()
- self.teleop.do_motion(state=action_dict, robot=self)
- self.push_command()
- self.logs["write_pos_dt_s"] = time.perf_counter() - before_write_t
-
- # TODO(aliberts): return action_sent when motion is limited
- return action
-
- def print_logs(self) -> None:
- pass
- # TODO(aliberts): move robot-specific logs logic here
-
- def teleop_safety_stop(self) -> None:
- if self.teleop is not None:
- self.teleop._safety_stop(robot=self)
-
- def disconnect(self) -> None:
- self.stop()
- if self.teleop is not None:
- self.teleop.gamepad_controller.stop()
- self.teleop.stop()
-
- if len(self.cameras) > 0:
- for cam in self.cameras.values():
- cam.disconnect()
-
- self.is_connected = False
-
- def __del__(self):
- self.disconnect()
diff --git a/lerobot/common/robot_devices/robots/utils.py b/lerobot/common/robot_devices/robots/utils.py
deleted file mode 100644
index 1f740fb4..00000000
--- a/lerobot/common/robot_devices/robots/utils.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-conversation_history = []
-conversation_client = None
-
-
-from typing import Protocol, Dict
-
-# Robot configuration imports
-from lerobot.common.robot_devices.robots.configs import (
- AlohaRobotConfig,
- KochBimanualRobotConfig,
- KochRobotConfig,
- LeKiwiRobotConfig,
- ManipulatorRobotConfig,
- MossRobotConfig,
- RobotConfig,
- So100RobotConfig,
- So101RobotConfig,
- StretchRobotConfig,
- RealmanRobotConfig,
- RealmanDualRobotConfig
-)
-
-# Added library imports for LLM interaction
-from openai import OpenAI
-import base64
-import os
-import cv2
-import torch
-import numpy as np
-from pynput import keyboard
-import time
-import json
-from datetime import datetime
-
-def get_arm_id(name, arm_type):
- """Returns the string identifier of a robot arm."""
- return f"{name}_{arm_type}"
-
-
-class Robot(Protocol):
- robot_type: str
- features: dict
- cameras: Dict
-
- def connect(self): ...
- def run_calibration(self): ...
- def teleop_step(self, record_data=False): ...
- def capture_observation(self) -> Dict: ...
- def send_action(self, action): ...
- def disconnect(self): ...
-
-
-def make_robot_config(robot_type: str, **kwargs) -> RobotConfig:
-    # ... (function body unchanged) ...
- if robot_type == "aloha":
- return AlohaRobotConfig(**kwargs)
- elif robot_type == "koch":
- return KochRobotConfig(**kwargs)
- elif robot_type == "koch_bimanual":
- return KochBimanualRobotConfig(**kwargs)
- elif robot_type == "moss":
- return MossRobotConfig(**kwargs)
- elif robot_type == "so100":
- return So100RobotConfig(**kwargs)
- elif robot_type == "so101":
- return So101RobotConfig(**kwargs)
- elif robot_type == "stretch":
- return StretchRobotConfig(**kwargs)
- elif robot_type == "lekiwi":
- return LeKiwiRobotConfig(**kwargs)
- elif robot_type == 'realman':
- return RealmanRobotConfig(**kwargs)
- elif robot_type == 'realman_dual':
- return RealmanDualRobotConfig(**kwargs)
- else:
- raise ValueError(f"Robot type '{robot_type}' is not available.")
-
-
-def make_robot_from_config(config: RobotConfig):
-    # ... (function body unchanged) ...
- if isinstance(config, ManipulatorRobotConfig):
- from lerobot.common.robot_devices.robots.manipulator import ManipulatorRobot
- return ManipulatorRobot(config)
- elif isinstance(config, LeKiwiRobotConfig):
- from lerobot.common.robot_devices.robots.mobile_manipulator import MobileManipulator
- return MobileManipulator(config)
- elif isinstance(config, RealmanRobotConfig):
- from lerobot.common.robot_devices.robots.realman import RealmanRobot
- return RealmanRobot(config)
- elif isinstance(config, RealmanDualRobotConfig):
- from lerobot.common.robot_devices.robots.realman_dual import RealmanDualRobot
- return RealmanDualRobot(config)
- else:
- from lerobot.common.robot_devices.robots.stretch import StretchRobot
- return StretchRobot(config)
-
-
-def make_robot(robot_type: str, **kwargs) -> Robot:
- config = make_robot_config(robot_type, **kwargs)
- return make_robot_from_config(config)
-
-# -------------------- LLM interaction helpers --------------------
-
-def encode_image_to_base64(image_tensor: torch.Tensor) -> str:
- """将PyTorch张量格式的图像编码为Base64字符串"""
- image_np = image_tensor.cpu().numpy().astype(np.uint8)
- image_bgr = cv2.cvtColor(image_np, cv2.COLOR_RGB2BGR)
- success, buffer = cv2.imencode(".jpg", image_bgr)
- if not success:
- raise ValueError("图像编码失败")
- return base64.b64encode(buffer).decode("utf-8")
-
-# Global variables used to manage the LLM conversation session
-conversation_history = []
-conversation_client = None
-
-def ask_llm(query: str, state: dict):
-
- prompt = """ """
-
- api_key = os.getenv("OPENAI_API_KEY")
- base_url = os.getenv("OPENAI_BASE_URL")
- client = OpenAI(api_key=api_key, base_url=base_url)
- # keys = [key for key in state]
- # import pdb
- # pdb.set_trace()
-
-
- pass
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/teleop/find_gamepad.py b/lerobot/common/robot_devices/teleop/find_gamepad.py
deleted file mode 100644
index 71bf2c2e..00000000
--- a/lerobot/common/robot_devices/teleop/find_gamepad.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import pygame
-
-def find_controller_index():
-    # collect the index and device name of every connected pygame joystick, keyed by GUID
- pygame.init()
- pygame_joysticks = {}
- if pygame.joystick.get_count():
- for i in range(pygame.joystick.get_count()):
- joystick = pygame.joystick.Joystick(i)
- joystick.init()
- pygame_joysticks[joystick.get_guid()] = {
- 'index': i,
- 'device_name': joystick.get_name()
- }
-
- return pygame_joysticks
-
-
-if __name__ == '__main__':
- print(find_controller_index())
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/teleop/gamepad.py b/lerobot/common/robot_devices/teleop/gamepad.py
deleted file mode 100644
index b40e3098..00000000
--- a/lerobot/common/robot_devices/teleop/gamepad.py
+++ /dev/null
@@ -1,421 +0,0 @@
-import pygame
-import threading
-import time
-import logging
-from typing import Dict
-from dataclasses import dataclass
-from lerobot.common.robot_devices.teleop.find_gamepad import find_controller_index
-from lerobot.common.robot_devices.teleop.servo_server import ServoArmServer
-
-
-class RealmanAlohaMaster:
- def __init__(self, config):
- self.config = config
- self._initialize_master_arm()
-
- def _initialize_master_arm(self):
- """初始化主控臂"""
- try:
- self.master_dual_arm = ServoArmServer(self.config.config_file)
- except Exception as e:
- logging.error(f"初始化主控臂失败: {e}")
- raise
-
- def get_action(self) -> Dict:
- """获取控制动作"""
- try:
- master_joint_actions = self.master_dual_arm.get_joint_data()
- return self._format_action(master_joint_actions)
- except Exception as e:
- logging.error(f"获取动作失败: {e}")
-
- def _format_action(self, master_joint_actions: dict) -> dict:
- """格式化动作数据"""
- master_controller_status = {
- 'left': master_joint_actions['left_controller_status'],
- 'right': master_joint_actions['right_controller_status']
- }
-
- return {
- 'control_mode': 'joint',
- 'master_joint_actions': master_joint_actions['dual_joint_actions'],
- 'left_joint_actions': master_joint_actions['left_joint_actions'][:-1],
- 'right_joint_actions': master_joint_actions['right_joint_actions'][:-1],
-            'left_gripper_actions': master_joint_actions['left_joint_actions'][-1], # use the last element (gripper)
- 'right_gripper_actions': master_joint_actions['right_joint_actions'][-1],
- 'master_controller_status': master_controller_status
- }
-
- def stop(self):
- """停止控制器"""
- try:
- if hasattr(self, 'master_dual_arm') and self.master_dual_arm:
- self.master_dual_arm.shutdown()
- print("混合控制器已退出")
- except Exception as e:
- logging.error(f"停止控制器失败: {e}")
-
-
-
-class DummyEndposeMaster:
- def __init__(self, config):
-        # initialize pygame
-        pygame.init()
-        pygame.joystick.init()
-        # gather info for all connected USB game controllers
- self.joysticks = find_controller_index()
- print(self.joysticks)
- self.control_info = config.end_control_info
- left_stick = self._init_stick('left')
- right_stick = self._init_stick('right')
- self.controllers = [left_stick, right_stick]
-
- def _init_stick(self, arm_name:str):
- stick_info = {}
- stick_info['index'] = self.joysticks[self.control_info[arm_name]]['index']
- stick_info['guid'] = self.control_info[arm_name]
- stick_info['name'] = f'{arm_name}'
- device_name = self.joysticks[self.control_info[arm_name]]['device_name']
- stick = XboxStick(stick_info) if "Xbox" in device_name else FlightStick(stick_info)
- stick.start_polling()
- return stick
-
- def get_action(self, state) -> Dict:
- from copy import deepcopy
-
- new_state = deepcopy(state)
- gamepad_action = {}
- xyz = []
- rxryrz = []
- gripper = []
- """获取控制动作"""
- try:
- for i, controller in enumerate(self.controllers):
- # states = controller.get_raw_states()
- gamepad_action.update(controller.get_control_signal(controller.name))
- xyz += [f"{controller.name}_x", f"{controller.name}_y", f"{controller.name}_z"]
- rxryrz += [f"{controller.name}_joint_4", f"{controller.name}_joint_5", f"{controller.name}_joint_6"]
- gripper += [f"{controller.name}_gripper"]
-
- for name in xyz:
- new_state['pose'][name] += (gamepad_action[name] * gamepad_action['xyz_vel'] * gamepad_action[name.split('_')[0]+'_ratio'])
-
- for name in gripper:
- new_state['joint'][name] += int(gamepad_action[name] * gamepad_action['gripper_vel'] * gamepad_action[name.split('_')[0]+'_ratio'])
- new_state['joint'][name] = min(990, max(0, new_state['joint'][name]))
-
- for name in rxryrz:
- new_state['joint'][name] += (gamepad_action[name] * gamepad_action['rxyz_vel'] * gamepad_action[name.split('_')[0]+'_ratio'])
-
- new_state['control_mode'] = 'endpose'
- return new_state
-
- except Exception as e:
- logging.error(f"获取动作失败: {e}")
-
- def stop(self):
- """停止控制器"""
- try:
- # 停止轮询线程
- for controller in self.controllers:
- controller.stop_polling()
- except Exception as e:
- logging.error(f"停止控制器失败: {e}")
-
-
-
-class ControllerBase:
- def __init__(self, joystick_info: dict):
-        # initialize the joystick object
- self.joystick = pygame.joystick.Joystick(joystick_info['index'])
- self.joystick.init()
- self.name = joystick_info['name']
- self.guid = joystick_info['guid']
-
-        # dictionary holding all controller state
-        self.states = {
-            'buttons': [False] * self.joystick.get_numbuttons(), # button states
-            'axes': [0.0] * self.joystick.get_numaxes(), # stick and axis states
-            'hats': [(0, 0)] * self.joystick.get_numhats() # hat (D-pad) states
- }
-
- # deadzone
- self.deadzone = 0.15
- # validzone
- self.validzone = 0.05
- self.ratio = 1
- self.gripper_vel = 100
- self.rxyz_vel = 5
- self.xyz_vel = 0.02
- self.scale_up = 2
- self.scale_down = 10
-
-        # thread control flag
- self.running = False
-
- def start_polling(self):
- """启动线程以轮询控制器状态"""
- if not self.running:
- self.running = True
- self.thread = threading.Thread(target=self._poll_controller)
- self.thread.start()
-
- def stop_polling(self):
- """停止线程"""
- if self.running:
- self.running = False
- self.thread.join()
-
- def _poll_controller(self):
- """后台线程函数,用于轮询控制器状态"""
- while self.running:
-            # process pygame events
- pygame.event.pump()
-
-            # read button states
- for i in range(self.joystick.get_numbuttons()):
- self.states['buttons'][i] = self.joystick.get_button(i)
-
-            # read stick/axis states (typically in the range -1.0 to 1.0)
- for i in range(self.joystick.get_numaxes()):
- self.states['axes'][i] = self.joystick.get_axis(i)
-
-            # read hat states (a tuple (x, y) with values -1, 0 or 1)
- for i in range(self.joystick.get_numhats()):
- self.states['hats'][i] = self.joystick.get_hat(i)
-
-            # limit the polling rate
- time.sleep(0.01)
-
- def get_raw_states(self):
- """获取当前控制器状态"""
- return self.states
-
-class FlightStick(ControllerBase):
- def __init__(self, joystick_info):
- super().__init__(joystick_info)
-
- def get_x_control_signal(self):
- x = 0
- if self.states['axes'][0] > self.validzone:
- x = 1
- elif self.states['axes'][0] < -self.validzone:
- x = -1
- return x
-
- def get_y_control_signal(self):
- y = 0
- if self.states['axes'][1] > self.validzone:
- y = -1
- elif self.states['axes'][1] < -self.validzone:
- y = 1
- return y
-
- def get_z_control_signal(self):
- z = 0
- if self.states['buttons'][0]:
- z = 1
- elif self.states['buttons'][1]:
- z = -1
- return z
-
- def get_gripper_control_signal(self):
- gripper = 0
- if self.states['buttons'][2] == 1:
- gripper = 1
- elif self.states['buttons'][3] == 1:
- gripper = -1
- return gripper
-
- def get_ratio_control_signal(self):
- ratio = self.ratio
- if self.states['axes'][2] > 0.8:
- ratio = self.ratio / self.scale_down
- elif self.states['axes'][2] < -0.8:
- ratio = self.ratio * self.scale_up
- return ratio
-
- def get_rx_control_signal(self):
- rx = 0
- if self.states['hats'][0][0] == -1:
- rx = 1
- elif self.states['hats'][0][0] == 1:
- rx = -1
- else:
- rx = 0
- return rx
-
- def get_ry_control_signal(self):
- ry = 0
- if self.states['hats'][0][1] == 1:
- ry = -1
- elif self.states['hats'][0][1] == -1:
- ry = 1
- else:
- ry = 0
- return ry
-
- def get_rz_control_signal(self):
- rz = 0
- if self.states['axes'][3] < -self.validzone:
- rz = -1
- elif self.states['axes'][3] > self.validzone:
- rz = 1
- else:
- rz = 0
- return rz
-
- def get_control_signal(self, prefix: str = ""):
- """获取所有控制信号"""
- return {
- f'{prefix}_x': self.get_x_control_signal(),
- f'{prefix}_y': self.get_y_control_signal(),
- f'{prefix}_z': self.get_z_control_signal(),
- f'{prefix}_joint_4': self.get_rx_control_signal(),
- f'{prefix}_joint_5': self.get_ry_control_signal(),
- f'{prefix}_joint_6': self.get_rz_control_signal(),
- f'{prefix}_gripper': self.get_gripper_control_signal(),
- f'{prefix}_ratio': self.get_ratio_control_signal(),
- 'gripper_vel': self.gripper_vel,
- 'rxyz_vel': self.rxyz_vel,
- 'xyz_vel': self.xyz_vel
- }
-
-
-
-class XboxStick(ControllerBase):
- def __init__(self, joystick_info: dict):
- super().__init__(joystick_info)
-
- def get_x_control_signal(self):
- """获取 X 轴控制信号"""
- x = 0
- if self.states['hats'][0][0] == -1:
- x = 1
- elif self.states['hats'][0][0] == 1:
- x = -1
- return x
-
- def get_y_control_signal(self):
- """获取 Y 轴控制信号"""
- y = 0
- if self.states['hats'][0][1] == 1:
- y = -1
- elif self.states['hats'][0][1] == -1:
- y = 1
- return y
-
- def get_z_control_signal(self):
- """获取 Z 轴控制信号"""
- z = 0
- if self.states['axes'][4] > self.deadzone: # A 按钮
- z = -1
- elif self.states['axes'][4] < -self.deadzone: # B 按钮
- z = 1
- return z
-
- def get_ratio_control_signal(self):
- """获取速度控制信号"""
- ratio = self.ratio
- if self.states['axes'][2] > 0.8: # LT 按钮
- ratio = self.ratio * self.scale_up
- elif self.states['axes'][5] > 0.8: # RT 按钮
- ratio = self.ratio / self.scale_down
- return ratio
-
- def get_gripper_control_signal(self):
- gripper = 0
- if self.states['buttons'][0] == 1:
- gripper = 1
- elif self.states['buttons'][1] == 1:
- gripper = -1
- return gripper
-
- def get_rx_control_signal(self):
- """获取 RX 轴控制信号"""
- rx = 0
- if self.states['axes'][0] > self.deadzone: # 左舵
- rx = -1
- elif self.states['axes'][0] < -self.deadzone: # 右舵
- rx = 1
- return rx
-
- def get_ry_control_signal(self):
- """获取 RY 轴控制信号"""
- ry = 0
- if self.states['axes'][1] > self.deadzone: # 上舵
- ry = 1
- elif self.states['axes'][1] < -self.deadzone: # 下舵
- ry = -1
- return ry
-
- def get_rz_control_signal(self):
- """获取 RZ 轴控制信号"""
- rz = 0
- if self.states['buttons'][4] == 1: # 左摇杆
- rz = 1
- elif self.states['buttons'][5] == 1: # 右摇杆
- rz = -1
- return rz
-
- def get_control_signal(self, prefix: str = ""):
- """获取所有控制信号"""
- return {
- f'{prefix}_x': self.get_x_control_signal(),
- f'{prefix}_y': self.get_y_control_signal(),
- f'{prefix}_z': self.get_z_control_signal(),
- f'{prefix}_joint_4': self.get_rx_control_signal(),
- f'{prefix}_joint_5': self.get_ry_control_signal(),
- f'{prefix}_joint_6': self.get_rz_control_signal(),
- f'{prefix}_gripper': self.get_gripper_control_signal(),
- f'{prefix}_ratio': self.get_ratio_control_signal(),
- 'gripper_vel': self.gripper_vel,
- 'rxyz_vel': self.rxyz_vel,
- 'xyz_vel': self.xyz_vel
- }
-
-
-@dataclass
-class ControllerConfig:
- """控制器配置"""
- init_joint: list
- init_pose: list
- max_gripper: int
- min_gripper: int
- config_file: str
- end_control_info: dict
-
-
-def parse_init_info(init_info: dict) -> ControllerConfig:
- """解析初始化信息"""
- return ControllerConfig(
- init_joint=init_info['init_joint'],
- init_pose=init_info.get('init_pose', [0]*12),
- max_gripper=init_info['max_gripper'],
- min_gripper=init_info['min_gripper'],
- config_file=init_info['servo_config_file'],
- end_control_info=init_info['end_control_info']
- )
-
-
-
-if __name__ == "__main__":
- config = {
- 'init_joint': {'joint': [-170, 90, 0, 90, 120, 0, 10, 170, 90, 0, -90, 120, 0, 10]},
- 'init_pose': {},
- 'max_gripper': {},
- 'min_gripper': {},
- 'servo_config_file': {},
- 'end_control_info': {'left': "0300b14bff1100003708000010010000" , 'right': '0300509d5e040000120b000009050000'}
- }
- config = parse_init_info(config)
- endpose_arm = DummyEndposeMaster(config)
- while True:
- gamepad_action = {}
- xyz = []
- for i, controller in enumerate(endpose_arm.controllers):
- # states = controller.get_raw_states()
- gamepad_action.update(controller.get_control_signal(controller.name))
- xyz += [f"{controller.name}_x", f"{controller.name}_y", f"{controller.name}_z"]
- time.sleep(1)
- print(gamepad_action)
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/teleop/realman_aloha_dual.py b/lerobot/common/robot_devices/teleop/realman_aloha_dual.py
deleted file mode 100644
index 559c6f46..00000000
--- a/lerobot/common/robot_devices/teleop/realman_aloha_dual.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import time
-import logging
-from typing import Dict
-from dataclasses import dataclass
-from lerobot.common.robot_devices.teleop.gamepad import RealmanAlohaMaster, DummyEndposeMaster
-
-
-@dataclass
-class ControllerConfig:
- """控制器配置"""
- init_joint: list
- init_pose: list
- max_gripper: int
- min_gripper: int
- config_file: str
- end_control_info: dict
-
-
-class HybridController:
- def __init__(self, init_info):
- self.config = self._parse_init_info(init_info)
- self.joint = self.config.init_joint.copy()
- self.pose = self.config.init_pose.copy()
-
- self.joint_arm = RealmanAlohaMaster(self.config)
- self.endpose_arm = DummyEndposeMaster(self.config)
-
- def _parse_init_info(self, init_info: dict) -> ControllerConfig:
- """解析初始化信息"""
- return ControllerConfig(
- init_joint=init_info['init_joint'],
- init_pose=init_info.get('init_pose', [0]*12),
- max_gripper=init_info['max_gripper'],
- min_gripper=init_info['min_gripper'],
- config_file=init_info['servo_config_file'],
- end_control_info=init_info['end_control_info']
- )
-
- def get_action(self, state) -> Dict:
- """获取控制动作"""
- try:
- endpose_action = self.endpose_arm.get_action(state)
- return endpose_action
- # return self.joint_arm.get_action()
-
- except Exception as e:
- logging.error(f"获取动作失败: {e}")
-
- def stop(self):
- self.joint_arm.stop()
-
- def reset(self):
- """重置控制器"""
- self.joint = self.config.init_joint.copy()
- self.pose = self.config.init_pose.copy()
- self.joint_control_mode = True
-
-
-# usage example
-if __name__ == "__main__":
- init_info = {
- 'init_joint': [-175, 90, 90, 45, 90, -90, 10, 175, 90, 90, -45, 90, 90, 10],
- 'init_pose': [[-0.0305, 0.125938, 0.13153, 3.141, 0.698, -1.57, -0.030486, -0.11487, 0.144707, 3.141, 0.698, 1.57]],
- 'max_gripper': 990,
- 'min_gripper': 10,
- 'servo_config_file': '/home/maic/LYT/lerobot/lerobot/common/robot_devices/teleop/servo_dual.yaml',
- 'end_control_info': {'left': '0300b14bff1100003708000010010000', 'right': '030003f05e0400008e02000010010000'}
- }
- arm_controller = HybridController(init_info)
- time.sleep(1)
- try:
- while True:
- print(arm_controller.get_action())
- time.sleep(1)
- except KeyboardInterrupt:
- arm_controller.stop()
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/teleop/realman_mix.yaml b/lerobot/common/robot_devices/teleop/realman_mix.yaml
deleted file mode 100644
index f26386b6..00000000
--- a/lerobot/common/robot_devices/teleop/realman_mix.yaml
+++ /dev/null
@@ -1,4 +0,0 @@
-init_joint: [-90, 90, 90, -90, -90, 90]
-max_gripper: 990
-min_gripper: 10
-servo_config_file: "/home/maic/LYT/lerobot/lerobot/common/robot_devices/teleop/servo_arm.yaml"
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/teleop/realman_single.py b/lerobot/common/robot_devices/teleop/realman_single.py
deleted file mode 100644
index 330d3ab1..00000000
--- a/lerobot/common/robot_devices/teleop/realman_single.py
+++ /dev/null
@@ -1,466 +0,0 @@
-import pygame
-import threading
-import time
-import serial
-import binascii
-import logging
-import yaml
-from typing import Dict
-from Robotic_Arm.rm_robot_interface import *
-
-
-
-class ServoArm:
- def __init__(self, config_file="config.yaml"):
- """初始化机械臂的串口连接并发送初始数据。
-
- Args:
- config_file (str): 配置文件的路径。
- """
- self.config = self._load_config(config_file)
- self.port = self.config["port"]
- self.baudrate = self.config["baudrate"]
- self.joint_hex_data = self.config["joint_hex_data"]
- self.control_hex_data = self.config["control_hex_data"]
- self.arm_axis = self.config.get("arm_axis", 7)
-
- try:
- self.serial_conn = serial.Serial(self.port, self.baudrate, timeout=0)
- self.bytes_to_send = binascii.unhexlify(self.joint_hex_data.replace(" ", ""))
- self.serial_conn.write(self.bytes_to_send)
- time.sleep(1)
- self.connected = True
- logging.info(f"串口连接成功: {self.port}")
- except Exception as e:
- logging.error(f"串口连接失败: {e}")
- self.connected = False
-
- def _load_config(self, config_file):
- """加载配置文件。
-
- Args:
- config_file (str): 配置文件的路径。
-
- Returns:
- dict: 配置文件内容。
- """
- try:
- with open(config_file, "r") as file:
- config = yaml.safe_load(file)
- return config
- except Exception as e:
- logging.error(f"配置文件加载失败: {e}")
- # 返回默认配置
- return {
- "port": "/dev/ttyUSB0",
- "baudrate": 460800,
- "joint_hex_data": "55 AA 02 00 00 67",
- "control_hex_data": "55 AA 08 00 00 B9",
- "arm_axis": 6
- }
-
- def _bytes_to_signed_int(self, byte_data):
- """将字节数据转换为有符号整数。
-
- Args:
- byte_data (bytes): 字节数据。
-
- Returns:
- int: 有符号整数。
- """
- return int.from_bytes(byte_data, byteorder="little", signed=True)
-
- def _parse_joint_data(self, hex_received):
- """解析接收到的十六进制数据并提取关节数据。
-
- Args:
- hex_received (str): 接收到的十六进制字符串数据。
-
- Returns:
- dict: 解析后的关节数据。
- """
- logging.debug(f"hex_received: {hex_received}")
- joints = {}
- for i in range(self.arm_axis):
- start = 14 + i * 10
- end = start + 8
- joint_hex = hex_received[start:end]
- joint_byte_data = bytearray.fromhex(joint_hex)
- joint_value = self._bytes_to_signed_int(joint_byte_data) / 10000.0
- joints[f"joint_{i+1}"] = joint_value
- grasp_start = 14 + self.arm_axis*10
- grasp_hex = hex_received[grasp_start:grasp_start+8]
- grasp_byte_data = bytearray.fromhex(grasp_hex)
-        # normalize the gripper value
- grasp_value = self._bytes_to_signed_int(grasp_byte_data)/1000
-
- joints["grasp"] = grasp_value
- return joints
-
- def _parse_controller_data(self, hex_received):
- status = {
- 'infrared': 0,
- 'button': 0
- }
- if len(hex_received) == 18:
- status['infrared'] = self._bytes_to_signed_int(bytearray.fromhex(hex_received[12:14]))
- status['button'] = self._bytes_to_signed_int(bytearray.fromhex(hex_received[14:16]))
- # print(infrared)
- return status
-
- def get_joint_actions(self):
- """从串口读取数据并解析关节动作。
-
- Returns:
- dict: 包含关节数据的字典。
- """
- if not self.connected:
- return {}
-
- try:
- self.serial_conn.write(self.bytes_to_send)
- time.sleep(0.02)
- bytes_received = self.serial_conn.read(self.serial_conn.inWaiting())
- if len(bytes_received) == 0:
- return {}
-
- hex_received = binascii.hexlify(bytes_received).decode("utf-8").upper()
- actions = self._parse_joint_data(hex_received)
- return actions
- except Exception as e:
- logging.error(f"读取串口数据错误: {e}")
- return {}
-
- def get_controller_status(self):
- bytes_to_send = binascii.unhexlify(self.control_hex_data.replace(" ", ""))
- self.serial_conn.write(bytes_to_send)
- time.sleep(0.02)
- bytes_received = self.serial_conn.read(self.serial_conn.inWaiting())
- hex_received = binascii.hexlify(bytes_received).decode("utf-8").upper()
- # print("control status:", hex_received)
- status = self._parse_controller_data(hex_received)
- return status
-
- def close(self):
- """关闭串口连接"""
- if self.connected and hasattr(self, 'serial_conn'):
- self.serial_conn.close()
- self.connected = False
- logging.info("串口连接已关闭")
-
-
-class HybridController:
- def __init__(self, init_info):
-        # initialize pygame and the gamepad
-        pygame.init()
-        pygame.joystick.init()
-
-        # check that a gamepad is connected
-        if pygame.joystick.get_count() == 0:
-            raise Exception("No gamepad detected")
-
-        # initialize the gamepad
-        self.joystick = pygame.joystick.Joystick(0)
-        self.joystick.init()
-        # stick deadzone
-        self.deadzone = 0.15
-        # control mode: True for joint control (primary mode), False for end-effector control
-        self.joint_control_mode = True
-        # fine control mode
-        self.fine_control_mode = False
-
-        # initialize the end-effector pose and joint angles
- self.init_joint = init_info['init_joint']
- self.init_pose = init_info.get('init_pose', [0]*6)
- self.max_gripper = init_info['max_gripper']
- self.min_gripper = init_info['min_gripper']
- servo_config_file = init_info['servo_config_file']
- self.joint = self.init_joint.copy()
- self.pose = self.init_pose.copy()
- self.pose_speeds = [0.0] * 6
- self.joint_speeds = [0.0] * 6
- self.tozero = False
-
-        # master-arm joint state
- self.master_joint_actions = {}
- self.master_controller_status = {}
- self.use_master_arm = False
-
-        # end-effector pose limits
- self.pose_limits = [
- (-0.800, 0.800), # X (m)
- (-0.800, 0.800), # Y (m)
- (-0.800, 0.800), # Z (m)
- (-3.14, 3.14), # RX (rad)
- (-3.14, 3.14), # RY (rad)
- (-3.14, 3.14) # RZ (rad)
- ]
-
-        # joint angle limits (degrees)
- self.joint_limits = [
- (-180, 180), # joint 1
- (-180, 180), # joint 2
- (-180, 180), # joint 3
- (-180, 180), # joint 4
- (-180, 180), # joint 5
- (-180, 180) # joint 6
- ]
-
-        # control parameters
-        self.linear_step = 0.002  # linear step (m)
-        self.angular_step = 0.01  # angular step (rad)
-
-        # gripper state and speed
- self.gripper_speed = 10
- self.gripper = self.min_gripper
-
-        # initialize serial communication (reads the master-arm joint state)
- self.servo_arm = None
- if servo_config_file:
- try:
- self.servo_arm = ServoArm(servo_config_file)
- self.use_master_arm = True
- logging.info("串口主臂连接成功,启用主从控制模式")
- except Exception as e:
- logging.error(f"串口主臂连接失败: {e}")
- self.use_master_arm = False
-
-        # start the update thread
- self.running = True
- self.thread = threading.Thread(target=self.update_controller)
- self.thread.start()
-
- print("混合控制器已启动")
- print("主控制模式: 关节控制")
- if self.use_master_arm:
- print("主从控制: 启用")
- print("Back按钮: 切换控制模式(关节/末端)")
- print("L3按钮: 切换精细控制模式")
- print("Start按钮: 重置到初始位置")
-
- def _apply_nonlinear_mapping(self, value):
-        """Apply a nonlinear mapping to improve control precision."""
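-        # Square the magnitude while keeping the sign: small stick deflections are
-        # damped, which makes fine corrections easier.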
- sign = 1 if value >= 0 else -1
- return sign * (abs(value) ** 2)
-
- def _normalize_angle(self, angle):
-        """Normalize an angle to the range [-π, π]."""
- import math
- while angle > math.pi:
- angle -= 2 * math.pi
- while angle < -math.pi:
- angle += 2 * math.pi
- return angle
-
- def update_controller(self):
- while self.running:
- try:
- pygame.event.pump()
- except Exception as e:
-                print(f"Controller error: {e}")
- self.stop()
- continue
-
-            # Check for a control-mode toggle (Back button)
-            if self.joystick.get_button(6):  # Back button
-                self.joint_control_mode = not self.joint_control_mode
-                mode_str = "joint control" if self.joint_control_mode else "end-effector pose control"
-                print(f"Switched to {mode_str} mode")
-                time.sleep(0.3)  # Debounce to avoid repeated triggers
-
-            # Check for a fine-control toggle (L3 button)
-            if self.joystick.get_button(10):  # L3 button
-                self.fine_control_mode = not self.fine_control_mode
-                print(f"Switched to {'fine' if self.fine_control_mode else 'normal'} control mode")
-                time.sleep(0.3)  # Debounce to avoid repeated triggers
-
-            # Check the reset button (Start button)
-            if self.joystick.get_button(7):  # Start button
-                print("Resetting the arm to its initial position...")
-                # print("init_joint", self.init_joint.copy())
-                self.tozero = True
-                self.joint = self.init_joint.copy()
-                self.pose = self.init_pose.copy()
-                self.pose_speeds = [0.0] * 6
-                self.joint_speeds = [0.0] * 6
-                self.gripper_speed = 10
-                self.gripper = self.min_gripper
-                print("Arm reset to its initial position")
-                time.sleep(0.3)  # Debounce to avoid repeated triggers
-
-            # Read the leader-arm joint state over serial
- if self.servo_arm and self.servo_arm.connected:
- try:
- self.master_joint_actions = self.servo_arm.get_joint_actions()
- self.master_controller_status = self.servo_arm.get_controller_status()
- if self.master_joint_actions:
-                        logging.debug(f"Leader-arm joint state: {self.master_joint_actions}")
-
-                except Exception as e:
-                    logging.error(f"Error reading leader-arm state: {e}")
- self.master_joint_actions = {}
- # print(self.master_joint_actions)
-
-            # Update the control logic according to the current mode
-            if self.joint_control_mode:
-                # Joint-control mode: prefer leader-arm data, with the Xbox controller as a fallback
-                self.update_joint_control()
-            else:
-                # End-effector mode: controlled with the Xbox controller
- self.update_end_pose()
- time.sleep(0.02)
- # print('gripper:', self.gripper)
-
- def update_joint_control(self):
-        """Update joint-angle control, preferring leader-arm data."""
-        if self.use_master_arm and self.master_joint_actions:
-            # Leader-follower mode: use the leader arm's joint angles directly
-            try:
-                # Map the leader-arm joint angles onto the follower arm
-                for i in range(6):  # Assume only 6 joints need to be controlled
-                    joint_key = f"joint_{i+1}"
-                    if joint_key in self.master_joint_actions:
-                        # Use the leader-arm joint angle directly (already in degrees)
-                        self.joint[i] = self.master_joint_actions[joint_key]
-
-                        # Apply joint limits
-                        min_val, max_val = self.joint_limits[i]
-                        self.joint[i] = max(min_val, min(max_val, self.joint[i]))
-
-                # print(self.joint)
-                logging.debug(f"Leader joints mapped to follower: {self.joint[:6]}")
-
-            except Exception as e:
-                logging.error(f"Error mapping leader-arm data: {e}")
-
-        # If leader-arm gripper data is available, use the leader gripper state
- if self.use_master_arm and "grasp" in self.master_joint_actions:
- self.gripper = self.master_joint_actions["grasp"] * 1000
- self.joint[-1] = self.gripper
-
-
- def update_end_pose(self):
-        """Update end-effector pose control."""
-        # Adjust the step sizes for the control mode (fine mode scales both down 10x)
- current_linear_step = self.linear_step * (0.1 if self.fine_control_mode else 1.0)
- current_angular_step = self.angular_step * (0.1 if self.fine_control_mode else 1.0)
-
-        # D-pad controls X/Y
- hat = self.joystick.get_hat(0)
- hat_up = hat[1] == 1 # Y+
- hat_down = hat[1] == -1 # Y-
- hat_left = hat[0] == -1 # X-
- hat_right = hat[0] == 1 # X+
-
-        # Right stick controls Z
-        right_y_raw = -self.joystick.get_axis(4)
-        # Left stick controls RZ
-        left_y_raw = -self.joystick.get_axis(1)
-
-        # Apply the dead zone
- right_y = 0.0 if abs(right_y_raw) < self.deadzone else right_y_raw
- left_y = 0.0 if abs(left_y_raw) < self.deadzone else left_y_raw
-
-        # Compute the per-axis speeds
- self.pose_speeds[1] = current_linear_step if hat_up else (-current_linear_step if hat_down else 0.0) # Y
- self.pose_speeds[0] = -current_linear_step if hat_left else (current_linear_step if hat_right else 0.0) # X
-
-        # Set the Z speed (right-stick Y axis)
- z_mapping = self._apply_nonlinear_mapping(right_y)
- self.pose_speeds[2] = z_mapping * current_linear_step # Z
-
-        # LB/RB buttons control RX rotation
- LB = self.joystick.get_button(4) # RX-
- RB = self.joystick.get_button(5) # RX+
- self.pose_speeds[3] = (-current_angular_step if LB else (current_angular_step if RB else 0.0))
-
-        # Triangle/Square buttons control RY rotation
- triangle = self.joystick.get_button(2) # RY+
- square = self.joystick.get_button(3) # RY-
- self.pose_speeds[4] = (current_angular_step if triangle else (-current_angular_step if square else 0.0))
-
-        # Left-stick Y axis controls RZ rotation
- rz_mapping = self._apply_nonlinear_mapping(left_y)
- self.pose_speeds[5] = rz_mapping * current_angular_step * 2 # RZ
-
-        # Gripper control (Circle/Cross)
-        circle = self.joystick.get_button(1)  # open gripper
-        cross = self.joystick.get_button(0)  # close gripper
- if circle:
- self.gripper = min(self.max_gripper, self.gripper + self.gripper_speed)
- elif cross:
- self.gripper = max(self.min_gripper, self.gripper - self.gripper_speed)
-
-        # Update the end-effector pose
- for i in range(6):
- self.pose[i] += self.pose_speeds[i]
-
-        # Normalize the rotation angles
- for i in range(3, 6):
- self.pose[i] = self._normalize_angle(self.pose[i])
-
- def update_joint_state(self, joint):
- self.joint = joint
- # self.tozero = False
-
- def update_endpose_state(self, end_pose):
- self.pose = end_pose
- # self.tozero = False
-
- def update_tozero_state(self, tozero):
- self.tozero = tozero
-
-
- def get_action(self) -> Dict:
-        """Return the current control command."""
- return {
- 'control_mode': 'joint' if self.joint_control_mode else 'end_pose',
- 'use_master_arm': self.use_master_arm,
- 'master_joint_actions': self.master_joint_actions,
- 'master_controller_status': self.master_controller_status,
- 'end_pose': self.pose,
- 'joint_angles': self.joint,
- 'gripper': int(self.gripper),
- 'tozero': self.tozero
- }
-
- def stop(self):
-        """Stop the controller."""
- self.running = False
- if self.thread.is_alive():
- self.thread.join()
- if self.servo_arm:
- self.servo_arm.close()
- pygame.quit()
-        print("Hybrid controller stopped")
-
- def reset(self):
-        """Reset to the initial state."""
- self.joint = self.init_joint.copy()
- self.pose = self.init_pose.copy()
- self.pose_speeds = [0.0] * 6
- self.joint_speeds = [0.0] * 6
- self.gripper_speed = 10
- self.gripper = self.min_gripper
-        print("Reset to the initial state")
-
-
-# Usage example
-if __name__ == "__main__":
-    # Initialize the RealMan robotic arm
-    arm = RoboticArm(rm_thread_mode_e.RM_TRIPLE_MODE_E)
-    # Create the arm connection
-    handle = arm.rm_create_robot_arm("192.168.3.18", 8080)
-    print(f"Arm connection ID: {handle.id}")
- init_pose = arm.rm_get_current_arm_state()[1]['pose']
-
- with open('/home/maic/LYT/lerobot/lerobot/common/robot_devices/teleop/realman_mix.yaml', "r") as file:
- config = yaml.safe_load(file)
- config['init_pose'] = init_pose
- arm_controller = HybridController(config)
- try:
- while True:
- print(arm_controller.get_action())
- time.sleep(0.1)
- except KeyboardInterrupt:
- arm_controller.stop()
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/teleop/servo_arm.yaml b/lerobot/common/robot_devices/teleop/servo_arm.yaml
deleted file mode 100644
index 06d28ee2..00000000
--- a/lerobot/common/robot_devices/teleop/servo_arm.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-port: /dev/ttyUSB0
-right_port: /dev/ttyUSB1
-baudrate: 460800
-joint_hex_data: "55 AA 02 00 00 67"
-control_hex_data: "55 AA 08 00 00 B9"
-arm_axis: 6
diff --git a/lerobot/common/robot_devices/teleop/servo_dual.yaml b/lerobot/common/robot_devices/teleop/servo_dual.yaml
deleted file mode 100644
index e90e0199..00000000
--- a/lerobot/common/robot_devices/teleop/servo_dual.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-left_port: /dev/ttyUSB0
-right_port: /dev/ttyUSB1
-baudrate: 460800
-joint_hex_data: "55 AA 02 00 00 67"
-control_hex_data: "55 AA 08 00 00 B9"
-arm_axis: 6
diff --git a/lerobot/common/robot_devices/teleop/servo_server.py b/lerobot/common/robot_devices/teleop/servo_server.py
deleted file mode 100644
index b615eb17..00000000
--- a/lerobot/common/robot_devices/teleop/servo_server.py
+++ /dev/null
@@ -1,321 +0,0 @@
-import threading
-import time
-import serial
-import binascii
-import logging
-import yaml
-from typing import Dict
-
-# logging.basicConfig(
-# level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
-# )
-
-
-class ServoArmServer:
- def __init__(self, config_file="servo_dual.yaml"):
-        """Initialize the server and create the left and right arm instances."""
- self.config_file = config_file
- self.left_servo_arm = None
- self.right_servo_arm = None
- self.running = False
- self.data_lock = threading.Lock()
- self.latest_data = {
- 'left_joint_actions': {},
- 'right_joint_actions': {},
- 'left_controller_status': {},
- 'right_controller_status': {},
- 'timestamp': time.time()
- }
-
-        # Initialize the arms
-        self._initialize_arms()
-        # Start the data-collection threads
-        self._start_data_collection()
-
-
- def _initialize_arms(self):
-        """Initialize the left and right arms."""
-        try:
-            self.left_servo_arm = ServoArm(self.config_file, "left_port")
-            logging.info("Left master arm initialized successfully")
-        except Exception as e:
-            logging.error(f"Failed to initialize the left master arm: {e}")
-
-        try:
-            self.right_servo_arm = ServoArm(self.config_file, "right_port")
-            logging.info("Right master arm initialized successfully")
-        except Exception as e:
-            logging.error(f"Failed to initialize the right master arm: {e}")
-
- def _start_data_collection(self):
-        """Start the data-collection threads."""
-        self.running = True
-
-        # Create the left-arm data-collection thread
-        self.left_data_thread = threading.Thread(target=self._left_arm_data_loop)
-        self.left_data_thread.daemon = True
-        self.left_data_thread.start()
-
-        # Create the right-arm data-collection thread
-        self.right_data_thread = threading.Thread(target=self._right_arm_data_loop)
-        self.right_data_thread.daemon = True
-        self.right_data_thread.start()
-
-        logging.info("Left and right arm data-collection threads started")
-
- def _left_arm_data_loop(self):
-        """Left-arm data-collection loop."""
- while self.running:
- try:
- left_actions = {}
- left_controller_status = {}
-
-                # Read the left-arm data
- if self.left_servo_arm and self.left_servo_arm.connected:
- left_actions = self.left_servo_arm.get_joint_actions()
- left_controller_status = self.left_servo_arm.get_controller_status()
-
-                if not self._check_val_safety(left_actions):
-                    time.sleep(0.02)
-                    continue
-                # Update the left-arm data
- with self.data_lock:
- self.latest_data['left_joint_actions'] = [left_actions[k] for k in left_actions]
- self.latest_data['left_controller_status'] = left_controller_status
-                    # Update dual_joint_actions
- if self.latest_data['right_joint_actions']:
- self.latest_data['dual_joint_actions'] = self.latest_data['left_joint_actions'] + self.latest_data['right_joint_actions']
- self.latest_data['timestamp'] = time.time()
-
-                time.sleep(0.02)  # 50 Hz sampling rate
-
- except Exception as e:
-                logging.error(f"Left-arm data-collection error: {e}")
- time.sleep(0.1)
-
- def _right_arm_data_loop(self):
-        """Right-arm data-collection loop."""
- while self.running:
- try:
- right_actions = {}
- right_controller_status = {}
-
-                # Read the right-arm data
- if self.right_servo_arm and self.right_servo_arm.connected:
- right_actions = self.right_servo_arm.get_joint_actions()
- right_controller_status = self.right_servo_arm.get_controller_status()
-
-                if not self._check_val_safety(right_actions):
-                    time.sleep(0.02)
-                    continue
-                # Update the right-arm data
- with self.data_lock:
- self.latest_data['right_joint_actions'] = [right_actions[k] for k in right_actions]
- self.latest_data['right_controller_status'] = right_controller_status
-                    # Update dual_joint_actions
- if self.latest_data['left_joint_actions']:
- self.latest_data['dual_joint_actions'] = self.latest_data['left_joint_actions'] + self.latest_data['right_joint_actions']
- self.latest_data['timestamp'] = time.time()
-
-                time.sleep(0.02)  # 50 Hz sampling rate
-
- except Exception as e:
-                logging.error(f"Right-arm data-collection error: {e}")
- time.sleep(0.1)
-
- def _check_val_safety(self, data: dict):
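-        """Sanity-check a parsed frame: expect arm_axis + 1 values (joints plus gripper), each within ±180."""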
- data = [data[k] for k in data]
- ret = True
- if len(data) != self.left_servo_arm.arm_axis + 1:
- ret = False
- for v in data:
- if v > 180 or v < -180:
- ret = False
- return ret
-
-    # ZeroRPC service methods
- def get_joint_data(self):
-        """Return the latest joint data."""
- with self.data_lock:
- return self.latest_data.copy()
-
- def get_left_joint_actions(self):
-        """Return the left-arm joint data and controller status."""
- with self.data_lock:
- return {
- 'data': self.latest_data['left_joint_actions'],
- 'controller_status': self.latest_data['left_controller_status'],
- 'timestamp': self.latest_data['timestamp']
- }
-
- def get_right_joint_actions(self):
-        """Return the right-arm joint data and controller status."""
- with self.data_lock:
- return {
- 'data': self.latest_data['right_joint_actions'],
- 'controller_status': self.latest_data['right_controller_status'],
- 'timestamp': self.latest_data['timestamp']
- }
-
- def get_connection_status(self):
-        """Return the connection status."""
- return {
- 'left_connected': self.left_servo_arm.connected if self.left_servo_arm else False,
- 'right_connected': self.right_servo_arm.connected if self.right_servo_arm else False,
- 'server_running': self.running
- }
-
- def ping(self):
-        """Connectivity test."""
- return "pong"
-
- def shutdown(self):
-        """Shut down the server."""
-        logging.info("Shutting down the server...")
- self.running = False
-
- if self.left_servo_arm:
- self.left_servo_arm.close()
- if self.right_servo_arm:
- self.right_servo_arm.close()
-
- return "Server shutdown"
-
-
-class ServoArm:
- def __init__(self, config_file="config.yaml", port_name="left_port"):
-        """Initialize the arm's serial connection and send the initial request.
-
-        Args:
-            config_file (str): Path to the configuration file.
-            port_name (str): Config key of the serial port to use (e.g. "left_port").
-        """
- self.config = self._load_config(config_file)
- self.port = self.config[port_name]
- self.baudrate = self.config["baudrate"]
- self.joint_hex_data = self.config["joint_hex_data"]
- self.control_hex_data = self.config["control_hex_data"]
- self.arm_axis = self.config.get("arm_axis", 7)
-
- try:
- self.serial_conn = serial.Serial(self.port, self.baudrate, timeout=0)
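-            # joint_hex_data is the request frame re-sent every time the joint angles
-            # are polled (see get_joint_actions).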
- self.bytes_to_send = binascii.unhexlify(self.joint_hex_data.replace(" ", ""))
- self.serial_conn.write(self.bytes_to_send)
- time.sleep(1)
- self.connected = True
-            logging.info(f"Serial connection established: {self.port}")
-        except Exception as e:
-            logging.error(f"Serial connection failed: {e}")
- self.connected = False
-
- def _load_config(self, config_file):
-        """Load the configuration file.
-
-        Args:
-            config_file (str): Path to the configuration file.
-
-        Returns:
-            dict: The configuration contents.
-        """
- try:
- with open(config_file, "r") as file:
- config = yaml.safe_load(file)
- return config
- except Exception as e:
-            logging.error(f"Failed to load the configuration file: {e}")
-            # Fall back to the default configuration
- return {
- "port": "/dev/ttyUSB0",
- "baudrate": 460800,
- "joint_hex_data": "55 AA 02 00 00 67",
- "control_hex_data": "55 AA 08 00 00 B9",
- "arm_axis": 6
- }
-
- def _bytes_to_signed_int(self, byte_data):
-        """Convert byte data to a signed integer.
-
-        Args:
-            byte_data (bytes): Raw byte data.
-
-        Returns:
-            int: The signed integer value.
-        """
- return int.from_bytes(byte_data, byteorder="little", signed=True)
-
- def _parse_joint_data(self, hex_received):
-        """Parse the received hex data and extract the joint values.
-
-        Args:
-            hex_received (str): Received data as an uppercase hex string.
-
-        Returns:
-            dict: The parsed joint data.
-        """
- logging.debug(f"hex_received: {hex_received}")
- joints = {}
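-        # Frame layout assumed by the indexing below: a 14-hex-char (7-byte) header,
-        # then one 10-hex-char (5-byte) field per joint whose first 4 bytes hold a
-        # little-endian signed value scaled by 1/10000, followed by a gripper field
-        # scaled by 1/1000.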
- for i in range(self.arm_axis):
- start = 14 + i * 10
- end = start + 8
- joint_hex = hex_received[start:end]
- joint_byte_data = bytearray.fromhex(joint_hex)
- joint_value = self._bytes_to_signed_int(joint_byte_data) / 10000.0
- joints[f"joint_{i+1}"] = joint_value #/ 180
- grasp_start = 14 + self.arm_axis*10
- grasp_hex = hex_received[grasp_start:grasp_start+8]
- grasp_byte_data = bytearray.fromhex(grasp_hex)
-        # Normalize the gripper value
- grasp_value = self._bytes_to_signed_int(grasp_byte_data)/1000
-
- joints["grasp"] = grasp_value
- return joints
-
- def _parse_controller_data(self, hex_received):
- status = {
- 'infrared': 0,
- 'button': 0
- }
- if len(hex_received) == 18:
- status['infrared'] = self._bytes_to_signed_int(bytearray.fromhex(hex_received[12:14]))
- status['button'] = self._bytes_to_signed_int(bytearray.fromhex(hex_received[14:16]))
- # print(infrared)
- return status
-
- def get_joint_actions(self):
-        """Read data from the serial port and parse joint actions.
-
-        Returns:
-            dict: Dictionary containing the parsed joint data.
-        """
- if not self.connected:
- return {}
-
- try:
- self.serial_conn.write(self.bytes_to_send)
- time.sleep(0.025)
- bytes_received = self.serial_conn.read(self.serial_conn.inWaiting())
- if len(bytes_received) == 0:
- return {}
-
- hex_received = binascii.hexlify(bytes_received).decode("utf-8").upper()
- actions = self._parse_joint_data(hex_received)
- return actions
- except Exception as e:
-            logging.error(f"Serial read error: {e}")
- return {}
-
- def get_controller_status(self):
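-        """Poll the controller status frame (infrared and button values) over serial."""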
- bytes_to_send = binascii.unhexlify(self.control_hex_data.replace(" ", ""))
- self.serial_conn.write(bytes_to_send)
- time.sleep(0.025)
- bytes_received = self.serial_conn.read(self.serial_conn.inWaiting())
- hex_received = binascii.hexlify(bytes_received).decode("utf-8").upper()
- # print("control status:", hex_received)
- status = self._parse_controller_data(hex_received)
- return status
-
- def close(self):
-        """Close the serial connection."""
-        if self.connected and hasattr(self, 'serial_conn'):
-            self.serial_conn.close()
-            self.connected = False
-            logging.info("Serial connection closed")
\ No newline at end of file
diff --git a/lerobot/common/robot_devices/utils.py b/lerobot/common/robot_devices/utils.py
deleted file mode 100644
index 837c9d2e..00000000
--- a/lerobot/common/robot_devices/utils.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import platform
-import time
-
-
-def busy_wait(seconds):
- if platform.system() == "Darwin":
- # On Mac, `time.sleep` is not accurate and we need to use this while loop trick,
- # but it consumes CPU cycles.
- # TODO(rcadene): find an alternative: from python 11, time.sleep is precise
- end_time = time.perf_counter() + seconds
- while time.perf_counter() < end_time:
- pass
- else:
- # On Linux time.sleep is accurate
- if seconds > 0:
- time.sleep(seconds)
-
-
-def safe_disconnect(func):
- # TODO(aliberts): Allow to pass custom exceptions
- # (e.g. ThreadServiceExit, KeyboardInterrupt, SystemExit, UnpluggedError, DynamixelCommError)
- def wrapper(robot, *args, **kwargs):
- try:
- return func(robot, *args, **kwargs)
- except Exception as e:
- if robot.is_connected:
- robot.disconnect()
- raise e
-
- return wrapper
-
-
-class RobotDeviceNotConnectedError(Exception):
- """Exception raised when the robot device is not connected."""
-
- def __init__(
- self, message="This robot device is not connected. Try calling `robot_device.connect()` first."
- ):
- self.message = message
- super().__init__(self.message)
-
-
-class RobotDeviceAlreadyConnectedError(Exception):
- """Exception raised when the robot device is already connected."""
-
- def __init__(
- self,
- message="This robot device is already connected. Try not calling `robot_device.connect()` twice.",
- ):
- self.message = message
- super().__init__(self.message)
diff --git a/lerobot/common/utils/import_utils.py b/lerobot/common/utils/import_utils.py
deleted file mode 100644
index cd5f8245..00000000
--- a/lerobot/common/utils/import_utils.py
+++ /dev/null
@@ -1,59 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import importlib.metadata
-import importlib.util
-import logging
-
-
-def is_package_available(pkg_name: str, return_version: bool = False) -> tuple[bool, str] | bool:
- """Copied from https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py
- Check if the package spec exists and grab its version to avoid importing a local directory.
- **Note:** this doesn't work for all packages.
- """
- package_exists = importlib.util.find_spec(pkg_name) is not None
- package_version = "N/A"
- if package_exists:
- try:
- # Primary method to get the package version
- package_version = importlib.metadata.version(pkg_name)
- except importlib.metadata.PackageNotFoundError:
- # Fallback method: Only for "torch" and versions containing "dev"
- if pkg_name == "torch":
- try:
- package = importlib.import_module(pkg_name)
- temp_version = getattr(package, "__version__", "N/A")
- # Check if the version contains "dev"
- if "dev" in temp_version:
- package_version = temp_version
- package_exists = True
- else:
- package_exists = False
- except ImportError:
- # If the package can't be imported, it's not available
- package_exists = False
- else:
- # For packages other than "torch", don't attempt the fallback and set as not available
- package_exists = False
- logging.debug(f"Detected {pkg_name} version: {package_version}")
- if return_version:
- return package_exists, package_version
- else:
- return package_exists
-
-
-_torch_available, _torch_version = is_package_available("torch", return_version=True)
-_gym_xarm_available = is_package_available("gym_xarm")
-_gym_aloha_available = is_package_available("gym_aloha")
-_gym_pusht_available = is_package_available("gym_pusht")
diff --git a/lerobot/common/utils/utils.py b/lerobot/common/utils/utils.py
deleted file mode 100644
index b72fde50..00000000
--- a/lerobot/common/utils/utils.py
+++ /dev/null
@@ -1,231 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import logging
-import os
-import os.path as osp
-import platform
-import subprocess
-from copy import copy
-from datetime import datetime, timezone
-from pathlib import Path
-
-import numpy as np
-import torch
-
-
-def none_or_int(value):
- if value == "None":
- return None
- return int(value)
-
-
-def inside_slurm():
- """Check whether the python process was launched through slurm"""
- # TODO(rcadene): return False for interactive mode `--pty bash`
- return "SLURM_JOB_ID" in os.environ
-
-
-def auto_select_torch_device() -> torch.device:
- """Tries to select automatically a torch device."""
- if torch.cuda.is_available():
- logging.info("Cuda backend detected, using cuda.")
- return torch.device("cuda")
- elif torch.backends.mps.is_available():
-        logging.info("Metal backend detected, using mps.")
- return torch.device("mps")
- else:
- logging.warning("No accelerated backend detected. Using default cpu, this will be slow.")
- return torch.device("cpu")
-
-
-# TODO(Steven): Remove log. log shouldn't be an argument, this should be handled by the logger level
-def get_safe_torch_device(try_device: str, log: bool = False) -> torch.device:
- """Given a string, return a torch.device with checks on whether the device is available."""
- try_device = str(try_device)
- match try_device:
- case "cuda":
- assert torch.cuda.is_available()
- device = torch.device("cuda")
- case "mps":
- assert torch.backends.mps.is_available()
- device = torch.device("mps")
- case "cpu":
- device = torch.device("cpu")
- if log:
- logging.warning("Using CPU, this will be slow.")
- case _:
- device = torch.device(try_device)
- if log:
- logging.warning(f"Using custom {try_device} device.")
-
- return device
-
-
-def get_safe_dtype(dtype: torch.dtype, device: str | torch.device):
- """
- mps is currently not compatible with float64
- """
- if isinstance(device, torch.device):
- device = device.type
- if device == "mps" and dtype == torch.float64:
- return torch.float32
- else:
- return dtype
-
-
-def is_torch_device_available(try_device: str) -> bool:
- try_device = str(try_device) # Ensure try_device is a string
- if try_device == "cuda":
- return torch.cuda.is_available()
- elif try_device == "mps":
- return torch.backends.mps.is_available()
- elif try_device == "cpu":
- return True
- else:
- raise ValueError(f"Unknown device {try_device}. Supported devices are: cuda, mps or cpu.")
-
-
-def is_amp_available(device: str):
- if device in ["cuda", "cpu"]:
- return True
- elif device == "mps":
- return False
- else:
-        raise ValueError(f"Unknown device '{device}'.")
-
-
-def init_logging():
- def custom_format(record):
- dt = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
- fnameline = f"{record.pathname}:{record.lineno}"
- message = f"{record.levelname} {dt} {fnameline[-15:]:>15} {record.msg}"
- return message
-
- logging.basicConfig(level=logging.INFO)
-
- for handler in logging.root.handlers[:]:
- logging.root.removeHandler(handler)
-
- formatter = logging.Formatter()
- formatter.format = custom_format
- console_handler = logging.StreamHandler()
- console_handler.setFormatter(formatter)
- logging.getLogger().addHandler(console_handler)
-
-
-def format_big_number(num, precision=0):
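-    """Format a number with K/M/B/T/Q suffixes, e.g. 1_234_000 -> "1M" at the default precision."""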
- suffixes = ["", "K", "M", "B", "T", "Q"]
- divisor = 1000.0
-
- for suffix in suffixes:
- if abs(num) < divisor:
- return f"{num:.{precision}f}{suffix}"
- num /= divisor
-
- return num
-
-
-def _relative_path_between(path1: Path, path2: Path) -> Path:
- """Returns path1 relative to path2."""
- path1 = path1.absolute()
- path2 = path2.absolute()
- try:
- return path1.relative_to(path2)
- except ValueError: # most likely because path1 is not a subpath of path2
- common_parts = Path(osp.commonpath([path1, path2])).parts
- return Path(
- "/".join([".."] * (len(path2.parts) - len(common_parts)) + list(path1.parts[len(common_parts) :]))
- )
-
-
-def print_cuda_memory_usage():
- """Use this function to locate and debug memory leak."""
- import gc
-
- gc.collect()
- # Also clear the cache if you want to fully release the memory
- torch.cuda.empty_cache()
- print("Current GPU Memory Allocated: {:.2f} MB".format(torch.cuda.memory_allocated(0) / 1024**2))
- print("Maximum GPU Memory Allocated: {:.2f} MB".format(torch.cuda.max_memory_allocated(0) / 1024**2))
- print("Current GPU Memory Reserved: {:.2f} MB".format(torch.cuda.memory_reserved(0) / 1024**2))
- print("Maximum GPU Memory Reserved: {:.2f} MB".format(torch.cuda.max_memory_reserved(0) / 1024**2))
-
-
-def capture_timestamp_utc():
- return datetime.now(timezone.utc)
-
-
-def say(text, blocking=False):
- system = platform.system()
-
- if system == "Darwin":
- cmd = ["say", text]
-
- elif system == "Linux":
- # cmd = ["spd-say", text]
- cmd = ["edge-playback", "-t", text]
- if blocking:
- cmd.append("--wait")
-
- elif system == "Windows":
- cmd = [
- "PowerShell",
- "-Command",
- "Add-Type -AssemblyName System.Speech; "
- f"(New-Object System.Speech.Synthesis.SpeechSynthesizer).Speak('{text}')",
- ]
-
- else:
- raise RuntimeError("Unsupported operating system for text-to-speech.")
-
- if blocking:
- subprocess.run(cmd, check=True)
- else:
- subprocess.Popen(cmd, creationflags=subprocess.CREATE_NO_WINDOW if system == "Windows" else 0)
-
-
-def log_say(text, play_sounds, blocking=False):
- logging.info(text)
-
- if play_sounds:
- say(text, blocking)
-
-
-def get_channel_first_image_shape(image_shape: tuple) -> tuple:
- shape = copy(image_shape)
- if shape[2] < shape[0] and shape[2] < shape[1]: # (h, w, c) -> (c, h, w)
- shape = (shape[2], shape[0], shape[1])
- elif not (shape[0] < shape[1] and shape[0] < shape[2]):
- raise ValueError(image_shape)
-
- return shape
-
-
-def has_method(cls: object, method_name: str) -> bool:
- return hasattr(cls, method_name) and callable(getattr(cls, method_name))
-
-
-def is_valid_numpy_dtype_string(dtype_str: str) -> bool:
- """
- Return True if a given string can be converted to a numpy dtype.
- """
- try:
- # Attempt to convert the string to a numpy dtype
- np.dtype(dtype_str)
- return True
- except TypeError:
- # If a TypeError is raised, the string is not a valid dtype
- return False
diff --git a/lerobot/scripts/configure_motor.py b/lerobot/scripts/configure_motor.py
deleted file mode 100644
index b0dc8a97..00000000
--- a/lerobot/scripts/configure_motor.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-This script configures a single motor at a time, setting a given ID and baudrate.
-
-Example of usage:
-```bash
-python lerobot/scripts/configure_motor.py \
- --port /dev/tty.usbmodem585A0080521 \
- --brand feetech \
- --model sts3215 \
- --baudrate 1000000 \
- --ID 1
-```
-"""
-
-import argparse
-import time
-
-
-def get_motor_bus_cls(brand: str) -> tuple:
- if brand == "feetech":
- from lerobot.common.robot_devices.motors.configs import FeetechMotorsBusConfig
- from lerobot.common.robot_devices.motors.feetech import (
- MODEL_BAUDRATE_TABLE,
- SCS_SERIES_BAUDRATE_TABLE,
- FeetechMotorsBus,
- )
-
- return FeetechMotorsBusConfig, FeetechMotorsBus, MODEL_BAUDRATE_TABLE, SCS_SERIES_BAUDRATE_TABLE
-
- elif brand == "dynamixel":
- from lerobot.common.robot_devices.motors.configs import DynamixelMotorsBusConfig
- from lerobot.common.robot_devices.motors.dynamixel import (
- MODEL_BAUDRATE_TABLE,
- X_SERIES_BAUDRATE_TABLE,
- DynamixelMotorsBus,
- )
-
- return DynamixelMotorsBusConfig, DynamixelMotorsBus, MODEL_BAUDRATE_TABLE, X_SERIES_BAUDRATE_TABLE
-
- else:
- raise ValueError(
- f"Currently we do not support this motor brand: {brand}. We currently support feetech and dynamixel motors."
- )
-
-
-def configure_motor(port, brand, model, motor_idx_des, baudrate_des):
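-    """Locate the single motor present on the bus, then write the desired baudrate and ID
-    (and, for Feetech motors, raise the acceleration limit and reset position and offset)."""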
- motor_bus_config_cls, motor_bus_cls, model_baudrate_table, series_baudrate_table = get_motor_bus_cls(
- brand
- )
-
- # Check if the provided model exists in the model_baud_rate_table
- if model not in model_baudrate_table:
- raise ValueError(
- f"Invalid model '{model}' for brand '{brand}'. Supported models: {list(model_baudrate_table.keys())}"
- )
-
- # Setup motor names, indices, and models
- motor_name = "motor"
- motor_index_arbitrary = motor_idx_des # Use the motor ID passed via argument
- motor_model = model # Use the motor model passed via argument
-
- config = motor_bus_config_cls(port=port, motors={motor_name: (motor_index_arbitrary, motor_model)})
-
- # Initialize the MotorBus with the correct port and motor configurations
- motor_bus = motor_bus_cls(config=config)
-
- # Try to connect to the motor bus and handle any connection-specific errors
- try:
- motor_bus.connect()
- print(f"Connected on port {motor_bus.port}")
- except OSError as e:
- print(f"Error occurred when connecting to the motor bus: {e}")
- return
-
- # Motor bus is connected, proceed with the rest of the operations
- try:
- print("Scanning all baudrates and motor indices")
- all_baudrates = set(series_baudrate_table.values())
- motor_index = -1 # Set the motor index to an out-of-range value.
-
- for baudrate in all_baudrates:
- motor_bus.set_bus_baudrate(baudrate)
- present_ids = motor_bus.find_motor_indices(list(range(1, 10)))
- if len(present_ids) > 1:
- raise ValueError(
- "Error: More than one motor ID detected. This script is designed to only handle one motor at a time. Please disconnect all but one motor."
- )
-
- if len(present_ids) == 1:
- if motor_index != -1:
- raise ValueError(
- "Error: More than one motor ID detected. This script is designed to only handle one motor at a time. Please disconnect all but one motor."
- )
- motor_index = present_ids[0]
- break
-
- if motor_index == -1:
- raise ValueError("No motors detected. Please ensure you have one motor connected.")
-
- print(f"Motor index found at: {motor_index}")
-
- if brand == "feetech":
- # Allows ID and BAUDRATE to be written in memory
- motor_bus.write_with_motor_ids(motor_bus.motor_models, motor_index, "Lock", 0)
-
- if baudrate != baudrate_des:
- print(f"Setting its baudrate to {baudrate_des}")
- baudrate_idx = list(series_baudrate_table.values()).index(baudrate_des)
-
- # The write can fail, so we allow retries
- motor_bus.write_with_motor_ids(motor_bus.motor_models, motor_index, "Baud_Rate", baudrate_idx)
- time.sleep(0.5)
- motor_bus.set_bus_baudrate(baudrate_des)
- present_baudrate_idx = motor_bus.read_with_motor_ids(
- motor_bus.motor_models, motor_index, "Baud_Rate", num_retry=2
- )
-
- if present_baudrate_idx != baudrate_idx:
- raise OSError("Failed to write baudrate.")
-
- print(f"Setting its index to desired index {motor_idx_des}")
- if brand == "feetech":
- motor_bus.write_with_motor_ids(motor_bus.motor_models, motor_index, "Lock", 0)
- motor_bus.write_with_motor_ids(motor_bus.motor_models, motor_index, "ID", motor_idx_des)
-
- present_idx = motor_bus.read_with_motor_ids(motor_bus.motor_models, motor_idx_des, "ID", num_retry=2)
- if present_idx != motor_idx_des:
- raise OSError("Failed to write index.")
-
- if brand == "feetech":
- # Set Maximum_Acceleration to 254 to speedup acceleration and deceleration of
- # the motors. Note: this configuration is not in the official STS3215 Memory Table
- motor_bus.write("Lock", 0)
- motor_bus.write("Maximum_Acceleration", 254)
-
- motor_bus.write("Goal_Position", 2048)
- time.sleep(4)
- print("Present Position", motor_bus.read("Present_Position"))
-
- motor_bus.write("Offset", 0)
- time.sleep(4)
- print("Offset", motor_bus.read("Offset"))
-
- except Exception as e:
- print(f"Error occurred during motor configuration: {e}")
-
- finally:
- motor_bus.disconnect()
- print("Disconnected from motor bus.")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
-    parser.add_argument("--port", type=str, required=True, help="Motors bus port (e.g. /dev/tty.usbmodem585A0080521)")
- parser.add_argument("--brand", type=str, required=True, help="Motor brand (e.g. dynamixel,feetech)")
- parser.add_argument("--model", type=str, required=True, help="Motor model (e.g. xl330-m077,sts3215)")
- parser.add_argument("--ID", type=int, required=True, help="Desired ID of the current motor (e.g. 1,2,3)")
- parser.add_argument(
- "--baudrate", type=int, default=1000000, help="Desired baudrate for the motor (default: 1000000)"
- )
- args = parser.parse_args()
-
- configure_motor(args.port, args.brand, args.model, args.ID, args.baudrate)
diff --git a/lerobot/scripts/control_robot.py b/lerobot/scripts/control_robot.py
deleted file mode 100644
index 0295071d..00000000
--- a/lerobot/scripts/control_robot.py
+++ /dev/null
@@ -1,439 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Utilities to control a robot.
-
-Useful to record a dataset, replay a recorded episode, run the policy on your robot
-and record an evaluation dataset, and to recalibrate your robot if needed.
-
-Examples of usage:
-
-- Recalibrate your robot:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=calibrate
-```
-
-- Unlimited teleoperation at highest frequency (~200 Hz is expected), to exit with CTRL+C:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --robot.cameras='{}' \
- --control.type=teleoperate
-
-# Add the cameras from the robot definition to visualize them:
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=teleoperate
-```
-
-- Unlimited teleoperation at a limited frequency of 30 Hz, to simulate data recording frequency:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=teleoperate \
- --control.fps=30
-```
-
-- Record one episode in order to test replay:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=$USER/koch_test \
- --control.num_episodes=1 \
- --control.push_to_hub=True
-```
-
-- Visualize dataset:
-```bash
-python lerobot/scripts/visualize_dataset.py \
- --repo-id $USER/koch_test \
- --episode-index 0
-```
-
-- Replay this test episode:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=replay \
- --control.fps=30 \
- --control.repo_id=$USER/koch_test \
- --control.episode=0
-```
-
-- Record a full dataset in order to train a policy, with 2 seconds of warmup,
-30 seconds of recording for each episode, and 10 seconds to reset the environment in between episodes:
-```bash
-python lerobot/scripts/control_robot.py \
-    --robot.type=so100 \
-    --control.type=record \
-    --control.fps=30 \
- --control.repo_id=$USER/koch_pick_place_lego \
- --control.num_episodes=50 \
- --control.warmup_time_s=2 \
- --control.episode_time_s=30 \
- --control.reset_time_s=10
-```
-
-- For remote controlled robots like LeKiwi, run this script on the robot edge device (e.g. RaspBerryPi):
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=lekiwi \
- --control.type=remote_robot
-```
-
-**NOTE**: You can use your keyboard to control the data recording flow.
-- Tap right arrow key '->' to exit early while recording an episode and go to resetting the environment.
-- Tap right arrow key '->' to exit early while resetting the environment and go to recording the next episode.
-- Tap left arrow key '<-' to exit early and re-record the current episode.
-- Tap escape key 'esc' to stop the data recording.
-This might require sudo permissions to allow your terminal to monitor keyboard events.
-
-**NOTE**: You can resume/continue data recording by running the same data recording command and adding `--control.resume=true`.
-
-- Train on this dataset with the ACT policy:
-```bash
-python lerobot/scripts/train.py \
- --dataset.repo_id=${HF_USER}/koch_pick_place_lego \
- --policy.type=act \
- --output_dir=outputs/train/act_koch_pick_place_lego \
- --job_name=act_koch_pick_place_lego \
- --device=cuda \
- --wandb.enable=true
-```
-
-- Run the pretrained policy on the robot:
-```bash
-python lerobot/scripts/control_robot.py \
- --robot.type=so100 \
- --control.type=record \
- --control.fps=30 \
- --control.single_task="Grasp a lego block and put it in the bin." \
- --control.repo_id=$USER/eval_act_koch_pick_place_lego \
- --control.num_episodes=10 \
- --control.warmup_time_s=2 \
- --control.episode_time_s=30 \
- --control.reset_time_s=10 \
- --control.push_to_hub=true \
- --control.policy.path=outputs/train/act_koch_pick_place_lego/checkpoints/080000/pretrained_model
-```
-"""
-
-import logging
-import os
-import time
-from dataclasses import asdict
-from pprint import pformat
-
-import rerun as rr
-
-# from safetensors.torch import load_file, save_file
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.common.policies.factory import make_policy
-from lerobot.common.robot_devices.control_configs import (
- CalibrateControlConfig,
- ControlConfig,
- ControlPipelineConfig,
- RecordControlConfig,
- RemoteRobotConfig,
- ReplayControlConfig,
- TeleoperateControlConfig,
-)
-from lerobot.common.robot_devices.control_utils import (
- control_loop,
- init_keyboard_listener,
- is_headless,
- log_control_info,
- record_episode,
- reset_environment,
- sanity_check_dataset_name,
- sanity_check_dataset_robot_compatibility,
- stop_recording,
- warmup_record,
-)
-from lerobot.common.robot_devices.robots.utils import Robot, make_robot_from_config
-from lerobot.common.robot_devices.utils import busy_wait, safe_disconnect
-from lerobot.common.utils.utils import has_method, init_logging, log_say
-from lerobot.configs import parser
-
-########################################################################################
-# Control modes
-########################################################################################
-
-
-@safe_disconnect
-def calibrate(robot: Robot, cfg: CalibrateControlConfig):
- # TODO(aliberts): move this code in robots' classes
- if robot.robot_type.startswith("stretch"):
- if not robot.is_connected:
- robot.connect()
- if not robot.is_homed():
- robot.home()
- return
-
- arms = robot.available_arms if cfg.arms is None else cfg.arms
- unknown_arms = [arm_id for arm_id in arms if arm_id not in robot.available_arms]
- available_arms_str = " ".join(robot.available_arms)
- unknown_arms_str = " ".join(unknown_arms)
-
- if arms is None or len(arms) == 0:
- raise ValueError(
- "No arm provided. Use `--arms` as argument with one or more available arms.\n"
- f"For instance, to recalibrate all arms add: `--arms {available_arms_str}`"
- )
-
- if len(unknown_arms) > 0:
- raise ValueError(
- f"Unknown arms provided ('{unknown_arms_str}'). Available arms are `{available_arms_str}`."
- )
-
- for arm_id in arms:
- arm_calib_path = robot.calibration_dir / f"{arm_id}.json"
- if arm_calib_path.exists():
- print(f"Removing '{arm_calib_path}'")
- arm_calib_path.unlink()
- else:
- print(f"Calibration file not found '{arm_calib_path}'")
-
- if robot.is_connected:
- robot.disconnect()
-
- if robot.robot_type.startswith("lekiwi") and "main_follower" in arms:
- print("Calibrating only the lekiwi follower arm 'main_follower'...")
- robot.calibrate_follower()
- return
-
- if robot.robot_type.startswith("lekiwi") and "main_leader" in arms:
- print("Calibrating only the lekiwi leader arm 'main_leader'...")
- robot.calibrate_leader()
- return
-
- # Calling `connect` automatically runs calibration
- # when the calibration file is missing
- robot.connect()
- robot.disconnect()
- print("Calibration is done! You can now teleoperate and record datasets!")
-
-
-@safe_disconnect
-def teleoperate(robot: Robot, cfg: TeleoperateControlConfig):
- control_loop(
- robot,
- control_time_s=cfg.teleop_time_s,
- fps=cfg.fps,
- teleoperate=True,
- display_data=cfg.display_data,
- )
-
-
-@safe_disconnect
-def record(
- robot: Robot,
- cfg: RecordControlConfig,
-) -> LeRobotDataset:
- # TODO(rcadene): Add option to record logs
- if cfg.resume:
- dataset = LeRobotDataset(
- cfg.repo_id,
- root=cfg.root,
- )
- if len(robot.cameras) > 0:
- dataset.start_image_writer(
- num_processes=cfg.num_image_writer_processes,
- num_threads=cfg.num_image_writer_threads_per_camera * len(robot.cameras),
- )
- sanity_check_dataset_robot_compatibility(dataset, robot, cfg.fps, cfg.video)
- else:
- # Create empty dataset or load existing saved episodes
- sanity_check_dataset_name(cfg.repo_id, cfg.policy)
- dataset = LeRobotDataset.create(
- cfg.repo_id,
- cfg.fps,
- root=cfg.root,
- robot=robot,
- use_videos=cfg.video,
- image_writer_processes=cfg.num_image_writer_processes,
- image_writer_threads=cfg.num_image_writer_threads_per_camera * len(robot.cameras),
- )
-
- # Load pretrained policy
- policy = None if cfg.policy is None else make_policy(cfg.policy, ds_meta=dataset.meta)
- if not robot.is_connected:
- robot.connect()
-
- listener, events = init_keyboard_listener()
-
- # Execute a few seconds without recording to:
- # 1. teleoperate the robot to move it in starting position if no policy provided,
- # 2. give times to the robot devices to connect and start synchronizing,
- # 3. place the cameras windows on screen
- enable_teleoperation = policy is None
- log_say("Warmup record", cfg.play_sounds)
- warmup_record(robot, events, enable_teleoperation, cfg.warmup_time_s, cfg.display_data, cfg.fps)
-
- if has_method(robot, "teleop_safety_stop"):
- robot.teleop_safety_stop()
-
- # import pdb
- # pdb.set_trace()
-
- recorded_episodes = 0
- while True:
- if recorded_episodes >= cfg.num_episodes:
- break
-
- log_say(f"Recording episode {dataset.num_episodes}", cfg.play_sounds)
- record_episode(
- robot=robot,
- dataset=dataset,
- events=events,
- episode_time_s=cfg.episode_time_s,
- display_data=cfg.display_data,
- policy=policy,
- fps=cfg.fps,
- single_task=cfg.single_task,
- )
-
- # Execute a few seconds without recording to give time to manually reset the environment
- # Current code logic doesn't allow to teleoperate during this time.
- # TODO(rcadene): add an option to enable teleoperation during reset
- # Skip reset for the last episode to be recorded
- if not events["stop_recording"] and (
- (recorded_episodes < cfg.num_episodes - 1) or events["rerecord_episode"]
- ):
- log_say("Reset the environment", cfg.play_sounds)
- reset_environment(robot, events, cfg.reset_time_s, cfg.fps)
-
- if events["rerecord_episode"]:
- log_say("Re-record episode", cfg.play_sounds)
- events["rerecord_episode"] = False
- events["exit_early"] = False
- dataset.clear_episode_buffer()
- continue
-
- dataset.save_episode()
- recorded_episodes += 1
-
- if events["stop_recording"]:
- break
-
- log_say("Stop recording", cfg.play_sounds, blocking=True)
- stop_recording(robot, listener, cfg.display_data)
-
- if cfg.push_to_hub:
- dataset.push_to_hub(tags=cfg.tags, private=cfg.private)
-
- log_say("Exiting", cfg.play_sounds)
- return dataset
-
-
-@safe_disconnect
-def replay(
- robot: Robot,
- cfg: ReplayControlConfig,
-):
- # TODO(rcadene, aliberts): refactor with control_loop, once `dataset` is an instance of LeRobotDataset
- # TODO(rcadene): Add option to record logs
-
- dataset = LeRobotDataset(cfg.repo_id, root=cfg.root, episodes=[cfg.episode])
- actions = dataset.hf_dataset.select_columns("action")
-
- if not robot.is_connected:
- robot.connect()
-
- log_say("Replaying episode", cfg.play_sounds, blocking=True)
- for idx in range(dataset.num_frames):
- start_episode_t = time.perf_counter()
-
- action = actions[idx]["action"]
- robot.send_action(action)
-
- dt_s = time.perf_counter() - start_episode_t
- busy_wait(1 / cfg.fps - dt_s)
-
- dt_s = time.perf_counter() - start_episode_t
- log_control_info(robot, dt_s, fps=cfg.fps)
-
-
-def _init_rerun(control_config: ControlConfig, session_name: str = "lerobot_control_loop") -> None:
- """Initializes the Rerun SDK for visualizing the control loop.
-
- Args:
- control_config: Configuration determining data display and robot type.
- session_name: Rerun session name. Defaults to "lerobot_control_loop".
-
- Raises:
- ValueError: If viewer IP is missing for non-remote configurations with display enabled.
- """
- if (control_config.display_data and not is_headless()) or (
- control_config.display_data and isinstance(control_config, RemoteRobotConfig)
- ):
- # Configure Rerun flush batch size default to 8KB if not set
- batch_size = os.getenv("RERUN_FLUSH_NUM_BYTES", "8000")
- os.environ["RERUN_FLUSH_NUM_BYTES"] = batch_size
-
- # Initialize Rerun based on configuration
- rr.init(session_name)
- if isinstance(control_config, RemoteRobotConfig):
- viewer_ip = control_config.viewer_ip
- viewer_port = control_config.viewer_port
- if not viewer_ip or not viewer_port:
- raise ValueError(
- "Viewer IP & Port are required for remote config. Set via config file/CLI or disable control_config.display_data."
- )
- logging.info(f"Connecting to viewer at {viewer_ip}:{viewer_port}")
- rr.connect_tcp(f"{viewer_ip}:{viewer_port}")
- else:
- # Get memory limit for rerun viewer parameters
- memory_limit = os.getenv("LEROBOT_RERUN_MEMORY_LIMIT", "10%")
- rr.spawn(memory_limit=memory_limit)
-
-
-@parser.wrap()
-def control_robot(cfg: ControlPipelineConfig):
- init_logging()
- logging.info(pformat(asdict(cfg)))
-
- robot = make_robot_from_config(cfg.robot)
-
- # TODO(Steven): Blueprint for fixed window size
-
- if isinstance(cfg.control, CalibrateControlConfig):
- calibrate(robot, cfg.control)
- elif isinstance(cfg.control, TeleoperateControlConfig):
- _init_rerun(control_config=cfg.control, session_name="lerobot_control_loop_teleop")
- teleoperate(robot, cfg.control)
- elif isinstance(cfg.control, RecordControlConfig):
- _init_rerun(control_config=cfg.control, session_name="lerobot_control_loop_record")
- record(robot, cfg.control)
- elif isinstance(cfg.control, ReplayControlConfig):
- replay(robot, cfg.control)
- elif isinstance(cfg.control, RemoteRobotConfig):
- from lerobot.common.robot_devices.robots.lekiwi_remote import run_lekiwi
-
- _init_rerun(control_config=cfg.control, session_name="lerobot_control_loop_remote")
- run_lekiwi(cfg.robot)
-
- if robot.is_connected:
- # Disconnect manually to avoid a "Core dump" during process
- # termination due to camera threads not properly exiting.
- robot.disconnect()
-
-
-if __name__ == "__main__":
- control_robot()
diff --git a/lerobot/scripts/control_sim_robot.py b/lerobot/scripts/control_sim_robot.py
deleted file mode 100644
index 5347822c..00000000
--- a/lerobot/scripts/control_sim_robot.py
+++ /dev/null
@@ -1,561 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Utilities to control a robot in simulation.
-
-Useful to record a dataset, replay a recorded episode and record an evaluation dataset.
-
-Examples of usage:
-
-
-- Unlimited teleoperation at a limited frequency of 30 Hz, to simulate data recording frequency.
- You can modify this value depending on how fast your simulation can run:
-```bash
-python lerobot/scripts/control_sim_robot.py teleoperate \
- --fps 30 \
- --robot-path lerobot/configs/robot/your_robot_config.yaml \
- --sim-config lerobot/configs/env/your_sim_config.yaml
-```
-
-- Record one episode in order to test replay:
-```bash
-python lerobot/scripts/control_sim_robot.py record \
- --robot-path lerobot/configs/robot/your_robot_config.yaml \
- --sim-config lerobot/configs/env/your_sim_config.yaml \
- --fps 30 \
- --repo-id $USER/robot_sim_test \
- --num-episodes 1 \
- --run-compute-stats 0
-```
-
-Pass --push-to-hub 1 to push the recorded dataset to the Hugging Face Hub.
-
-- Visualize dataset:
-```bash
-python lerobot/scripts/visualize_dataset.py \
- --repo-id $USER/robot_sim_test \
- --episode-index 0
-```
-
-- Replay a sequence of test episodes:
-```bash
-python lerobot/scripts/control_sim_robot.py replay \
- --robot-path lerobot/configs/robot/your_robot_config.yaml \
- --sim-config lerobot/configs/env/your_sim_config.yaml \
- --fps 30 \
- --repo-id $USER/robot_sim_test \
- --episode 0
-```
-Note: the seed is saved, so during replay the environment can be restored to the same state as during data collection.
-
-- Record a full dataset in order to train a policy,
-30 seconds of recording for each episode, and 10 seconds to reset the environment in between episodes:
-```bash
-python lerobot/scripts/control_sim_robot.py record \
- --robot-path lerobot/configs/robot/your_robot_config.yaml \
- --sim-config lerobot/configs/env/your_sim_config.yaml \
- --fps 30 \
- --repo-id $USER/robot_sim_test \
- --num-episodes 50 \
-    --episode-time-s 30
-```
-
-**NOTE**: You can use your keyboard to control the data recording flow.
-- Tap right arrow key '->' to exit early while recording an episode and go to resetting the environment.
-- Tap right arrow key '->' to exit early while resetting the environment and go to recording the next episode.
-- Tap left arrow key '<-' to exit early and re-record the current episode.
-- Tap escape key 'esc' to stop the data recording.
-This might require sudo permissions to allow your terminal to monitor keyboard events.
-
-**NOTE**: You can resume/continue data recording by running the same data recording command twice.
-"""
-
-import argparse
-import importlib
-import logging
-import time
-from pathlib import Path
-
-import cv2
-import gymnasium as gym
-import numpy as np
-import torch
-
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.common.robot_devices.control_utils import (
- init_keyboard_listener,
- init_policy,
- is_headless,
- log_control_info,
- predict_action,
- sanity_check_dataset_name,
- sanity_check_dataset_robot_compatibility,
- stop_recording,
-)
-from lerobot.common.robot_devices.robots.utils import Robot, make_robot
-from lerobot.common.robot_devices.utils import busy_wait
-from lerobot.common.utils.utils import init_hydra_config, init_logging, log_say
-
-raise NotImplementedError("This script is currently deactivated")
-
-DEFAULT_FEATURES = {
- "next.reward": {
- "dtype": "float32",
- "shape": (1,),
- "names": None,
- },
- "next.success": {
- "dtype": "bool",
- "shape": (1,),
- "names": None,
- },
- "seed": {
- "dtype": "int64",
- "shape": (1,),
- "names": None,
- },
- "timestamp": {
- "dtype": "float32",
- "shape": (1,),
- "names": None,
- },
-}
-
-
-########################################################################################
-# Utilities
-########################################################################################
-def none_or_int(value):
- if value == "None":
- return None
- return int(value)
-
-
-def init_sim_calibration(robot, cfg):
- # Constants necessary for transforming the joint pos of the real robot to the sim
- # depending on the robot description used in that sim.
- start_pos = np.array(robot.leader_arms.main.calibration["start_pos"])
- axis_directions = np.array(cfg.get("axis_directions", [1]))
- offsets = np.array(cfg.get("offsets", [0])) * np.pi
-
- return {"start_pos": start_pos, "axis_directions": axis_directions, "offsets": offsets}
-
-
-def real_positions_to_sim(real_positions, axis_directions, start_pos, offsets):
-    """Convert raw motor counts to sim joint angles: subtract the start position, convert to radians (4096 counts per revolution), apply the axis directions, then add the offsets."""
- return axis_directions * (real_positions - start_pos) * 2.0 * np.pi / 4096 + offsets
-
-
-########################################################################################
-# Control modes
-########################################################################################
-
-
-def teleoperate(env, robot: Robot, process_action_fn, teleop_time_s=None):
- env = env()
- env.reset()
- start_teleop_t = time.perf_counter()
- while True:
- leader_pos = robot.leader_arms.main.read("Present_Position")
- action = process_action_fn(leader_pos)
- env.step(np.expand_dims(action, 0))
- if teleop_time_s is not None and time.perf_counter() - start_teleop_t > teleop_time_s:
- print("Teleoperation processes finished.")
- break
-
-
-def record(
- env,
- robot: Robot,
- process_action_from_leader,
- root: Path,
- repo_id: str,
- task: str,
- fps: int | None = None,
- tags: list[str] | None = None,
- pretrained_policy_name_or_path: str = None,
- policy_overrides: bool | None = None,
- episode_time_s: int = 30,
- num_episodes: int = 50,
- video: bool = True,
- push_to_hub: bool = True,
- num_image_writer_processes: int = 0,
- num_image_writer_threads_per_camera: int = 4,
- display_cameras: bool = False,
- play_sounds: bool = True,
- resume: bool = False,
- local_files_only: bool = False,
- run_compute_stats: bool = True,
-) -> LeRobotDataset:
- # Load pretrained policy
- policy = None
- if pretrained_policy_name_or_path is not None:
- policy, policy_fps, device, use_amp = init_policy(pretrained_policy_name_or_path, policy_overrides)
-
- if fps is None:
- fps = policy_fps
- logging.warning(f"No fps provided, so using the fps from policy config ({policy_fps}).")
-
- if policy is None and process_action_from_leader is None:
- raise ValueError("Either policy or process_action_fn has to be set to enable control in sim.")
-
- # initialize listener before sim env
- listener, events = init_keyboard_listener()
-
- # create sim env
- env = env()
-
- # Create empty dataset or load existing saved episodes
- num_cameras = sum([1 if "image" in key else 0 for key in env.observation_space])
-
- # get image keys
- image_keys = [key for key in env.observation_space if "image" in key]
- state_keys_dict = env_cfg.state_keys
-
- if resume:
- dataset = LeRobotDataset(
- repo_id,
- root=root,
- local_files_only=local_files_only,
- )
- dataset.start_image_writer(
- num_processes=num_image_writer_processes,
- num_threads=num_image_writer_threads_per_camera * num_cameras,
- )
- sanity_check_dataset_robot_compatibility(dataset, robot, fps, video)
- else:
- features = DEFAULT_FEATURES
- # add image keys to features
- for key in image_keys:
- shape = env.observation_space[key].shape
- if not key.startswith("observation.image."):
- key = "observation.image." + key
- features[key] = {"dtype": "video", "names": ["channels", "height", "width"], "shape": shape}
-
- for key, obs_key in state_keys_dict.items():
- features[key] = {
- "dtype": "float32",
- "names": None,
- "shape": env.observation_space[obs_key].shape,
- }
-
- features["action"] = {"dtype": "float32", "shape": env.action_space.shape, "names": None}
-
- # Create empty dataset or load existing saved episodes
- sanity_check_dataset_name(repo_id, policy)
- dataset = LeRobotDataset.create(
- repo_id,
- fps,
- root=root,
- features=features,
- use_videos=video,
- image_writer_processes=num_image_writer_processes,
- image_writer_threads=num_image_writer_threads_per_camera * num_cameras,
- )
-
- recorded_episodes = 0
- while True:
- log_say(f"Recording episode {dataset.num_episodes}", play_sounds)
-
- if events is None:
- events = {"exit_early": False}
-
- if episode_time_s is None:
- episode_time_s = float("inf")
-
- timestamp = 0
- start_episode_t = time.perf_counter()
-
- seed = np.random.randint(0, 1e5)
- observation, info = env.reset(seed=seed)
-
- while timestamp < episode_time_s:
- start_loop_t = time.perf_counter()
-
- if policy is not None:
- action = predict_action(observation, policy, device, use_amp)
- else:
- leader_pos = robot.leader_arms.main.read("Present_Position")
- action = process_action_from_leader(leader_pos)
-
- observation, reward, terminated, _, info = env.step(action)
-
- success = info.get("is_success", False)
- env_timestamp = info.get("timestamp", dataset.episode_buffer["size"] / fps)
-
- frame = {
- "action": torch.from_numpy(action),
- "next.reward": reward,
- "next.success": success,
- "seed": seed,
- "timestamp": env_timestamp,
- }
-
- for key in image_keys:
- if not key.startswith("observation.image"):
- frame["observation.image." + key] = observation[key]
- else:
- frame[key] = observation[key]
-
- for key, obs_key in state_keys_dict.items():
- frame[key] = torch.from_numpy(observation[obs_key])
-
- dataset.add_frame(frame)
-
- if display_cameras and not is_headless():
- for key in image_keys:
- cv2.imshow(key, cv2.cvtColor(observation[key], cv2.COLOR_RGB2BGR))
- cv2.waitKey(1)
-
- if fps is not None:
- dt_s = time.perf_counter() - start_loop_t
- busy_wait(1 / fps - dt_s)
-
- dt_s = time.perf_counter() - start_loop_t
- log_control_info(robot, dt_s, fps=fps)
-
- timestamp = time.perf_counter() - start_episode_t
- if events["exit_early"] or terminated:
- events["exit_early"] = False
- break
-
- if events["rerecord_episode"]:
- log_say("Re-record episode", play_sounds)
- events["rerecord_episode"] = False
- events["exit_early"] = False
- dataset.clear_episode_buffer()
- continue
-
- dataset.save_episode(task=task)
- recorded_episodes += 1
-
- if events["stop_recording"] or recorded_episodes >= num_episodes:
- break
- else:
- logging.info("Waiting for a few seconds before starting next episode recording...")
- busy_wait(3)
-
- log_say("Stop recording", play_sounds, blocking=True)
- stop_recording(robot, listener, display_cameras)
-
- if run_compute_stats:
- logging.info("Computing dataset statistics")
- dataset.consolidate(run_compute_stats)
-
- if push_to_hub:
- dataset.push_to_hub(tags=tags)
-
- log_say("Exiting", play_sounds)
- return dataset
-
-
-def replay(
- env, root: Path, repo_id: str, episode: int, fps: int | None = None, local_files_only: bool = True
-):
- env = env()
-
- local_dir = Path(root) / repo_id
- if not local_dir.exists():
- raise ValueError(local_dir)
-
- dataset = LeRobotDataset(repo_id, root=root, local_files_only=local_files_only)
- items = dataset.hf_dataset.select_columns("action")
- seeds = dataset.hf_dataset.select_columns("seed")["seed"]
-
- from_idx = dataset.episode_data_index["from"][episode].item()
- to_idx = dataset.episode_data_index["to"][episode].item()
- env.reset(seed=seeds[from_idx].item())
- logging.info("Replaying episode")
- log_say("Replaying episode", play_sounds=True)
- for idx in range(from_idx, to_idx):
- start_episode_t = time.perf_counter()
- action = items[idx]["action"]
- env.step(action.unsqueeze(0).numpy())
- dt_s = time.perf_counter() - start_episode_t
- busy_wait(1 / fps - dt_s)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- subparsers = parser.add_subparsers(dest="mode", required=True)
-
- # Set common options for all the subparsers
- base_parser = argparse.ArgumentParser(add_help=False)
- base_parser.add_argument(
- "--robot-path",
- type=str,
- default="lerobot/configs/robot/koch.yaml",
- help="Path to robot yaml file used to instantiate the robot using `make_robot` factory function.",
- )
-
- base_parser.add_argument(
- "--sim-config",
- help="Path to a yaml config you want to use for initializing a sim environment based on gym ",
- )
-
- parser_record = subparsers.add_parser("teleoperate", parents=[base_parser])
-
- parser_record = subparsers.add_parser("record", parents=[base_parser])
- parser_record.add_argument(
- "--fps", type=none_or_int, default=None, help="Frames per second (set to None to disable)"
- )
- parser_record.add_argument(
- "--root",
- type=Path,
- default=None,
- help="Root directory where the dataset will be stored locally at '{root}/{repo_id}' (e.g. 'data/hf_username/dataset_name').",
- )
- parser_record.add_argument(
- "--repo-id",
- type=str,
- default="lerobot/test",
- help="Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).",
- )
- parser_record.add_argument(
- "--episode-time-s",
- type=int,
- default=60,
- help="Number of seconds for data recording for each episode.",
- )
- parser_record.add_argument(
- "--task",
- type=str,
- required=True,
- help="A description of the task preformed during recording that can be used as a language instruction.",
- )
- parser_record.add_argument("--num-episodes", type=int, default=50, help="Number of episodes to record.")
- parser_record.add_argument(
- "--run-compute-stats",
- type=int,
- default=1,
- help="By default, run the computation of the data statistics at the end of data collection. Compute intensive and not required to just replay an episode.",
- )
- parser_record.add_argument(
- "--push-to-hub",
- type=int,
- default=1,
- help="Upload dataset to Hugging Face hub.",
- )
- parser_record.add_argument(
- "--tags",
- type=str,
- nargs="*",
- help="Add tags to your dataset on the hub.",
- )
- parser_record.add_argument(
- "--num-image-writer-processes",
- type=int,
- default=0,
- help=(
- "Number of subprocesses handling the saving of frames as PNG. Set to 0 to use threads only; "
- "set to ≥1 to use subprocesses, each using threads to write images. The best number of processes "
- "and threads depends on your system. We recommend 4 threads per camera with 0 processes. "
- "If fps is unstable, adjust the thread count. If still unstable, try using 1 or more subprocesses."
- ),
- )
- parser_record.add_argument(
- "--num-image-writer-threads-per-camera",
- type=int,
- default=4,
- help=(
- "Number of threads writing the frames as png images on disk, per camera. "
- "Too much threads might cause unstable teleoperation fps due to main thread being blocked. "
- "Not enough threads might cause low camera fps."
- ),
- )
- parser_record.add_argument(
- "--display-cameras",
- type=int,
- default=0,
- help="Visualize image observations with opencv.",
- )
- parser_record.add_argument(
- "--resume",
- type=int,
- default=0,
- help="Resume recording on an existing dataset.",
- )
- parser_replay = subparsers.add_parser("replay", parents=[base_parser])
- parser_replay.add_argument(
- "--fps", type=none_or_int, default=None, help="Frames per second (set to None to disable)"
- )
- parser_replay.add_argument(
- "--root",
- type=Path,
- default=None,
- help="Root directory where the dataset will be stored locally (e.g. 'data/hf_username/dataset_name'). By default, stored in cache folder.",
- )
- parser_replay.add_argument(
- "--repo-id",
- type=str,
- default="lerobot/test",
- help="Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).",
- )
- parser_replay.add_argument("--episode", type=int, default=0, help="Index of the episodes to replay.")
-
- args = parser.parse_args()
-
- init_logging()
-
- control_mode = args.mode
- robot_path = args.robot_path
- env_config_path = args.sim_config
- kwargs = vars(args)
- del kwargs["mode"]
- del kwargs["robot_path"]
- del kwargs["sim_config"]
-
- # make gym env
- env_cfg = init_hydra_config(env_config_path)
- importlib.import_module(f"gym_{env_cfg.env.type}")
-
- def env_constructor():
- return gym.make(env_cfg.env.handle, disable_env_checker=True, **env_cfg.env.gym)
-
- robot = None
- process_leader_actions_fn = None
-
- if control_mode in ["teleoperate", "record"]:
- # make robot
- robot_overrides = ["~cameras", "~follower_arms"]
- # TODO(rcadene): remove
- robot_cfg = init_hydra_config(robot_path, robot_overrides)
- robot = make_robot(robot_cfg)
- robot.connect()
-
- calib_kwgs = init_sim_calibration(robot, env_cfg.calibration)
-
- def process_leader_actions_fn(action):
- return real_positions_to_sim(action, **calib_kwgs)
-
- robot.leader_arms.main.calibration = None
-
- if control_mode == "teleoperate":
- teleoperate(env_constructor, robot, process_leader_actions_fn)
-
- elif control_mode == "record":
- record(env_constructor, robot, process_leader_actions_fn, **kwargs)
-
- elif control_mode == "replay":
- replay(env_constructor, **kwargs)
-
- else:
- raise ValueError(
- f"Invalid control mode: '{control_mode}', only valid modes are teleoperate, record and replay."
- )
-
- if robot and robot.is_connected:
- # Disconnect manually to avoid a "Core dump" during process
- # termination due to camera threads not properly exiting.
- robot.disconnect()
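
The `real_positions_to_sim` helper above is the whole calibration story for sim teleoperation: subtract the leader arm's start position from the raw motor counts, convert to radians assuming 4096 counts per full turn (the resolution hard-coded in the script), flip any mirrored axes, and add the configured offsets. A minimal, self-contained sketch of that conversion on made-up values (only `numpy` is required; the joint values below are purely illustrative):

```python
import numpy as np

def real_positions_to_sim(real_positions, axis_directions, start_pos, offsets):
    # counts -> subtract start position -> radians (4096 counts per turn) -> align axes -> add offset
    return axis_directions * (real_positions - start_pos) * 2.0 * np.pi / 4096 + offsets

# Illustrative values for a 2-joint arm: both encoders read 1024 counts past their start position.
real_positions = np.array([3072.0, 1024.0])
start_pos = np.array([2048.0, 0.0])
axis_directions = np.array([1, -1])   # second joint is mirrored in the sim
offsets = np.array([0.0, np.pi / 2])  # second sim joint is shifted by a quarter turn

print(real_positions_to_sim(real_positions, axis_directions, start_pos, offsets))
# [1.57079633 0.        ]
```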
diff --git a/lerobot/scripts/display_sys_info.py b/lerobot/scripts/display_sys_info.py
deleted file mode 100644
index 4d3cc291..00000000
--- a/lerobot/scripts/display_sys_info.py
+++ /dev/null
@@ -1,90 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Use this script to get a quick summary of your system config.
-It should be able to run without any of LeRobot's dependencies or LeRobot itself installed.
-"""
-
-import platform
-
-HAS_HF_HUB = True
-HAS_HF_DATASETS = True
-HAS_NP = True
-HAS_TORCH = True
-HAS_LEROBOT = True
-
-try:
- import huggingface_hub
-except ImportError:
- HAS_HF_HUB = False
-
-try:
- import datasets
-except ImportError:
- HAS_HF_DATASETS = False
-
-try:
- import numpy as np
-except ImportError:
- HAS_NP = False
-
-try:
- import torch
-except ImportError:
- HAS_TORCH = False
-
-try:
- import lerobot
-except ImportError:
- HAS_LEROBOT = False
-
-
-lerobot_version = lerobot.__version__ if HAS_LEROBOT else "N/A"
-hf_hub_version = huggingface_hub.__version__ if HAS_HF_HUB else "N/A"
-hf_datasets_version = datasets.__version__ if HAS_HF_DATASETS else "N/A"
-np_version = np.__version__ if HAS_NP else "N/A"
-
-torch_version = torch.__version__ if HAS_TORCH else "N/A"
-torch_cuda_available = torch.cuda.is_available() if HAS_TORCH else "N/A"
-cuda_version = torch._C._cuda_getCompiledVersion() if HAS_TORCH and torch.version.cuda is not None else "N/A"
-
-
-# TODO(aliberts): refactor into an actual command `lerobot env`
-def display_sys_info() -> dict:
- """Run this to get basic system info to help for tracking issues & bugs."""
- info = {
- "`lerobot` version": lerobot_version,
- "Platform": platform.platform(),
- "Python version": platform.python_version(),
- "Huggingface_hub version": hf_hub_version,
- "Dataset version": hf_datasets_version,
- "Numpy version": np_version,
- "PyTorch version (GPU?)": f"{torch_version} ({torch_cuda_available})",
- "Cuda version": cuda_version,
- "Using GPU in script?": "",
- # "Using distributed or parallel set-up in script?": "",
- }
- print("\nCopy-and-paste the text below in your GitHub issue and FILL OUT the last point.\n")
- print(format_dict(info))
- return info
-
-
-def format_dict(d: dict) -> str:
- return "\n".join([f"- {prop}: {val}" for prop, val in d.items()]) + "\n"
-
-
-if __name__ == "__main__":
- display_sys_info()
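
The deleted `display_sys_info.py` only probes which packages are importable and prints their versions in a copy-pastable list. If a quick stand-in is needed, something similar can be sketched with the standard library alone (the distribution names below are simply the ones the script checked; adapt as needed):

```python
import platform
from importlib import metadata

def quick_sys_info(packages=("lerobot", "huggingface-hub", "datasets", "numpy", "torch")) -> str:
    lines = [
        f"- Platform: {platform.platform()}",
        f"- Python version: {platform.python_version()}",
    ]
    for name in packages:
        try:
            version = metadata.version(name)  # raises if the distribution is not installed
        except metadata.PackageNotFoundError:
            version = "N/A"
        lines.append(f"- {name} version: {version}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(quick_sys_info())
```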
diff --git a/lerobot/scripts/push_pretrained.py b/lerobot/scripts/push_pretrained.py
deleted file mode 100644
index e3c683f9..00000000
--- a/lerobot/scripts/push_pretrained.py
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Once you have trained a policy with our training script (lerobot/scripts/train.py), use this script to push it
-to the hub.
-
-Example:
-
-```bash
-python lerobot/scripts/push_pretrained.py \
- --pretrained_path=outputs/train/act_aloha_sim_transfer_cube_human/checkpoints/last/pretrained_model \
- --repo_id=lerobot/act_aloha_sim_transfer_cube_human
-```
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-
-import draccus
-from huggingface_hub import HfApi
-
-
-@dataclass
-class PushPreTrainedConfig:
- pretrained_path: Path
- repo_id: str
- branch: str | None = None
- private: bool = False
- exist_ok: bool = False
-
-
-@draccus.wrap()
-def main(cfg: PushPreTrainedConfig):
- hub_api = HfApi()
- hub_api.create_repo(
- repo_id=cfg.repo_id,
- private=cfg.private,
- repo_type="model",
- exist_ok=cfg.exist_ok,
- )
- if cfg.branch:
- hub_api.create_branch(
- repo_id=cfg.repo_id,
- branch=cfg.branch,
- repo_type="model",
- exist_ok=cfg.exist_ok,
- )
-
- hub_api.upload_folder(
- repo_id=cfg.repo_id,
- folder_path=cfg.pretrained_path,
- repo_type="model",
- revision=cfg.branch,
- )
-
-
-if __name__ == "__main__":
- main()
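
`push_pretrained.py` is a thin wrapper around `HfApi.create_repo` and `HfApi.upload_folder`. To pull such a checkpoint back onto another machine, the counterpart is `huggingface_hub.snapshot_download`; a small sketch, reusing the repo id from the docstring example above (any model repo you have access to works the same way):

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the pretrained_model folder that push_pretrained.py uploaded.
local_dir = snapshot_download(
    repo_id="lerobot/act_aloha_sim_transfer_cube_human",
    repo_type="model",
)
print("Checkpoint files:", sorted(p.name for p in Path(local_dir).iterdir()))
```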
diff --git a/lerobot/scripts/train.py b/lerobot/scripts/train.py
deleted file mode 100644
index 0de247be..00000000
--- a/lerobot/scripts/train.py
+++ /dev/null
@@ -1,288 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import logging
-import time
-from contextlib import nullcontext
-from pprint import pformat
-from typing import Any
-
-import torch
-from termcolor import colored
-from torch.amp import GradScaler
-from torch.optim import Optimizer
-
-from lerobot.common.datasets.factory import make_dataset
-from lerobot.common.datasets.sampler import EpisodeAwareSampler
-from lerobot.common.datasets.utils import cycle
-from lerobot.common.envs.factory import make_env
-from lerobot.common.optim.factory import make_optimizer_and_scheduler
-from lerobot.common.policies.factory import make_policy
-from lerobot.common.policies.pretrained import PreTrainedPolicy
-from lerobot.common.policies.utils import get_device_from_parameters
-from lerobot.common.utils.logging_utils import AverageMeter, MetricsTracker
-from lerobot.common.utils.random_utils import set_seed
-from lerobot.common.utils.train_utils import (
- get_step_checkpoint_dir,
- get_step_identifier,
- load_training_state,
- save_checkpoint,
- update_last_checkpoint,
-)
-from lerobot.common.utils.utils import (
- format_big_number,
- get_safe_torch_device,
- has_method,
- init_logging,
-)
-from lerobot.common.utils.wandb_utils import WandBLogger
-from lerobot.configs import parser
-from lerobot.configs.train import TrainPipelineConfig
-from lerobot.scripts.eval import eval_policy
-
-
-def update_policy(
- train_metrics: MetricsTracker,
- policy: PreTrainedPolicy,
- batch: Any,
- optimizer: Optimizer,
- grad_clip_norm: float,
- grad_scaler: GradScaler,
- lr_scheduler=None,
- use_amp: bool = False,
- lock=None,
-) -> tuple[MetricsTracker, dict]:
- start_time = time.perf_counter()
- device = get_device_from_parameters(policy)
- policy.train()
- with torch.autocast(device_type=device.type) if use_amp else nullcontext():
- loss, output_dict = policy.forward(batch)
- # TODO(rcadene): policy.unnormalize_outputs(out_dict)
- grad_scaler.scale(loss).backward()
-
- # Unscale the gradient of the optimizer's assigned params in-place **prior to gradient clipping**.
- grad_scaler.unscale_(optimizer)
-
- grad_norm = torch.nn.utils.clip_grad_norm_(
- policy.parameters(),
- grad_clip_norm,
- error_if_nonfinite=False,
- )
-
- # Optimizer's gradients are already unscaled, so scaler.step does not unscale them,
- # although it still skips optimizer.step() if the gradients contain infs or NaNs.
- with lock if lock is not None else nullcontext():
- grad_scaler.step(optimizer)
- # Updates the scale for next iteration.
- grad_scaler.update()
-
- optimizer.zero_grad()
-
- # Step through pytorch scheduler at every batch instead of epoch
- if lr_scheduler is not None:
- lr_scheduler.step()
-
- if has_method(policy, "update"):
- # To possibly update an internal buffer (for instance an Exponential Moving Average like in TDMPC).
- policy.update()
-
- train_metrics.loss = loss.item()
- train_metrics.grad_norm = grad_norm.item()
- train_metrics.lr = optimizer.param_groups[0]["lr"]
- train_metrics.update_s = time.perf_counter() - start_time
- return train_metrics, output_dict
-
-
-@parser.wrap()
-def train(cfg: TrainPipelineConfig):
- cfg.validate()
- logging.info(pformat(cfg.to_dict()))
-
- if cfg.wandb.enable and cfg.wandb.project:
- wandb_logger = WandBLogger(cfg)
- else:
- wandb_logger = None
- logging.info(colored("Logs will be saved locally.", "yellow", attrs=["bold"]))
-
- if cfg.seed is not None:
- set_seed(cfg.seed)
-
- # Check device is available
- device = get_safe_torch_device(cfg.policy.device, log=True)
- torch.backends.cudnn.benchmark = True
- torch.backends.cuda.matmul.allow_tf32 = True
-
- logging.info("Creating dataset")
- dataset = make_dataset(cfg)
-
- # Create environment used for evaluating checkpoints during training on simulation data.
- # On real-world data, there is no need to create an environment, as evaluations are done outside
- # train.py, using eval.py instead, with the gym_dora environment and dora-rs.
- eval_env = None
- if cfg.eval_freq > 0 and cfg.env is not None:
- logging.info("Creating env")
- eval_env = make_env(cfg.env, n_envs=cfg.eval.batch_size, use_async_envs=cfg.eval.use_async_envs)
-
- logging.info("Creating policy")
- policy = make_policy(
- cfg=cfg.policy,
- ds_meta=dataset.meta,
- )
-
- logging.info("Creating optimizer and scheduler")
- optimizer, lr_scheduler = make_optimizer_and_scheduler(cfg, policy)
- grad_scaler = GradScaler(device.type, enabled=cfg.policy.use_amp)
-
- step = 0 # number of policy updates (forward + backward + optim)
-
- if cfg.resume:
- step, optimizer, lr_scheduler = load_training_state(cfg.checkpoint_path, optimizer, lr_scheduler)
-
- num_learnable_params = sum(p.numel() for p in policy.parameters() if p.requires_grad)
- num_total_params = sum(p.numel() for p in policy.parameters())
-
- logging.info(colored("Output dir:", "yellow", attrs=["bold"]) + f" {cfg.output_dir}")
- if cfg.env is not None:
- logging.info(f"{cfg.env.task=}")
- logging.info(f"{cfg.steps=} ({format_big_number(cfg.steps)})")
- logging.info(f"{dataset.num_frames=} ({format_big_number(dataset.num_frames)})")
- logging.info(f"{dataset.num_episodes=}")
- logging.info(f"{num_learnable_params=} ({format_big_number(num_learnable_params)})")
- logging.info(f"{num_total_params=} ({format_big_number(num_total_params)})")
-
- # create dataloader for offline training
- if hasattr(cfg.policy, "drop_n_last_frames"):
- shuffle = False
- sampler = EpisodeAwareSampler(
- dataset.episode_data_index,
- drop_n_last_frames=cfg.policy.drop_n_last_frames,
- shuffle=True,
- )
- else:
- shuffle = True
- sampler = None
-
- dataloader = torch.utils.data.DataLoader(
- dataset,
- num_workers=cfg.num_workers,
- batch_size=cfg.batch_size,
- shuffle=shuffle,
- sampler=sampler,
- pin_memory=device.type != "cpu",
- drop_last=False,
- )
- dl_iter = cycle(dataloader)
-
- policy.train()
-
- train_metrics = {
- "loss": AverageMeter("loss", ":.3f"),
- "grad_norm": AverageMeter("grdn", ":.3f"),
- "lr": AverageMeter("lr", ":0.1e"),
- "update_s": AverageMeter("updt_s", ":.3f"),
- "dataloading_s": AverageMeter("data_s", ":.3f"),
- }
-
- train_tracker = MetricsTracker(
- cfg.batch_size, dataset.num_frames, dataset.num_episodes, train_metrics, initial_step=step
- )
-
- logging.info("Start offline training on a fixed dataset")
- for _ in range(step, cfg.steps):
- start_time = time.perf_counter()
- batch = next(dl_iter)
- train_tracker.dataloading_s = time.perf_counter() - start_time
-
- for key in batch:
- if isinstance(batch[key], torch.Tensor):
- batch[key] = batch[key].to(device, non_blocking=True)
-
- train_tracker, output_dict = update_policy(
- train_tracker,
- policy,
- batch,
- optimizer,
- cfg.optimizer.grad_clip_norm,
- grad_scaler=grad_scaler,
- lr_scheduler=lr_scheduler,
- use_amp=cfg.policy.use_amp,
- )
-
- # Note: eval and checkpoint happens *after* the `step`th training update has completed, so we
- # increment `step` here.
- step += 1
- train_tracker.step()
- is_log_step = cfg.log_freq > 0 and step % cfg.log_freq == 0
- is_saving_step = step % cfg.save_freq == 0 or step == cfg.steps
- is_eval_step = cfg.eval_freq > 0 and step % cfg.eval_freq == 0
-
- if is_log_step:
- logging.info(train_tracker)
- if wandb_logger:
- wandb_log_dict = train_tracker.to_dict()
- if output_dict:
- wandb_log_dict.update(output_dict)
- wandb_logger.log_dict(wandb_log_dict, step)
- train_tracker.reset_averages()
-
- if cfg.save_checkpoint and is_saving_step:
- logging.info(f"Checkpoint policy after step {step}")
- checkpoint_dir = get_step_checkpoint_dir(cfg.output_dir, cfg.steps, step)
- save_checkpoint(checkpoint_dir, step, cfg, policy, optimizer, lr_scheduler)
- update_last_checkpoint(checkpoint_dir)
- if wandb_logger:
- wandb_logger.log_policy(checkpoint_dir)
-
- if cfg.env and is_eval_step:
- step_id = get_step_identifier(step, cfg.steps)
- logging.info(f"Eval policy at step {step}")
- with (
- torch.no_grad(),
- torch.autocast(device_type=device.type) if cfg.policy.use_amp else nullcontext(),
- ):
- eval_info = eval_policy(
- eval_env,
- policy,
- cfg.eval.n_episodes,
- videos_dir=cfg.output_dir / "eval" / f"videos_step_{step_id}",
- max_episodes_rendered=4,
- start_seed=cfg.seed,
- )
-
- eval_metrics = {
- "avg_sum_reward": AverageMeter("∑rwrd", ":.3f"),
- "pc_success": AverageMeter("success", ":.1f"),
- "eval_s": AverageMeter("eval_s", ":.3f"),
- }
- eval_tracker = MetricsTracker(
- cfg.batch_size, dataset.num_frames, dataset.num_episodes, eval_metrics, initial_step=step
- )
- eval_tracker.eval_s = eval_info["aggregated"].pop("eval_s")
- eval_tracker.avg_sum_reward = eval_info["aggregated"].pop("avg_sum_reward")
- eval_tracker.pc_success = eval_info["aggregated"].pop("pc_success")
- logging.info(eval_tracker)
- if wandb_logger:
- wandb_log_dict = {**eval_tracker.to_dict(), **eval_info}
- wandb_logger.log_dict(wandb_log_dict, step, mode="eval")
- wandb_logger.log_video(eval_info["video_paths"][0], step, mode="eval")
-
- if eval_env:
- eval_env.close()
- logging.info("End of training")
-
-
-if __name__ == "__main__":
- init_logging()
- train()
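
The core of `update_policy` above is the standard PyTorch mixed-precision recipe: forward under `torch.autocast`, backward on the scaled loss, unscale before clipping so the gradient norm is measured on the true gradients, then let the scaler step and update. Stripped of the LeRobot-specific pieces, the pattern looks roughly like this (a generic sketch with a toy model, not the project's API; it assumes a CUDA device):

```python
import torch
from torch.amp import GradScaler

def amp_update(model, batch, optimizer, scaler, grad_clip_norm=1.0, device_type="cuda"):
    """One mixed-precision update: forward, scaled backward, unscale, clip, step."""
    model.train()
    with torch.autocast(device_type=device_type):
        loss = model(batch).mean()  # placeholder loss; a real policy computes its own
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)  # unscale *before* clipping so the norm is correct
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip_norm)
    scaler.step(optimizer)  # internally skipped if gradients contain inf/NaN
    scaler.update()
    optimizer.zero_grad()
    return loss.item(), grad_norm.item()

model = torch.nn.Linear(8, 1).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = GradScaler("cuda")
batch = torch.randn(32, 8, device="cuda")
print(amp_update(model, batch, optimizer, scaler))
```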
diff --git a/lerobot/scripts/visualize_dataset_html.py b/lerobot/scripts/visualize_dataset_html.py
deleted file mode 100644
index d0c8f1ac..00000000
--- a/lerobot/scripts/visualize_dataset_html.py
+++ /dev/null
@@ -1,482 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Visualize data of **all** frames of any episode of a dataset of type LeRobotDataset.
-
-Note: The last frame of the episode doesnt always correspond to a final state.
-That's because our datasets are composed of transition from state to state up to
-the antepenultimate state associated to the ultimate action to arrive in the final state.
-However, there might not be a transition from a final state to another state.
-
-Note: This script aims to visualize the data used to train the neural networks.
-~What you see is what you get~. When visualizing image modality, it is often expected to observe
-lossly compression artifacts since these images have been decoded from compressed mp4 videos to
-save disk space. The compression factor applied has been tuned to not affect success rate.
-
-Example of usage:
-
-- Visualize data stored on a local machine:
-```bash
-local$ python lerobot/scripts/visualize_dataset_html.py \
- --repo-id lerobot/pusht
-
-local$ open http://localhost:9090
-```
-
-- Visualize data stored on a distant machine with a local viewer:
-```bash
-distant$ python lerobot/scripts/visualize_dataset_html.py \
- --repo-id lerobot/pusht
-
-local$ ssh -L 9090:localhost:9090 distant # create a ssh tunnel
-local$ open http://localhost:9090
-```
-
-- Select episodes to visualize:
-```bash
-python lerobot/scripts/visualize_dataset_html.py \
- --repo-id lerobot/pusht \
- --episodes 7 3 5 1 4
-```
-"""
-
-import argparse
-import csv
-import json
-import logging
-import re
-import shutil
-import tempfile
-from io import StringIO
-from pathlib import Path
-
-import numpy as np
-import pandas as pd
-import requests
-from flask import Flask, redirect, render_template, request, url_for
-
-from lerobot import available_datasets
-from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.common.datasets.utils import IterableNamespace
-from lerobot.common.utils.utils import init_logging
-
-
-def run_server(
- dataset: LeRobotDataset | IterableNamespace | None,
- episodes: list[int] | None,
- host: str,
- port: str,
- static_folder: Path,
- template_folder: Path,
-):
- app = Flask(__name__, static_folder=static_folder.resolve(), template_folder=template_folder.resolve())
- app.config["SEND_FILE_MAX_AGE_DEFAULT"] = 0 # specifying not to cache
-
- @app.route("/")
- def homepage(dataset=dataset):
- if dataset:
- dataset_namespace, dataset_name = dataset.repo_id.split("/")
- return redirect(
- url_for(
- "show_episode",
- dataset_namespace=dataset_namespace,
- dataset_name=dataset_name,
- episode_id=0,
- )
- )
-
- dataset_param, episode_param = None, None
- all_params = request.args
- if "dataset" in all_params:
- dataset_param = all_params["dataset"]
- if "episode" in all_params:
- episode_param = int(all_params["episode"])
-
- if dataset_param:
- dataset_namespace, dataset_name = dataset_param.split("/")
- return redirect(
- url_for(
- "show_episode",
- dataset_namespace=dataset_namespace,
- dataset_name=dataset_name,
- episode_id=episode_param if episode_param is not None else 0,
- )
- )
-
- featured_datasets = [
- "lerobot/aloha_static_cups_open",
- "lerobot/columbia_cairlab_pusht_real",
- "lerobot/taco_play",
- ]
- return render_template(
- "visualize_dataset_homepage.html",
- featured_datasets=featured_datasets,
- lerobot_datasets=available_datasets,
- )
-
- @app.route("//")
- def show_first_episode(dataset_namespace, dataset_name):
- first_episode_id = 0
- return redirect(
- url_for(
- "show_episode",
- dataset_namespace=dataset_namespace,
- dataset_name=dataset_name,
- episode_id=first_episode_id,
- )
- )
-
- @app.route("///episode_")
- def show_episode(dataset_namespace, dataset_name, episode_id, dataset=dataset, episodes=episodes):
- repo_id = f"{dataset_namespace}/{dataset_name}"
- try:
- if dataset is None:
- dataset = get_dataset_info(repo_id)
- except FileNotFoundError:
- return (
- "Make sure to convert your LeRobotDataset to v2 & above. See how to convert your dataset at https://github.com/huggingface/lerobot/pull/461",
- 400,
- )
- dataset_version = (
- str(dataset.meta._version) if isinstance(dataset, LeRobotDataset) else dataset.codebase_version
- )
- match = re.search(r"v(\d+)\.", dataset_version)
- if match:
- major_version = int(match.group(1))
- if major_version < 2:
- return "Make sure to convert your LeRobotDataset to v2 & above."
-
- episode_data_csv_str, columns, ignored_columns = get_episode_data(dataset, episode_id)
- dataset_info = {
- "repo_id": f"{dataset_namespace}/{dataset_name}",
- "num_samples": dataset.num_frames
- if isinstance(dataset, LeRobotDataset)
- else dataset.total_frames,
- "num_episodes": dataset.num_episodes
- if isinstance(dataset, LeRobotDataset)
- else dataset.total_episodes,
- "fps": dataset.fps,
- }
- if isinstance(dataset, LeRobotDataset):
- video_paths = [
- dataset.meta.get_video_file_path(episode_id, key) for key in dataset.meta.video_keys
- ]
- videos_info = [
- {
- "url": url_for("static", filename=str(video_path).replace("\\", "/")),
- "filename": video_path.parent.name,
- }
- for video_path in video_paths
- ]
- tasks = dataset.meta.episodes[episode_id]["tasks"]
- else:
- video_keys = [key for key, ft in dataset.features.items() if ft["dtype"] == "video"]
- videos_info = [
- {
- "url": f"https://huggingface.co/datasets/{repo_id}/resolve/main/"
- + dataset.video_path.format(
- episode_chunk=int(episode_id) // dataset.chunks_size,
- video_key=video_key,
- episode_index=episode_id,
- ),
- "filename": video_key,
- }
- for video_key in video_keys
- ]
-
- response = requests.get(
- f"https://huggingface.co/datasets/{repo_id}/resolve/main/meta/episodes.jsonl", timeout=5
- )
- response.raise_for_status()
- # Split into lines and parse each line as JSON
- tasks_jsonl = [json.loads(line) for line in response.text.splitlines() if line.strip()]
-
- filtered_tasks_jsonl = [row for row in tasks_jsonl if row["episode_index"] == episode_id]
- tasks = filtered_tasks_jsonl[0]["tasks"]
-
- videos_info[0]["language_instruction"] = tasks
-
- if episodes is None:
- episodes = list(
- range(dataset.num_episodes if isinstance(dataset, LeRobotDataset) else dataset.total_episodes)
- )
-
- return render_template(
- "visualize_dataset_template.html",
- episode_id=episode_id,
- episodes=episodes,
- dataset_info=dataset_info,
- videos_info=videos_info,
- episode_data_csv_str=episode_data_csv_str,
- columns=columns,
- ignored_columns=ignored_columns,
- )
-
- app.run(host=host, port=port)
-
-
-def get_ep_csv_fname(episode_id: int):
- ep_csv_fname = f"episode_{episode_id}.csv"
- return ep_csv_fname
-
-
-def get_episode_data(dataset: LeRobotDataset | IterableNamespace, episode_index):
- """Get a csv str containing timeseries data of an episode (e.g. state and action).
- This file will be loaded by Dygraph javascript to plot data in real time."""
- columns = []
-
- selected_columns = [col for col, ft in dataset.features.items() if ft["dtype"] in ["float32", "int32"]]
- selected_columns.remove("timestamp")
-
- ignored_columns = []
- for column_name in list(selected_columns):  # iterate over a copy, since items are removed below
- shape = dataset.features[column_name]["shape"]
- shape_dim = len(shape)
- if shape_dim > 1:
- selected_columns.remove(column_name)
- ignored_columns.append(column_name)
-
- # init header of csv with state and action names
- header = ["timestamp"]
-
- for column_name in selected_columns:
- dim_state = (
- dataset.meta.shapes[column_name][0]
- if isinstance(dataset, LeRobotDataset)
- else dataset.features[column_name].shape[0]
- )
-
- if "names" in dataset.features[column_name] and dataset.features[column_name]["names"]:
- column_names = dataset.features[column_name]["names"]
- while not isinstance(column_names, list):
- column_names = list(column_names.values())[0]
- else:
- column_names = [f"{column_name}_{i}" for i in range(dim_state)]
- columns.append({"key": column_name, "value": column_names})
-
- header += column_names
-
- selected_columns.insert(0, "timestamp")
-
- if isinstance(dataset, LeRobotDataset):
- from_idx = dataset.episode_data_index["from"][episode_index]
- to_idx = dataset.episode_data_index["to"][episode_index]
- data = (
- dataset.hf_dataset.select(range(from_idx, to_idx))
- .select_columns(selected_columns)
- .with_format("pandas")
- )
- else:
- repo_id = dataset.repo_id
-
- url = f"https://huggingface.co/datasets/{repo_id}/resolve/main/" + dataset.data_path.format(
- episode_chunk=int(episode_index) // dataset.chunks_size, episode_index=episode_index
- )
- df = pd.read_parquet(url)
- data = df[selected_columns] # Select specific columns
-
- rows = np.hstack(
- (
- np.expand_dims(data["timestamp"], axis=1),
- *[np.vstack(data[col]) for col in selected_columns[1:]],
- )
- ).tolist()
-
- # Convert data to CSV string
- csv_buffer = StringIO()
- csv_writer = csv.writer(csv_buffer)
- # Write header
- csv_writer.writerow(header)
- # Write data rows
- csv_writer.writerows(rows)
- csv_string = csv_buffer.getvalue()
-
- return csv_string, columns, ignored_columns
-
-
-def get_episode_video_paths(dataset: LeRobotDataset, ep_index: int) -> list[str]:
- # get first frame of episode (hack to get video_path of the episode)
- first_frame_idx = dataset.episode_data_index["from"][ep_index].item()
- return [
- dataset.hf_dataset.select_columns(key)[first_frame_idx][key]["path"]
- for key in dataset.meta.video_keys
- ]
-
-
-def get_episode_language_instruction(dataset: LeRobotDataset, ep_index: int) -> list[str]:
- # check if the dataset has language instructions
- if "language_instruction" not in dataset.features:
- return None
-
- # get first frame index
- first_frame_idx = dataset.episode_data_index["from"][ep_index].item()
-
- language_instruction = dataset.hf_dataset[first_frame_idx]["language_instruction"]
- # TODO (michel-aractingi) hack to get the sentence, some strings in openx are badly stored
- # with the tf.tensor appearing in the string
- return language_instruction.removeprefix("tf.Tensor(b'").removesuffix("', shape=(), dtype=string)")
-
-
-def get_dataset_info(repo_id: str) -> IterableNamespace:
- response = requests.get(
- f"https://huggingface.co/datasets/{repo_id}/resolve/main/meta/info.json", timeout=5
- )
- response.raise_for_status() # Raises an HTTPError for bad responses
- dataset_info = response.json()
- dataset_info["repo_id"] = repo_id
- return IterableNamespace(dataset_info)
-
-
-def visualize_dataset_html(
- dataset: LeRobotDataset | None,
- episodes: list[int] | None = None,
- output_dir: Path | None = None,
- serve: bool = True,
- host: str = "127.0.0.1",
- port: int = 9090,
- force_override: bool = False,
-) -> Path | None:
- init_logging()
-
- template_dir = Path(__file__).resolve().parent.parent / "templates"
-
- if output_dir is None:
- # Create a temporary directory (note: tempfile.mkdtemp does not clean it up automatically)
- output_dir = tempfile.mkdtemp(prefix="lerobot_visualize_dataset_")
-
- output_dir = Path(output_dir)
- if output_dir.exists():
- if force_override:
- shutil.rmtree(output_dir)
- else:
- logging.info(f"Output directory already exists. Loading from it: '{output_dir}'")
-
- output_dir.mkdir(parents=True, exist_ok=True)
-
- static_dir = output_dir / "static"
- static_dir.mkdir(parents=True, exist_ok=True)
-
- if dataset is None:
- if serve:
- run_server(
- dataset=None,
- episodes=None,
- host=host,
- port=port,
- static_folder=static_dir,
- template_folder=template_dir,
- )
- else:
- # Create a symlink from the dataset video folder containing mp4 files to the output directory
- # so that the http server can access the mp4 files.
- if isinstance(dataset, LeRobotDataset):
- ln_videos_dir = static_dir / "videos"
- if not ln_videos_dir.exists():
- ln_videos_dir.symlink_to((dataset.root / "videos").resolve().as_posix())
-
- if serve:
- run_server(dataset, episodes, host, port, static_dir, template_dir)
-
-
-def main():
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--repo-id",
- type=str,
- default=None,
- help="Name of hugging face repositery containing a LeRobotDataset dataset (e.g. `lerobot/pusht` for https://huggingface.co/datasets/lerobot/pusht).",
- )
- parser.add_argument(
- "--root",
- type=Path,
- default=None,
- help="Root directory for a dataset stored locally (e.g. `--root data`). By default, the dataset will be loaded from hugging face cache folder, or downloaded from the hub if available.",
- )
- parser.add_argument(
- "--load-from-hf-hub",
- type=int,
- default=0,
- help="Load videos and parquet files from HF Hub rather than local system.",
- )
- parser.add_argument(
- "--episodes",
- type=int,
- nargs="*",
- default=None,
- help="Episode indices to visualize (e.g. `0 1 5 6` to load episodes of index 0, 1, 5 and 6). By default loads all episodes.",
- )
- parser.add_argument(
- "--output-dir",
- type=Path,
- default=None,
- help="Directory path to write html files and kickoff a web server. By default write them to 'outputs/visualize_dataset/REPO_ID'.",
- )
- parser.add_argument(
- "--serve",
- type=int,
- default=1,
- help="Launch web server.",
- )
- parser.add_argument(
- "--host",
- type=str,
- default="127.0.0.1",
- help="Web host used by the http server.",
- )
- parser.add_argument(
- "--port",
- type=int,
- default=9090,
- help="Web port used by the http server.",
- )
- parser.add_argument(
- "--force-override",
- type=int,
- default=0,
- help="Delete the output directory if it exists already.",
- )
-
- parser.add_argument(
- "--tolerance-s",
- type=float,
- default=1e-4,
- help=(
- "Tolerance in seconds used to ensure data timestamps respect the dataset fps value"
- "This is argument passed to the constructor of LeRobotDataset and maps to its tolerance_s constructor argument"
- "If not given, defaults to 1e-4."
- ),
- )
-
- args = parser.parse_args()
- kwargs = vars(args)
- repo_id = kwargs.pop("repo_id")
- load_from_hf_hub = kwargs.pop("load_from_hf_hub")
- root = kwargs.pop("root")
- tolerance_s = kwargs.pop("tolerance_s")
-
- dataset = None
- if repo_id:
- dataset = (
- LeRobotDataset(repo_id, root=root, tolerance_s=tolerance_s)
- if not load_from_hf_hub
- else get_dataset_info(repo_id)
- )
-
- visualize_dataset_html(dataset, **vars(args))
-
-
-if __name__ == "__main__":
- main()
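
The `get_episode_data` helper above leans on `episode_data_index` to find the frame range of a single episode before turning it into CSV for Dygraphs. That slicing pattern is handy on its own, for example to pull one episode's timestamps and actions into a pandas dataframe (a sketch that follows the deleted script's own imports and calls; module paths and column names depend on your LeRobot version and dataset):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht")
episode_index = 0

# episode_data_index maps each episode to its [from, to) frame range in the flat dataset.
from_idx = dataset.episode_data_index["from"][episode_index].item()
to_idx = dataset.episode_data_index["to"][episode_index].item()

frames = (
    dataset.hf_dataset.select(range(from_idx, to_idx))
    .select_columns(["timestamp", "action"])
    .with_format("pandas")
)
print(frames[:])  # one row per frame of episode 0
```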
diff --git a/lerobot/templates/visualize_dataset_homepage.html b/lerobot/templates/visualize_dataset_homepage.html
deleted file mode 100644
index 19613afb..00000000
--- a/lerobot/templates/visualize_dataset_homepage.html
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
-
-
-
- Interactive Video Background Page
-
-
-
-
-