<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="media/lerobot-logo-thumbnail.png">
    <source media="(prefers-color-scheme: light)" srcset="media/lerobot-logo-thumbnail.png">
    <img alt="LeRobot, Hugging Face Robotics Library" src="media/lerobot-logo-thumbnail.png" style="max-width: 100%;">
  </picture>
  <br/>
  <br/>
</p>

<div align="center">
[](https://github.com/huggingface/lerobot/actions/workflows/nightly-tests.yml?query=branch%3Amain)
[](https://codecov.io/gh/huggingface/lerobot)
[](https://www.python.org/downloads/)
[](https://github.com/huggingface/lerobot/blob/main/LICENSE)
[](https://pypi.org/project/lerobot/)
[](https://pypi.org/project/lerobot/)
[](https://github.com/huggingface/lerobot/tree/main/examples)
[](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md)
[](https://discord.gg/s3KuuzsPFb)

</div>
<h2 align="center">
  <p><a href="https://huggingface.co/docs/lerobot/hope_jr">
  Build Your Own HopeJR Robot!</a></p>
</h2>
<div align="center">
  <img
    src="media/hope_jr/hopejr.png?raw=true"
    alt="HopeJR robot"
    title="HopeJR robot"
    style="width: 60%;"
  />

  <p><strong>Meet HopeJR – A humanoid robot arm and hand for dexterous manipulation!</strong></p>
  <p>Control it with exoskeletons and gloves for precise hand movements.</p>
  <p>Perfect for advanced manipulation tasks! 🤖</p>

  <p><a href="https://huggingface.co/docs/lerobot/hope_jr">
  See the full HopeJR tutorial here.</a></p>
</div>

<br/>
<h2 align="center">
  <p><a href="https://huggingface.co/docs/lerobot/so101">
  Build Your Own SO-101 Robot!</a></p>
</h2>
<div align="center">
  <div style="display: flex; gap: 1rem; justify-content: center; align-items: center;" >
    <img
      src="media/so101/so101.webp?raw=true"
      alt="SO-101 follower arm"
      title="SO-101 follower arm"
      style="width: 40%;"
    />
    <img
      src="media/so101/so101-leader.webp?raw=true"
      alt="SO-101 leader arm"
      title="SO-101 leader arm"
      style="width: 40%;"
    />
  </div>

  <p><strong>Meet the SO-101, the updated SO100 – just €114 per arm!</strong></p>
  <p>Train it in minutes with a few simple moves on your laptop.</p>
  <p>Then sit back and watch your creation act autonomously! 🤯</p>

  <p><a href="https://huggingface.co/docs/lerobot/so101">
  See the full SO-101 tutorial here.</a></p>

  <p>Want to take it to the next level? Make your SO-101 mobile by building LeKiwi!</p>
  <p>Check out the <a href="https://huggingface.co/docs/lerobot/lekiwi">LeKiwi tutorial</a> and bring your robot to life on wheels.</p>

  <img src="media/lekiwi/kiwi.webp?raw=true" alt="LeKiwi mobile robot" title="LeKiwi mobile robot" width="50%">
</div>

<br/>
<h3 align="center">
  <p>LeRobot: State-of-the-art AI for real-world robotics</p>
</h3>

---
🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.

🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real world, with a focus on imitation learning and reinforcement learning.

🤗 LeRobot already provides a set of pretrained models, datasets with human-collected demonstrations, and simulation environments, so you can get started without assembling a robot. In the coming weeks, the plan is to add more and more support for real-world robotics on the most affordable and capable robots out there.

🤗 LeRobot hosts pretrained models and datasets on this Hugging Face community page: [huggingface.co/lerobot](https://huggingface.co/lerobot)

#### Examples of pretrained models on simulation environments
<table>
  <tr>
    <td><img src="media/gym/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
    <td><img src="media/gym/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
    <td><img src="media/gym/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
  </tr>
  <tr>
    <td align="center">ACT policy on ALOHA env</td>
    <td align="center">TDMPC policy on SimXArm env</td>
    <td align="center">Diffusion policy on PushT env</td>
  </tr>
</table>
### Acknowledgment

- The LeRobot team 🤗 for building SmolVLA [Paper](https://arxiv.org/abs/2506.01844), [Blog](https://huggingface.co/blog/smolvla).
- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing the ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
- Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing the Diffusion policy, PushT environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
- Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing the TDMPC policy, SimXArm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
- Thanks to Antonio Loquercio and Ashish Kumar for their early support.
- Thanks to [Seungjae (Jay) Lee](https://sjlee.cc/), [Mahi Shafiullah](https://mahis.life/) and colleagues for open sourcing the [VQ-BeT](https://sjlee.cc/vq-bet/) policy and helping us adapt the codebase to our repository. The policy is adapted from the [VQ-BeT repo](https://github.com/jayLEE0301/vq_bet_official).
## Installation

Download our source code:
```bash
git clone https://github.com/huggingface/lerobot.git
cd lerobot
```

Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):
```bash
conda create -y -n lerobot python=3.10
conda activate lerobot
```

When using `miniconda`, install `ffmpeg` in your environment:
```bash
conda install ffmpeg -c conda-forge
```

> **NOTE:** This usually installs `ffmpeg 7.X` for your platform, compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
> ```bash
> conda install ffmpeg=7.1.1 -c conda-forge
> ```
> - _[On Linux only]_ Install the [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure your install picks up the corresponding ffmpeg binary (check with `which ffmpeg`).
Install 🤗 LeRobot:
```bash
pip install -e .
```

> **NOTE:** If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and the ffmpeg development libraries). On Linux, run:
> `sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
- [aloha](https://github.com/huggingface/gym-aloha)
- [xarm](https://github.com/huggingface/gym-xarm)
- [pusht](https://github.com/huggingface/gym-pusht)

For instance, to install 🤗 LeRobot with aloha and pusht, use:
```bash
pip install -e ".[aloha, pusht]"
```

To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
```bash
wandb login
```

(note: you will also need to enable WandB in the configuration. See below.)
### Visualize datasets

Check out [example 1](./examples/1_load_lerobot_dataset.py), which illustrates how to use our dataset class to automatically download data from the Hugging Face hub.

You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:
```bash
python -m lerobot.scripts.visualize_dataset \
    --repo-id lerobot/pusht \
    --episode-index 0
```

or from a dataset in a local folder with the `--root` option and the `--local-files-only` flag (in the following case, the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`):
```bash
python -m lerobot.scripts.visualize_dataset \
    --repo-id lerobot/pusht \
    --root ./my_local_data_dir \
    --local-files-only 1 \
    --episode-index 0
```
It will open `rerun.io` and display the camera streams, robot states and actions, like this:

https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-fd46b787-b532-47e2-bb6f-fd536a55a7ed.mov?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240505%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240505T172924Z&X-Amz-Expires=300&X-Amz-Signature=d680b26c532eeaf80740f08af3320d22ad0b8a4e4da1bcc4f33142c15b509eda&X-Amz-SignedHeaders=host&actor_id=24889239&key_id=0&repo_id=748713144

Our script can also visualize datasets stored on a remote server. See `python -m lerobot.scripts.visualize_dataset --help` for more instructions.
### The `LeRobotDataset` format

A dataset in `LeRobotDataset` format is very simple to use. It can be loaded from a repository on the Hugging Face hub or from a local folder with e.g. `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`, and it can be indexed into like any Hugging Face or PyTorch dataset. For instance, `dataset[0]` will retrieve a single temporal frame from the dataset containing observation(s) and an action as PyTorch tensors ready to be fed to a model.

A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a list of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}`, one can retrieve, for a given index, 4 frames: 3 "previous" frames 1 second, 0.5 seconds, and 0.2 seconds before the indexed frame, and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](examples/1_load_lerobot_dataset.py) for more details on `delta_timestamps`.
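Since the dataset is synchronized to a fixed `fps`, each relative time maps to an integer frame offset. A minimal sketch of that arithmetic (the values and variable names here are hypothetical, not the lerobot implementation):

```python
# Sketch: how delta_timestamps relate to frame offsets for a dataset
# synchronized at a fixed fps (hypothetical values, not the lerobot API).
fps = 10  # frames per second the dataset is recorded/synchronized to

delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}

# Each relative time (in seconds) corresponds to an integer frame offset.
frame_offsets = {
    key: [round(dt * fps) for dt in deltas]
    for key, deltas in delta_timestamps.items()
}
print(frame_offsets)  # {'observation.image': [-10, -5, -2, 0]}

# For an indexed frame i, the retrieved frames are i + offset for each offset.
i = 100
retrieved = [i + off for off in frame_offsets["observation.image"]]
print(retrieved)  # [90, 95, 98, 100]
```

In practice the library matches on timestamps (with a tolerance) rather than raw indices, but the intuition is the same: `-1` second at 10 fps is 10 frames back.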
Under the hood, the `LeRobotDataset` format makes use of several ways to serialize data, which can be useful to understand if you plan to work more closely with this format. We tried to make a flexible yet simple dataset format that covers most types of features and specificities present in reinforcement learning and robotics, in simulation and in the real world, with a focus on cameras and robot states, but easily extended to other types of sensory inputs as long as they can be represented by a tensor.

Here are the important details and internal structure organization of a typical `LeRobotDataset` instantiated with `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`. The exact features will change from dataset to dataset, but not the main aspects:
```
dataset attributes:
  ├ hf_dataset: a Hugging Face dataset (backed by Arrow/parquet). Typical features example:
  │  ├ observation.images.cam_high (VideoFrame):
  │  │   VideoFrame = {'path': path to a mp4 video, 'timestamp' (float32): timestamp in the video}
  │  ├ observation.state (list of float32): positions of the arm joints (for instance)
  │  ... (more observations)
  │  ├ action (list of float32): goal positions of the arm joints (for instance)
  │  ├ episode_index (int64): index of the episode for this sample
  │  ├ frame_index (int64): index of the frame for this sample in the episode ; starts at 0 for each episode
  │  ├ timestamp (float32): timestamp in the episode
  │  ├ next.done (bool): indicates the end of an episode ; True for the last frame in each episode
  │  └ index (int64): general index in the whole dataset
  ├ episode_data_index: contains 2 tensors with the start and end indices of each episode
  │  ├ from (1D int64 tensor): first frame index for each episode — shape (num episodes,) starts with 0
  │  └ to (1D int64 tensor): last frame index for each episode — shape (num episodes,)
  ├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
  │  ├ observation.images.cam_high: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
  │  ...
  ├ info: a dictionary of metadata on the dataset
  │  ├ codebase_version (str): this is to keep track of the codebase version the dataset was created with
  │  ├ fps (float): frame per second the dataset is recorded/synchronized to
  │  ├ video (bool): indicates if frames are encoded in mp4 video files to save space or stored as png files
  │  └ encoding (dict): if video, this documents the main options that were used with ffmpeg to encode the videos
  ├ videos_dir (Path): where the mp4 videos or png images are stored/accessed
  └ camera_keys (list of string): the keys to access camera features in the item returned by the dataset (e.g. `["observation.images.cam_high", ...]`)
```
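To make the `episode_data_index` part concrete, here is a small sketch (plain Python lists rather than the int64 tensors described above, with hypothetical data) of how the `from`/`to` boundaries line up with a flat `episode_index` column:

```python
# Sketch: reconstructing episode boundaries from a per-frame episode_index
# column. Hypothetical data, not the lerobot implementation.
episode_index = [0, 0, 0, 1, 1, 2, 2, 2, 2]  # episode id for each frame

episode_data_index = {"from": [], "to": []}
for i, ep in enumerate(episode_index):
    # A frame starts an episode if it's the first frame overall or its
    # episode id differs from the previous frame's.
    if i == 0 or ep != episode_index[i - 1]:
        episode_data_index["from"].append(i)
    # A frame ends an episode if it's the last frame overall or its
    # episode id differs from the next frame's.
    if i == len(episode_index) - 1 or ep != episode_index[i + 1]:
        episode_data_index["to"].append(i)

print(episode_data_index)  # {'from': [0, 3, 5], 'to': [2, 4, 8]}
```

With these boundaries, slicing out episode `k` is just `range(from[k], to[k] + 1)`.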
A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:
- hf_dataset: stored using the Hugging Face datasets library's serialization to parquet
- videos: stored in mp4 format to save space
- metadata: stored in plain json/jsonl files

Datasets can be uploaded to and downloaded from the Hugging Face hub seamlessly. To work on a local dataset, you can specify its location with the `root` argument if it's not in the default `~/.cache/huggingface/lerobot` location.
### Evaluate a pretrained policy

Check out [example 2](./examples/2_evaluate_pretrained_policy.py), which illustrates how to download a pretrained policy from the Hugging Face hub and run an evaluation on its corresponding environment.

We also provide a more capable script to parallelize the evaluation over multiple environments during the same rollout. Here is an example with a pretrained model hosted on [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht):
```bash
python -m lerobot.scripts.eval \
    --policy.path=lerobot/diffusion_pusht \
    --env.type=pusht \
    --eval.batch_size=10 \
    --eval.n_episodes=10 \
    --policy.use_amp=false \
    --policy.device=cuda
```

Note: After training your own policy, you can re-evaluate the checkpoints with:

```bash
python -m lerobot.scripts.eval --policy.path={OUTPUT_DIR}/checkpoints/last/pretrained_model
```

See `python -m lerobot.scripts.eval --help` for more instructions.
### Train your own policy

Check out [example 3](./examples/3_train_policy.py), which illustrates how to train a model using our core library in Python, and [example 4](./examples/4_train_policy_with_script.md), which shows how to use our training script from the command line.

To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when running the training command, enable WandB in the configuration by adding `--wandb.enable=true`.

A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](./examples/4_train_policy_with_script.md#typical-logs-and-metrics) for an explanation of some commonly used metrics in logs.

Note: For efficiency, during training every checkpoint is evaluated on a low number of episodes. You may use `--eval.n_episodes=500` to evaluate on more episodes than the default. Or, after training, you may want to re-evaluate your best checkpoints on more episodes or change the evaluation settings. See `python -m lerobot.scripts.eval --help` for more instructions.
#### Reproduce state-of-the-art (SOTA)

We provide some pretrained policies on our [hub page](https://huggingface.co/lerobot) that can achieve state-of-the-art performance.
You can reproduce their training by loading the config from their run. Simply running:
```bash
python -m lerobot.scripts.train --config_path=lerobot/diffusion_pusht
```
reproduces SOTA results for Diffusion Policy on the PushT task.
## Contribute

If you would like to contribute to 🤗 LeRobot, please check out our [contribution guide](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md).
<!-- ### Add a new dataset

To add a dataset to the hub, you need to login using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

Then point to your raw dataset folder (e.g. `data/aloha_static_pingpong_test_raw`), and push your dataset to the hub with:
```bash
python lerobot/scripts/push_dataset_to_hub.py \
    --raw-dir data/aloha_static_pingpong_test_raw \
    --out-dir data \
    --repo-id lerobot/aloha_static_pingpong_test \
    --raw-format aloha_hdf5
```

See `python lerobot/scripts/push_dataset_to_hub.py --help` for more instructions.

If your dataset format is not supported, implement your own in `lerobot/datasets/push_dataset_to_hub/${raw_format}_format.py` by copying examples like [pusht_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/datasets/push_dataset_to_hub/pusht_zarr_format.py), [umi_zarr](https://github.com/huggingface/lerobot/blob/main/lerobot/datasets/push_dataset_to_hub/umi_zarr_format.py), [aloha_hdf5](https://github.com/huggingface/lerobot/blob/main/lerobot/datasets/push_dataset_to_hub/aloha_hdf5_format.py), or [xarm_pkl](https://github.com/huggingface/lerobot/blob/main/lerobot/datasets/push_dataset_to_hub/xarm_pkl_format.py). -->
### Add a pretrained policy

Once you have trained a policy, you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).

You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:
- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
- `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.
To upload these to the hub, run the following:
```bash
huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
```

See [eval.py](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py) for an example of how other people may use your policy.
### Improve your code with profiling

An example of a code snippet to profile the evaluation of a policy:
```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity


def trace_handler(prof):
    # Export a Chrome trace for each profiled cycle (viewable in chrome://tracing)
    prof.export_chrome_trace(f"tmp/trace_schedule_{prof.step_num}.json")


with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(
        wait=2,
        warmup=2,
        active=3,
    ),
    on_trace_ready=trace_handler,
) as prof:
    with record_function("eval_policy"):
        for i in range(num_episodes):
            prof.step()
            # insert code to profile, potentially the whole body of the eval_policy function
```
## Citation

If you want, you can cite this work with:
```bibtex
@misc{cadene2024lerobot,
    author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascale, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
    title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
    howpublished = "\url{https://github.com/huggingface/lerobot}",
    year = {2024}
}
```

Additionally, if you are using any of the particular policy architectures, pretrained models, or datasets, it is recommended to cite the original authors of the work as they appear below:

- [SmolVLA](https://arxiv.org/abs/2506.01844)
```bibtex
@article{shukor2025smolvla,
    title={SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics},
    author={Shukor, Mustafa and Aubakirova, Dana and Capuano, Francesco and Kooijmans, Pepijn and Palma, Steven and Zouitine, Adil and Aractingi, Michel and Pascal, Caroline and Russi, Martino and Marafioti, Andres and Alibert, Simon and Cord, Matthieu and Wolf, Thomas and Cadene, Remi},
    journal={arXiv preprint arXiv:2506.01844},
    year={2025}
}
```
- [Diffusion Policy](https://diffusion-policy.cs.columbia.edu)
```bibtex
@article{chi2024diffusionpolicy,
    author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
    title = {Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
    journal = {The International Journal of Robotics Research},
    year = {2024}
}
```

- [ACT or ALOHA](https://tonyzhaozh.github.io/aloha)
```bibtex
@article{zhao2023learning,
    title={Learning fine-grained bimanual manipulation with low-cost hardware},
    author={Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
    journal={arXiv preprint arXiv:2304.13705},
    year={2023}
}
```

- [TDMPC](https://www.nicklashansen.com/td-mpc/)
```bibtex
@inproceedings{Hansen2022tdmpc,
    title={Temporal Difference Learning for Model Predictive Control},
    author={Nicklas Hansen and Xiaolong Wang and Hao Su},
    booktitle={ICML},
    year={2022}
}
```

- [VQ-BeT](https://sjlee.cc/vq-bet/)
```bibtex
@article{lee2024behavior,
    title={Behavior generation with latent actions},
    author={Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
    journal={arXiv preprint arXiv:2403.03181},
    year={2024}
}
```

- [HIL-SERL](https://hil-serl.github.io/)
```bibtex
@article{luo2024hilserl,
    title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
    author={Jianlan Luo and Charles Xu and Jeffrey Wu and Sergey Levine},
    year={2024},
    eprint={2410.21845},
    archivePrefix={arXiv},
    primaryClass={cs.RO}
}
```

## Star History

[](https://star-history.com/#huggingface/lerobot&Timeline)