diff --git a/.github/workflows/build_documentation.yml b/.github/workflows/build_documentation.yml
new file mode 100644
index 00000000..884e2e4b
--- /dev/null
+++ b/.github/workflows/build_documentation.yml
@@ -0,0 +1,23 @@
+name: Build documentation
+
+on:
+  workflow_dispatch:
+  push:
+    paths:
+      - "docs/**"
+    branches:
+      - main
+      - doc-builder*
+      - v*-release
+
+
+jobs:
+  build: # zizmor: ignore[excessive-permissions] We follow the same pattern as in Transformers
+    uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
+    with:
+      commit_sha: ${{ github.sha }}
+      package: lerobot
+      additional_args: --not_python_module
+    secrets:
+      token: ${{ secrets.HUGGINGFACE_PUSH }}
+      hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
diff --git a/.github/workflows/build_pr_documentation.yml b/.github/workflows/build_pr_documentation.yml
new file mode 100644
index 00000000..51bab10d
--- /dev/null
+++ b/.github/workflows/build_pr_documentation.yml
@@ -0,0 +1,19 @@
+name: Build PR Documentation
+
+on:
+  pull_request:
+    paths:
+      - "docs/**"
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
+jobs:
+  build: # zizmor: ignore[excessive-permissions] We follow the same pattern as in Transformers
+    uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
+    with:
+      commit_sha: ${{ github.event.pull_request.head.sha }}
+      pr_number: ${{ github.event.number }}
+      package: lerobot
+      additional_args: --not_python_module
diff --git a/.github/workflows/upload_pr_documentation.yml b/.github/workflows/upload_pr_documentation.yml
new file mode 100644
index 00000000..32665930
--- /dev/null
+++ b/.github/workflows/upload_pr_documentation.yml
@@ -0,0 +1,16 @@
+name: Upload PR Documentation
+
+on: # zizmor: ignore[dangerous-triggers] We follow the same pattern as in Transformers
+  workflow_run:
+    workflows: [ "Build PR Documentation" ]
+    types:
+      - completed
+
+jobs:
+  build: # zizmor: ignore[excessive-permissions] We follow the same pattern as in Transformers
+    uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
+    with:
+      package_name: lerobot
+    secrets:
+      hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
+      comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
diff --git a/README.md b/README.md
index 3ca20147..dd3dc8c7 100644
--- a/README.md
+++ b/README.md
@@ -23,21 +23,35 @@
- Build Your Own SO-100 Robot!
++ Build Your Own SO-101 Robot!
+
+
+ Meet the SO-100 β Just $110 per arm!
+Meet the updated SO-100, the SO-101 – Just €114 per arm!
Train it in minutes with a few simple moves on your laptop.
Then sit back and watch your creation act autonomously! 🤯
-- Get the full SO-100 tutorial here.
++ See the full SO-101 tutorial here.
-Want to take it to the next level? Make your SO-100 mobile by building LeKiwi!
+Want to take it to the next level? Make your SO-101 mobile by building LeKiwi!
Check out the LeKiwi tutorial and bring your robot to life on wheels.
@@ -51,7 +65,6 @@
---
-
🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real-world with a focus on imitation learning and reinforcement learning.
@@ -208,7 +221,7 @@ dataset attributes:
 │  ├ episode_index (int64): index of the episode for this sample
 │  ├ frame_index (int64): index of the frame for this sample in the episode ; starts at 0 for each episode
 │  ├ timestamp (float32): timestamp in the episode
- │  ├ next.done (bool): indicates the end of en episode ; True for the last frame in each episode
+ │  ├ next.done (bool): indicates the end of an episode ; True for the last frame in each episode
 │  └ index (int64): general index in the whole dataset
 ├ episode_data_index: contains 2 tensors with the start and end indices of each episode
 │  ├ from (1D int64 tensor): first frame index for each episode – shape (num episodes,) starts with 0
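
For example, `episode_data_index` lets you gather all frames of a single episode. A minimal sketch (it assumes the `LeRobotDataset` class from the examples and the public `lerobot/pusht` dataset; attribute names may differ slightly between versions):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht")

# All frames of episode k, using the from/to indices described above.
k = 0
start = dataset.episode_data_index["from"][k].item()
end = dataset.episode_data_index["to"][k].item()
episode_frames = [dataset[i] for i in range(start, end)]
print(f"episode {k} has {len(episode_frames)} frames")
```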
@@ -257,7 +270,7 @@ See `python lerobot/scripts/eval.py --help` for more instructions.
### Train your own policy
-Check out [example 3](./examples/3_train_policy.py) that illustrate how to train a model using our core library in python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from command line.
+Check out [example 3](./examples/3_train_policy.py) that illustrates how to train a model using our core library in Python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from the command line.
To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when running the training command above, enable WandB in the configuration by adding `--wandb.enable=true`.
@@ -308,7 +321,7 @@ Once you have trained a policy you may upload it to the Hugging Face hub using a
You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:
- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
-- `train_config.json`: A consolidated configuration containing all parameter userd for training. The policy configuration should match `config.json` exactly. Thisis useful for anyone who wants to evaluate your policy or for reproducibility.
+- `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.
To upload these to the hub, run the following:
```bash
diff --git a/benchmarks/video/run_video_benchmark.py b/benchmarks/video/run_video_benchmark.py
index c62578c4..9d587ee9 100644
--- a/benchmarks/video/run_video_benchmark.py
+++ b/benchmarks/video/run_video_benchmark.py
@@ -416,7 +416,7 @@ if __name__ == "__main__":
"--vcodec",
type=str,
nargs="*",
- default=["libx264", "libx265", "libsvtav1"],
+ default=["libx264", "hevc", "libsvtav1"],
help="Video codecs to be tested",
)
parser.add_argument(
@@ -446,7 +446,7 @@ if __name__ == "__main__":
# nargs="*",
# default=[0, 1],
# help="Use the fastdecode tuning option. 0 disables it. "
- # "For libx264 and libx265, only 1 is possible. "
+ # "For libx264 and libx265/hevc, only 1 is possible. "
# "For libsvtav1, 1, 2 or 3 are possible values with a higher number meaning a faster decoding optimization",
# )
parser.add_argument(
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 00000000..275fee46
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,137 @@
+
+
+# Generating the documentation
+
+To generate the documentation, you first have to build it. Several packages are necessary to build the doc;
+you can install them with the following command, run at the root of the code repository:
+
+```bash
+pip install -e ".[docs]"
+```
+
+You will also need `nodejs`. Please refer to their [installation page](https://nodejs.org/en/download).
+
+---
+**NOTE**
+
+You only need to generate the documentation to inspect it locally (if you're planning changes and want to
+check how they look before committing for instance). You don't have to `git commit` the built documentation.
+
+---
+
+## Building the documentation
+
+Once you have set up the `doc-builder` and additional packages, you can generate the documentation by
+typing the following command:
+
+```bash
+doc-builder build lerobot docs/source/ --build_dir ~/tmp/test-build
+```
+
+You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
+the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
+Markdown editor.
+
+## Previewing the documentation
+
+To preview the docs, first install the `watchdog` module with:
+
+```bash
+pip install watchdog
+```
+
+Then run the following command:
+
+```bash
+doc-builder preview lerobot docs/source/
+```
+
+The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR: a bot will add a comment with a link to the documentation built from your changes.
+
+---
+**NOTE**
+
+The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it, then call `doc-builder preview ...` again).
+
+---
+
+## Adding a new element to the navigation bar
+
+Accepted files are Markdown (.md).
+
+Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
+the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/lerobot/blob/main/docs/source/_toctree.yml) file.
+
+## Renaming section headers and moving sections
+
+It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.
+
+Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
+
+So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
+
+```
+Sections that were moved:
+
+[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
+```
+and of course, if you moved it to another file, then:
+
+```
+Sections that were moved:
+
+[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
+```
+
+Use the relative style to link to the new file so that the versioned docs continue to work.
+
+For an example of a rich moved sections set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md).
+
+### Adding a new tutorial
+
+Adding a new tutorial or section is done in two steps:
+
+- Add a new file under `./source`. This file should be in Markdown (.md) format.
+- Link that file in `./source/_toctree.yml` on the correct toc-tree.
+
+Make sure to put your new file under the proper section. If you have a doubt, feel free to ask in a Github Issue or PR.
+
+### Writing source documentation
+
+Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names
+and objects like True, None, or any strings should usually be put in `code`.
+
+#### Writing a multi-line code block
+
+Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
+
+
+````
+```
+# first line of code
+# second line
+# etc
+```
+````
+
+#### Adding an image
+
+Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to place these files in a hf.co-hosted `dataset`, like
+the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing), and to reference
+them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
+If you are making an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images
+to this dataset.
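+
+For maintainers with write access to that dataset, here is a minimal upload sketch using `huggingface_hub` (the local path and destination filename below are only examples):
+
+```python
+from huggingface_hub import HfApi
+
+# Upload a local image to the documentation-images dataset, then reference it by URL in the docs.
+api = HfApi()
+api.upload_file(
+    path_or_fileobj="my_figure.png",       # local file (example name)
+    path_in_repo="lerobot/my_figure.png",  # destination inside the dataset repo
+    repo_id="huggingface/documentation-images",
+    repo_type="dataset",
+)
+```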
diff --git a/docs/source/_toctree.yml b/docs/source/_toctree.yml
new file mode 100644
index 00000000..a0f69d0a
--- /dev/null
+++ b/docs/source/_toctree.yml
@@ -0,0 +1,12 @@
+- sections:
+ - local: index
+ title: LeRobot
+ - local: installation
+ title: Installation
+ title: Get started
+- sections:
+ - local: assemble_so101
+ title: Assemble SO-101
+ - local: getting_started_real_world_robot
+ title: Getting Started with Real-World Robots
+ title: "Tutorials"
diff --git a/docs/source/assemble_so101.mdx b/docs/source/assemble_so101.mdx
new file mode 100644
index 00000000..3150061b
--- /dev/null
+++ b/docs/source/assemble_so101.mdx
@@ -0,0 +1,346 @@
+# Assemble SO-101
+
+In the steps below we explain how to assemble our flagship robot, the SO-101.
+
+## Source the parts
+
+Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts,
+and advice if it's your first time printing or if you don't own a 3D printer.
+
+Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
+
+## Install LeRobot
+
+To install LeRobot, follow our [Installation Guide](./installation).
+
+## Configure motors
+
+To configure the motors, designate one bus servo adapter and 6 motors for your leader arm, and similarly the other bus servo adapter and 6 motors for the follower arm. It's convenient to label them and write on each motor whether it's for the follower (`F`) or the leader (`L`) and its ID from 1 to 6.
+
+You should now plug the 5V or 12V power supply into the motor bus: 5V for the STS3215 7.4V motors and 12V for the STS3215 12V motors. Note that the leader arm always uses the 7.4V motors, so watch out that you plug in the right power supply if you have both 12V and 7.4V motors, otherwise you might burn your motors! Now, connect the motor bus to your computer via USB. Note that the USB doesn't provide any power, and both the power supply and USB have to be plugged in.
+
+### Find the USB ports associated to each arm
+
+To find the port for each bus servo adapter, run this script:
+```bash
+python lerobot/scripts/find_motors_bus_port.py
+```
+##### Example outputs of script
+
+
+
+Make sure both arms are connected and run this script to launch manual calibration:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=so101 \
+ --robot.cameras='{}' \
+ --control.type=calibrate \
+ --control.arms='["main_follower"]'
+```
+
+#### Manual calibration of leader arm
+You will also need to move the leader arm to these positions sequentially:
+
+| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
+| ------------ | ------------ | ------------ | ------------ |
+| *(image)* | *(image)* | *(image)* | *(image)* |
+
+Run this script to launch manual calibration:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=so101 \
+ --robot.cameras='{}' \
+ --control.type=calibrate \
+ --control.arms='["main_leader"]'
+```
+
+Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./getting_started_real_world_robot)
diff --git a/docs/source/getting_started_real_world_robot.mdx b/docs/source/getting_started_real_world_robot.mdx
new file mode 100644
index 00000000..dbc16d8a
--- /dev/null
+++ b/docs/source/getting_started_real_world_robot.mdx
@@ -0,0 +1,370 @@
+# Getting Started with Real-World Robots
+
+This tutorial will explain how to train a neural network to autonomously control a real robot.
+
+**You'll learn:**
+1. How to record and visualize your dataset.
+2. How to train a policy using your data and prepare it for evaluation.
+3. How to evaluate your policy and visualize the results.
+
+By following these steps, you'll be able to replicate tasks like picking up a Lego block and placing it in a bin with a high success rate, as demonstrated in [this video](https://x.com/RemiCadene/status/1814680760592572934).
+
+This tutorial is specifically made for the affordable [SO-101](https://github.com/TheRobotStudio/SO-ARM100) robot, but it also contains information that can easily be adapted to other types of robots, like the [Aloha bimanual robot](https://aloha-2.github.io), by changing some configurations. The SO-101 consists of a leader arm and a follower arm, each with 6 motors. It can work with one or several cameras to record the scene, which serve as visual sensors for the robot.
+
+During the data collection phase, you will control the follower arm by moving the leader arm. This process is known as "teleoperation." This technique is used to collect robot trajectories. Afterward, you'll train a neural network to imitate these trajectories and deploy the network to enable your robot to operate autonomously.
+
+If you encounter any issues at any step of the tutorial, feel free to seek help on [Discord](https://discord.com/invite/s3KuuzsPFb) or don't hesitate to iterate with us on the tutorial by creating issues or pull requests.
+
+## Setup and Calibrate
+
+If you haven't yet set up and calibrated the SO-101, follow these steps:
+1. [Find ports and update config file](./assemble_so101#find-the-usb-ports-associated-to-each-arm)
+2. [Calibrate](./assemble_so101#calibrate)
+
+## Teleoperate
+
+Run this simple script to teleoperate your robot (it won't connect to or display the cameras):
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=so101 \
+ --robot.cameras='{}' \
+ --control.type=teleoperate
+```
+
+The teleoperate command will automatically:
+1. Identify any missing calibrations and initiate the calibration procedure.
+2. Connect the robot and start teleoperation.
+
+## Setup Cameras
+
+To connect a camera you have three options:
+1. OpenCVCamera which allows us to use any camera: usb, realsense, laptop webcam
+2. iPhone camera with MacOS
+3. Phone camera on Linux
+
+### Use OpenCVCamera
+
+The [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py) class allows you to efficiently record frames from most cameras using the [`opencv2`](https://docs.opencv.org) library. For more details on compatibility, see [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
+
+To instantiate an [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py), you need a camera index (e.g. `OpenCVCamera(camera_index=0)`). When you only have one camera like a webcam of a laptop, the camera index is usually `0` but it might differ, and the camera index might change if you reboot your computer or re-plug your camera. This behavior depends on your operating system.
+
+To find the camera indices, run the following utility script, which will save a few frames from each detected camera:
+```bash
+python lerobot/common/robot_devices/cameras/opencv.py \
+ --images-dir outputs/images_from_opencv_cameras
+```
+
+The output will look something like this if you have two cameras connected:
+```
+Mac or Windows detected. Finding available camera indices through scanning all indices from 0 to 60
+[...]
+Camera found at index 0
+Camera found at index 1
+[...]
+Connecting cameras
+OpenCVCamera(0, fps=30.0, width=1920.0, height=1080.0, color_mode=rgb)
+OpenCVCamera(1, fps=24.0, width=1920.0, height=1080.0, color_mode=rgb)
+Saving images to outputs/images_from_opencv_cameras
+Frame: 0000 Latency (ms): 39.52
+[...]
+Frame: 0046 Latency (ms): 40.07
+Images have been saved to outputs/images_from_opencv_cameras
+```
+
+Check the saved images in `outputs/images_from_opencv_cameras` to identify which camera index corresponds to which physical camera (e.g. `0` for `camera_00` or `1` for `camera_01`):
+```
+camera_00_frame_000000.png
+[...]
+camera_00_frame_000047.png
+camera_01_frame_000000.png
+[...]
+camera_01_frame_000047.png
+```
+
+Note: Some cameras may take a few seconds to warm up, and the first frame might be black or green.
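+
+If you want to double-check a specific index outside of that script, here is a minimal sketch using plain OpenCV (`pip install opencv-python`); it is only a sanity check, not part of LeRobot:
+
+```python
+import cv2
+
+camera_index = 0  # the index you want to verify
+cap = cv2.VideoCapture(camera_index)
+if not cap.isOpened():
+    raise RuntimeError(f"No camera found at index {camera_index}")
+
+# Skip a few frames so auto-exposure settles (the first frames can be black or green).
+for _ in range(10):
+    cap.read()
+
+ok, frame = cap.read()
+if ok:
+    cv2.imwrite(f"camera_{camera_index}_test.png", frame)
+    print(f"Saved a {frame.shape[1]}x{frame.shape[0]} test frame")
+cap.release()
+```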
+
+Now that you have the camera indexes, you should specify the cameras in the config. TODO(pepijn): add more info about setting camera config, rotate etc.
+
+### Use your phone
+
+
+| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
+| ------------ | ------------ | ------------ | ------------ |
+| *(image)* | *(image)* | *(image)* | *(image)* |
+
Make sure both arms are connected and run this script to launch manual calibration:
```bash
@@ -467,12 +465,12 @@ python lerobot/scripts/control_robot.py \
--control.arms='["main_follower"]'
```
-#### b. Manual calibration of leader arm
-Follow step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
+#### Manual calibration of leader arm
+You will also need to move the leader arm to these positions sequentially:
-| 1. Zero position | 2. Rotated position | 3. Rest position |
-| ------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| *(image)* | *(image)* | *(image)* |
+| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
+| ------------ | ------------ | ------------ | ------------ |
+| *(image)* | *(image)* | *(image)* | *(image)* |
Run this script to launch manual calibration:
```bash
@@ -580,7 +578,7 @@ python lerobot/scripts/train.py \
Let's explain it:
1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so100_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor sates, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
diff --git a/examples/11_use_lekiwi.md b/examples/11_use_lekiwi.md
index 1be7cbc4..4c15dcd1 100644
--- a/examples/11_use_lekiwi.md
+++ b/examples/11_use_lekiwi.md
@@ -134,7 +134,7 @@ First we will assemble the two SO100 arms. One to attach to the mobile base and
## SO100 Arms
### Configure motors
-The instructions for configuring the motors can be found [Here](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#c-configure-the-motors) in step C of the SO100 tutorial. Besides the ID's for the arm motors we also need to set the motor ID's for the mobile base. These needs to be in a specific order to work. Below an image of the motor ID's and motor mounting positions for the mobile base. Note that we only use one Motor Control board on LeKiwi. This means the motor ID's for the wheels are 7, 8 and 9.
+The instructions for configuring the motors can be found [here](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#c-configure-the-motors) in step C of the SO100 tutorial. Besides the IDs for the arm motors, we also need to set the motor IDs for the mobile base. These need to be in a specific order to work. Below is an image of the motor IDs and motor mounting positions for the mobile base. Note that we only use one Motor Control board on LeKiwi. This means the motor IDs for the wheels are 7, 8 and 9.
@@ -567,7 +567,7 @@ python lerobot/scripts/train.py \
Let's explain it:
1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/lekiwi_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor sates, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
diff --git a/examples/11_use_moss.md b/examples/11_use_moss.md
index 1b6f23b9..b2e18573 100644
--- a/examples/11_use_moss.md
+++ b/examples/11_use_moss.md
@@ -44,7 +44,7 @@ cd ~/lerobot && pip install -e ".[feetech]"
## Configure the motors
-Follow steps 1 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the use of our scripts below.
+Follow step 1 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the use of our scripts below.
**Find USB ports associated to your arms**
To find the correct ports for each arm, run the utility script twice:
@@ -164,7 +164,7 @@ Try to avoid rotating the motor while doing so to keep position 2048 set during
## Assemble the arms
-Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). The first arm should take a bit more than 1 hour to assemble, but once you get use to it, you can do it under 1 hour for the second arm.
+Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). The first arm should take a bit more than 1 hour to assemble, but once you get used to it, you can do it under 1 hour for the second arm.
## Calibrate
@@ -301,7 +301,7 @@ python lerobot/scripts/train.py \
Let's explain it:
1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/moss_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor sates, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
diff --git a/examples/12_use_so101.md b/examples/12_use_so101.md
new file mode 100644
index 00000000..0886d988
--- /dev/null
+++ b/examples/12_use_so101.md
@@ -0,0 +1,709 @@
+# Assemble and use SO-101
+
+In the steps below we explain how to assemble and use our flagship robot, the SO-101, with LeRobot 🤗.
+
+## Source the parts
+
+Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts,
+and advice if it's your first time printing or if you don't own a 3D printer.
+
+Before assembling, you will first need to configure your motors. To this end, we provide a nice script, so let's first install LeRobot. After configuration, we will also guide you through assembly.
+
+## Install LeRobot
+
+> [!TIP]
+> We use the Command Prompt (cmd) quite a lot. If you are not comfortable using the cmd or want to brush up on using the command line, you can have a look here: [Command line crash course](https://developer.mozilla.org/en-US/docs/Learn_web_development/Getting_started/Environment_setup/Command_line)
+
+Download our source code:
+```bash
+git clone https://github.com/huggingface/lerobot.git
+cd lerobot
+```
+
+Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/miniconda/install/#quick-command-line-install):
+```bash
+conda create -y -n lerobot python=3.10
+```
+Now restart the shell by running:
+
+##### Windows:
+```bash
+source ~/.bashrc
+```
+
+##### Mac:
+```bash
+source ~/.bash_profile
+```
+
+##### zshell:
+```bash
+source ~/.zshrc
+```
+
+Then activate your conda environment (you have to do this each time you open a shell to use lerobot):
+```bash
+conda activate lerobot
+```
+
+When using `miniconda`, install `ffmpeg` in your environment:
+```bash
+conda install ffmpeg -c conda-forge
+```
+
+> [!NOTE]
+> This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
+> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
+> ```bash
+> conda install ffmpeg=7.1.1 -c conda-forge
+> ```
+> - _[On Linux only]_ Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
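+
+If you prefer to check this from Python instead of reading the `ffmpeg -encoders` output by hand, here is a small sketch (it only shells out to the `ffmpeg` binary found on your `PATH`):
+
+```python
+import shutil
+import subprocess
+
+ffmpeg = shutil.which("ffmpeg")
+if ffmpeg is None:
+    raise RuntimeError("ffmpeg not found on PATH")
+
+# List the encoders of this ffmpeg build and look for libsvtav1.
+encoders = subprocess.run([ffmpeg, "-encoders"], capture_output=True, text=True).stdout
+print("libsvtav1 available:", "libsvtav1" in encoders)
+```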
+
+Install 🤗 LeRobot:
+```bash
+cd lerobot && pip install ".[feetech]"
+```
+
+> [!NOTE]
+> If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and `ffmpeg libs`). On Linux, run: `sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
+
+
+## Configure motors
+
+To configure the motors, designate one bus servo adapter and 6 motors for your leader arm, and similarly the other bus servo adapter and 6 motors for the follower arm. It's convenient to label them and write on each motor whether it's for the follower (`F`) or the leader (`L`) and its ID from 1 to 6.
+
+You should now plug the 5V or 12V power supply into the motor bus: 5V for the STS3215 7.4V motors and 12V for the STS3215 12V motors. Note that the leader arm always uses the 7.4V motors, so watch out that you plug in the right power supply if you have both 12V and 7.4V motors, otherwise you might burn your motors! Now, connect the motor bus to your computer via USB. Note that the USB doesn't provide any power, and both the power supply and USB have to be plugged in.
+
+### Find the USB ports associated to each arm
+
+To find the port for each bus servo adapter, run this script:
+```bash
+python lerobot/scripts/find_motors_bus_port.py
+```
+#### Example outputs of script
+
+##### Mac:
+Example output leader arm's port: `/dev/tty.usbmodem575E0031751`
+
+```bash
+Finding all available ports for the MotorBus.
+['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
+Remove the usb cable from your MotorsBus and press Enter when done.
+
+[...Disconnect leader arm and press Enter...]
+
+The port of this MotorsBus is /dev/tty.usbmodem575E0031751
+Reconnect the usb cable.
+```
+
+Example output follower arm port: `/dev/tty.usbmodem575E0032081`
+
+```
+Finding all available ports for the MotorBus.
+['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
+Remove the usb cable from your MotorsBus and press Enter when done.
+
+[...Disconnect follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/tty.usbmodem575E0032081
+Reconnect the usb cable.
+```
+
+##### Linux:
+On Linux, you might need to give access to the USB ports by running:
+```bash
+sudo chmod 666 /dev/ttyACM0
+sudo chmod 666 /dev/ttyACM1
+```
+
+Example output leader arm port: `/dev/ttyACM0`
+
+```bash
+Finding all available ports for the MotorBus.
+['/dev/ttyACM0', '/dev/ttyACM1']
+Remove the usb cable from your MotorsBus and press Enter when done.
+
+[...Disconnect leader arm and press Enter...]
+
+The port of this MotorsBus is /dev/ttyACM0
+Reconnect the usb cable.
+```
+
+Example output follower arm port: `/dev/ttyACM1`
+
+```
+Finding all available ports for the MotorBus.
+['/dev/ttyACM0', '/dev/ttyACM1']
+Remove the usb cable from your MotorsBus and press Enter when done.
+
+[...Disconnect follower arm and press Enter...]
+
+The port of this MotorsBus is /dev/ttyACM1
+Reconnect the usb cable.
+```
+
+#### Update config file
+
+Now that you have your ports, update the **port** default values of [`SO101RobotConfig`](https://github.com/huggingface/lerobot/blob/main/lerobot/common/robot_devices/robots/configs.py).
+There you will find a class called `So101RobotConfig` (registered as `so101`) where you can update the `port` values with your actual motor ports:
+```python
+@RobotConfig.register_subclass("so101")
+@dataclass
+class So101RobotConfig(ManipulatorRobotConfig):
+ calibration_dir: str = ".cache/calibration/so101"
+ # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
+ # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
+ # the number of motors in your follower arms.
+ max_relative_target: int | None = None
+
+ leader_arms: dict[str, MotorsBusConfig] = field(
+ default_factory=lambda: {
+ "main": FeetechMotorsBusConfig(
+ port="/dev/tty.usbmodem58760431091", <-- UPDATE HERE
+ motors={
+ # name: (index, model)
+ "shoulder_pan": [1, "sts3215"],
+ "shoulder_lift": [2, "sts3215"],
+ "elbow_flex": [3, "sts3215"],
+ "wrist_flex": [4, "sts3215"],
+ "wrist_roll": [5, "sts3215"],
+ "gripper": [6, "sts3215"],
+ },
+ ),
+ }
+ )
+
+ follower_arms: dict[str, MotorsBusConfig] = field(
+ default_factory=lambda: {
+ "main": FeetechMotorsBusConfig(
+ port="/dev/tty.usbmodem585A0076891", <-- UPDATE HERE
+ motors={
+ # name: (index, model)
+ "shoulder_pan": [1, "sts3215"],
+ "shoulder_lift": [2, "sts3215"],
+ "elbow_flex": [3, "sts3215"],
+ "wrist_flex": [4, "sts3215"],
+ "wrist_roll": [5, "sts3215"],
+ "gripper": [6, "sts3215"],
+ },
+ ),
+ }
+ )
+```
+
+Here is a video of the process:
+
+
+
+### Set motor IDs
+
+Now we need to set the motor ID for each motor. Plug your motor into only one of the two ports of the motor bus and run this script to set its ID to 1. Replace the text after `--port` with the port of your control board.
+```bash
+python lerobot/scripts/configure_motor.py \
+ --port /dev/tty.usbmodem58760432961 \
+ --brand feetech \
+ --model sts3215 \
+ --baudrate 1000000 \
+ --ID 1
+```
+
+Then unplug your motor, plug in the second motor, and set its ID to 2.
+```bash
+python lerobot/scripts/configure_motor.py \
+ --port /dev/tty.usbmodem58760432961 \
+ --brand feetech \
+ --model sts3215 \
+ --baudrate 1000000 \
+ --ID 2
+```
+
+Redo this process for all your motors until ID 6. Do the same for the 6 motors of the leader arm, but make sure to change the power supply if you use motors with different voltage.
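+
+Optionally, you can loop over the IDs instead of re-typing the command. A small convenience sketch (it reuses exactly the command shown above; adjust the port to yours):
+
+```python
+import subprocess
+
+PORT = "/dev/tty.usbmodem58760432961"  # your motor bus port
+
+for motor_id in range(1, 7):
+    input(f"Plug in ONLY the motor that should get ID {motor_id}, then press Enter...")
+    subprocess.run(
+        [
+            "python", "lerobot/scripts/configure_motor.py",
+            "--port", PORT,
+            "--brand", "feetech",
+            "--model", "sts3215",
+            "--baudrate", "1000000",
+            "--ID", str(motor_id),
+        ],
+        check=True,
+    )
+```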
+
+Here is a video of the process:
+
+
+
+## Step-by-Step Assembly Instructions
+
+The follower arm uses 6x STS3215 motors with 1/345 gearing. The leader, however, uses three differently geared motor types so that it can both sustain its own weight and be moved without requiring much force. Which motor is needed for which joint is shown in the table below.
+
+| Leader-Arm Axis | Motor | Gear Ratio |
+|-----------------|:-------:|:----------:|
+| Base / Shoulder Yaw | 1 | 1 / 191 |
+| Shoulder Pitch | 2 | 1 / 345 |
+| Elbow | 3 | 1 / 191 |
+| Wrist Roll | 4 | 1 / 147 |
+| Wrist Pitch | 5 | 1 / 147 |
+| Gripper | 6 | 1 / 147 |
+
+
+### Clean Parts
+Remove all support material from the 3D-printed parts.
+
+### Joint 1
+
+- Place the first motor into the base.
+- Fasten the motor with 4 M2x6mm screws (the smallest screws), two from the top and two from the bottom.
+- Slide over the first motor holder and fasten it using two M2x6mm screws (one on each side).
+- Install both motor horns, securing the top horn with a M3x6mm screw.
+- Attach the shoulder part.
+- Tighten the shoulder part with 4 M3x6mm screws on top and 4 M3x6mm screws on the bottom
+- Add the shoulder motor holder.
+
+
+
+### Joint 2
+
+- Slide the second motor in from the top.
+- Fasten the second motor with 4 M2x6mm screws.
+- Attach both motor horns to motor 2, again use the M3x6mm horn screw.
+- Attach the upper arm with 4 M3x6mm screws on each side.
+
+
+
+### Joint 3
+
+- Insert motor 3 and fasten using 4 M2x6mm screws
+- Attach both motor horns to motor 3 and secure one again with a M3x6mm horn screw.
+- Connect the forearm to motor 3 using 4 M3x6mm screws on each side.
+
+
+
+### Joint 4
+
+- Slide over motor holder 4.
+- Slide in motor 4.
+- Fasten motor 4 with 4 M2x6mm screws and attach its motor horns, use a M3x6mm horn screw.
+
+
+
+### Joint 5
+
+- Insert motor 5 into the wrist holder and secure it with 2 M2x6mm front screws.
+- Install only one motor horn on the wrist motor and secure it with a M3x6mm horn screw.
+- Secure the wrist to motor 4 using 4 M3x6mm screws on both sides.
+
+
+
+### Gripper / Handle
+
+#### Follower:
+
+- Attach the gripper to motor 5, attach it to the motor horn on the wrist using 4 M3x6mm screws.
+- Insert the gripper motor and secure it with 2 M2x6mm screws on each side.
+- Attach the motor horns and again use a M3x6mm horn screw.
+- Install the gripper claw and secure it with 4 M3x6mm screws on both sides.
+
+
+
+#### Leader:
+
+- Mount the leader holder onto the wrist and secure it with 4 M3x6mm screws.
+- Attach the handle to motor 5 using 1 M2x6mm screw.
+- Insert the gripper motor, secure it with 2 M2x6mm screws on each side, attach a motor horn using a M3x6mm horn screw.
+- Attach the follower trigger with 4 M3x6mm screws.
+
+
+
+##### Wiring
+
+- Attach the motor controller on the back.
+- Then insert all wires; use the wire guides everywhere to make sure the wires don't unplug themselves and stay in place.
+
+
+
+## Calibrate
+
+Next, you'll need to calibrate your SO-101 robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
+The calibration process is very important because it allows a neural network trained on one SO-101 robot to work on another.
+
+#### Manual calibration of follower arm
+
+You will need to move the follower arm to these positions sequentially. Note that the rotated position is on the right side of the robot and that you have to open the gripper fully.
+
+| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
+| ------------ | ------------ | ------------ | ------------ |
+| *(image)* | *(image)* | *(image)* | *(image)* |
+
+Make sure both arms are connected and run this script to launch manual calibration:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=so101 \
+ --robot.cameras='{}' \
+ --control.type=calibrate \
+ --control.arms='["main_follower"]'
+```
+
+#### Manual calibration of leader arm
+You will also need to move the leader arm to these positions sequentially:
+
+| 1. Middle position | 2. Zero position | 3. Rotated position | 4. Rest position |
+| ------------ | ------------ | ------------ | ------------ |
+| *(image)* | *(image)* | *(image)* | *(image)* |
+
+Run this script to launch manual calibration:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=so101 \
+ --robot.cameras='{}' \
+ --control.type=calibrate \
+ --control.arms='["main_leader"]'
+```
+## Control your robot
+
+Congrats 🎉, your robot is all set to learn a task on its own. Next we will explain how to train a neural network to autonomously control a real robot.
+
+**You'll learn:**
+1. How to record and visualize your dataset.
+2. How to train a policy using your data and prepare it for evaluation.
+3. How to evaluate your policy and visualize the results.
+
+By following these steps, you'll be able to replicate tasks like picking up a Lego block and placing it in a bin with a high success rate, as demonstrated in [this video](https://x.com/RemiCadene/status/1814680760592572934).
+
+This tutorial is specifically made for the affordable [SO-101](https://github.com/TheRobotStudio/SO-ARM100) robot, but it also contains information that can easily be adapted to other types of robots, like the [Aloha bimanual robot](https://aloha-2.github.io), by changing some configurations. The SO-101 consists of a leader arm and a follower arm, each with 6 motors. It can work with one or several cameras to record the scene, which serve as visual sensors for the robot.
+
+During the data collection phase, you will control the follower arm by moving the leader arm. This process is known as "teleoperation." This technique is used to collect robot trajectories. Afterward, you'll train a neural network to imitate these trajectories and deploy the network to enable your robot to operate autonomously.
+
+If you encounter any issues at any step of the tutorial, feel free to seek help on [Discord](https://discord.com/invite/s3KuuzsPFb) or don't hesitate to iterate with us on the tutorial by creating issues or pull requests.
+
+## Teleoperate
+
+Run this simple script to teleoperate your robot (it won't connect to or display the cameras):
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=so101 \
+ --robot.cameras='{}' \
+ --control.type=teleoperate
+```
+
+The teleoperate command will automatically:
+1. Identify any missing calibrations and initiate the calibration procedure.
+2. Connect the robot and start teleoperation.
+
+## Setup Cameras
+
+To connect a camera you have three options:
+1. OpenCVCamera which allows us to use any camera: usb, realsense, laptop webcam
+2. iPhone camera with MacOS
+3. Phone camera on Linux
+
+### Use OpenCVCamera
+
+The [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py) class allows you to efficiently record frames from most cameras using the [`opencv2`](https://docs.opencv.org) library. For more details on compatibility, see [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
+
+To instantiate an [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py), you need a camera index (e.g. `OpenCVCamera(camera_index=0)`). When you only have one camera like a webcam of a laptop, the camera index is usually `0` but it might differ, and the camera index might change if you reboot your computer or re-plug your camera. This behavior depends on your operating system.
+
+To find the camera indices, run the following utility script, which will save a few frames from each detected camera:
+```bash
+python lerobot/common/robot_devices/cameras/opencv.py \
+ --images-dir outputs/images_from_opencv_cameras
+```
+
+The output will look something like this if you have two cameras connected:
+```
+Mac or Windows detected. Finding available camera indices through scanning all indices from 0 to 60
+[...]
+Camera found at index 0
+Camera found at index 1
+[...]
+Connecting cameras
+OpenCVCamera(0, fps=30.0, width=1920.0, height=1080.0, color_mode=rgb)
+OpenCVCamera(1, fps=24.0, width=1920.0, height=1080.0, color_mode=rgb)
+Saving images to outputs/images_from_opencv_cameras
+Frame: 0000 Latency (ms): 39.52
+[...]
+Frame: 0046 Latency (ms): 40.07
+Images have been saved to outputs/images_from_opencv_cameras
+```
+
+Check the saved images in `outputs/images_from_opencv_cameras` to identify which camera index corresponds to which physical camera (e.g. `0` for `camera_00` or `1` for `camera_01`):
+```
+camera_00_frame_000000.png
+[...]
+camera_00_frame_000047.png
+camera_01_frame_000000.png
+[...]
+camera_01_frame_000047.png
+```
+
+Note: Some cameras may take a few seconds to warm up, and the first frame might be black or green.
+
+Now that you have the camera indexes, you should change them in the config. You can also change the fps, width or height of the camera.
+
+The camera config is defined per robot; it can be found in [`RobotConfig`](https://github.com/huggingface/lerobot/blob/main/lerobot/common/robot_devices/robots/configs.py) and looks like this:
+```python
+cameras: dict[str, CameraConfig] = field(
+ default_factory=lambda: {
+ "wrist": OpenCVCameraConfig(
+ camera_index=0, <-- UPDATE HERE
+ fps=30,
+ width=640,
+ height=480,
+ ),
+ "base": OpenCVCameraConfig(
+ camera_index=1, <-- UPDATE HERE
+ fps=30,
+ width=640,
+ height=480,
+ ),
+ }
+ )
+```
+
+### Use your phone
+#### Mac:
+
+To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
+- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
+- Sign in both devices with the same Apple ID.
+- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
+
+For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
+
+Your iPhone should be detected automatically when running the camera setup script in the next section.
+
+#### Linux:
+
+If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera
+
+1. *Install `v4l2loopback-dkms` and `v4l-utils`*. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
+```bash
+sudo apt install v4l2loopback-dkms v4l-utils
+```
+2. *Install [DroidCam](https://droidcam.app) on your phone*. This app is available for both iOS and Android.
+3. *Install [OBS Studio](https://obsproject.com)*. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
+```bash
+flatpak install flathub com.obsproject.Studio
+```
+4. *Install the DroidCam OBS plugin*. This plugin integrates DroidCam with OBS Studio. Install it with:
+```bash
+flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
+```
+5. *Start OBS Studio*. Launch with:
+```bash
+flatpak run com.obsproject.Studio
+```
+6. *Add your phone as a source*. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
+7. *Adjust resolution settings*. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
+8. *Start virtual camera*. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
+9. *Verify the virtual camera setup*. Use `v4l2-ctl` to list the devices:
+```bash
+v4l2-ctl --list-devices
+```
+You should see an entry like:
+```
+VirtualCam (platform:v4l2loopback-000):
+/dev/video1
+```
+10. *Check the camera resolution*. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
+```bash
+v4l2-ctl -d /dev/video1 --get-fmt-video
+```
+You should see an entry like:
+```
+>>> Format Video Capture:
+>>> Width/Height : 640/480
+>>> Pixel Format : 'YUYV' (YUYV 4:2:2)
+```
+
+Troubleshooting: If the resolution is not correct you will have to delete the Virtual Camera port and try again as it cannot be changed.
+
+If everything is set up correctly, you can proceed with the rest of the tutorial.
+
+### Add wrist camera
+If you have an additional camera you can add a wrist camera to the SO-101. There are already many premade wrist camera holders that you can find in the SO-101 repo: [Wrist cameras](https://github.com/TheRobotStudio/SO-ARM100#wrist-cameras)
+
+## Teleoperate with cameras
+
+We can now teleoperate again, this time while also visualizing the cameras and joint positions with `rerun`.
+
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=so101 \
+ --control.type=teleoperate \
+ --control.display_data=true
+```
+
+## Record a dataset
+
+Once you're familiar with teleoperation, you can record your first dataset with the SO-101.
+
+We use the Hugging Face hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
+
+Add your token to the cli by running this command:
+```bash
+huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
+```
+
+Then store your Hugging Face repository name in a variable:
+```bash
+HF_USER=$(huggingface-cli whoami | head -n 1)
+echo $HF_USER
+```
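+
+If you prefer doing this from Python, the same two steps can be done with `huggingface_hub` (a sketch; replace the token placeholder with your own write-access token):
+
+```python
+from huggingface_hub import login, whoami
+
+login(token="hf_xxx", add_to_git_credential=True)  # your write-access token
+print(whoami()["name"])  # your Hugging Face username, i.e. the value of HF_USER
+```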
+
+Now you can record a dataset. To record 2 episodes and upload your dataset to the hub, execute this command:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=so101 \
+ --control.type=record \
+ --control.fps=30 \
+ --control.single_task="Grasp a lego block and put it in the bin." \
+ --control.repo_id=${HF_USER}/so101_test \
+ --control.tags='["so101","tutorial"]' \
+ --control.warmup_time_s=5 \
+ --control.episode_time_s=30 \
+ --control.reset_time_s=30 \
+ --control.num_episodes=2 \
+ --control.display_data=true \
+ --control.push_to_hub=true
+```
+
+You will see a lot of lines appearing like this one:
+```
+INFO 2024-08-10 15:02:58 ol_robot.py:219 dt:33.34 (30.0hz) dtRlead: 5.06 (197.5hz) dtWfoll: 0.25 (3963.7hz) dtRfoll: 6.22 (160.7hz) dtRlaptop: 32.57 (30.7hz) dtRphone: 33.84 (29.5hz)
+```
+It contains:
+- `2024-08-10 15:02:58` which is the date and time of the call to the print function,
+- `ol_robot.py:219` which is the end of the file name and the line number where the print function is called (`lerobot/scripts/control_robot.py` line `219`).
+- `dt:33.34 (30.0hz)` which is the "delta time" or the number of milliseconds spent between the previous call to `robot.teleop_step(record_data=True)` and the current one, associated with the frequency (33.34 ms equals 30.0 Hz) ; note that we use `--fps 30` so we expect 30.0 Hz ; when a step takes more time, the line appears in yellow.
+- `dtRlead: 5.06 (197.5hz)` which is the delta time of reading the present position of the leader arm.
+- `dtWfoll: 0.25 (3963.7hz)` which is the delta time of writing the goal position on the follower arm ; writing is asynchronous so it takes less time than reading.
+- `dtRfoll: 6.22 (160.7hz)` which is the delta time of reading the present position on the follower arm.
+- `dtRlaptop:32.57 (30.7hz) ` which is the delta time of capturing an image from the laptop camera in the thread running asynchronously.
+- `dtRphone:33.84 (29.5hz)` which is the delta time of capturing an image from the phone camera in the thread running asynchronously.
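+
+The frequencies in parentheses are simply derived from the delta times:
+
+```python
+dt_ms = 33.34  # delta time from the log line above, in milliseconds
+print(f"{1000 / dt_ms:.1f} Hz")  # ~30.0 Hz, which matches --control.fps=30
+```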
+
+#### Dataset upload
+Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}` (e.g. `~/.cache/huggingface/lerobot/cadene/so101_test`). At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test) that you can obtain by running:
+```bash
+echo https://huggingface.co/datasets/${HF_USER}/so101_test
+```
+Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
+
+You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
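+
+You can also inspect your recording programmatically. A minimal sketch with `LeRobotDataset` (it assumes the repo id used above; attribute names follow the library's examples and may vary slightly between versions):
+
+```python
+from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
+
+dataset = LeRobotDataset("cadene/so101_test")  # replace with your ${HF_USER}/so101_test
+
+print(f"{dataset.num_episodes} episodes, {len(dataset)} frames at {dataset.fps} fps")
+frame = dataset[0]  # a dict with action, observation.state, camera images, timestamps, ...
+print(list(frame.keys()))
+```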
+
+#### Record function
+
+The `record` function provides a suite of tools for capturing and managing data during robot operation:
+1. Set the flow of data recording using command line arguments:
+ - `--control.warmup_time_s=10` defines the number of seconds before starting data collection. It allows the robot devices to warmup and synchronize (10 seconds by default).
+ - `--control.episode_time_s=60` defines the number of seconds for data recording for each episode (60 seconds by default).
+ - `--control.reset_time_s=60` defines the number of seconds for resetting the environment after each episode (60 seconds by default).
+ - `--control.num_episodes=50` defines the number of episodes to record (50 by default).
+2. Control the flow during data recording using keyboard keys:
+   - Press the right arrow `->` at any time during episode recording to stop early and go to resetting. Press it again during resetting to stop early and go to the next episode recording.
+ - Press left arrow `<-` at any time during episode recording or resetting to early stop, cancel the current episode, and re-record it.
+ - Press escape `ESC` at any time during episode recording to end the session early and go straight to video encoding and dataset uploading.
+3. Checkpoints are automatically created during recording, so if any issue occurs, you can resume recording by re-running the same command with `--control.resume=true`. You will need to manually delete the dataset directory if you want to start recording from scratch.
+
+#### Tips for gathering data
+
+Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible in the cameras. A good rule of thumb is that you should be able to do the task yourself by only looking at the camera images.
+
+In the following sections, you'll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
+
+Avoid adding too much variation too quickly, as it may hinder your results.
+
+#### Troubleshooting:
+- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
+
+## Visualize a dataset
+
+If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
+```bash
+echo ${HF_USER}/so101_test
+```
+
+If you didn't push it to the hub (i.e. you used `--control.push_to_hub=false`), you can visualize your dataset locally with the following command (it opens a window in the browser at `http://127.0.0.1:9090` with the visualization tool):
+```bash
+python lerobot/scripts/visualize_dataset_html.py \
+ --repo-id ${HF_USER}/so101_test \
+ --local-files-only 1
+```
+
+This will launch a local web server where you can browse the recorded episodes.
+
+
+